Compare commits

11 Commits

Author SHA1 Message Date
pikachuim
55d3827dee add(interface): driver&mamage 2025-08-14 22:16:19 +08:00
pikachuim
1fbc9427df add(interface): driver&mamage 2025-08-14 22:16:01 +08:00
pikachuim
bb3d139a47 add(interface): driver&mamage 2025-08-14 21:59:44 +08:00
pikachuim
d227ab85d6 add(trunk): base interface 2025-08-14 21:44:34 +08:00
pikachuim
5342ae96d0 add(trunk): base interface 2025-08-14 21:39:00 +08:00
pikachuim
273e15a050 add(trunk): base interface 2025-08-14 21:30:18 +08:00
pikachuim
13aad2c2fa add(trunk): base interface 2025-08-14 19:56:43 +08:00
j2rong4cn
368dc65a6e feat: Implement plugin architecture with gRPC support
- Added driver initialization for gRPC plugins in internal/bootstrap/driver.go.
- Introduced configuration structures and protobuf definitions for driver plugins in proto/driver/config.proto and proto/driver/driver.proto.
- Implemented gRPC server and client interfaces for driver plugins in shared/driver/grpc.go.
- Created common response handling utilities in server/common/common.go and server/common/resp.go.
- Developed plugin registration endpoint in server/handles/plugin.go.
- Added test cases for plugin functionality in shared/driver/plugin_test.go.
- Defined plugin reattachment configuration model in shared/model/plugin.go.
2025-08-13 19:04:38 +08:00
j2rong4cn
8b4b6ba970 feat(config): enhance configuration management and add CORS support
feat(server): implement server initialization with context and graceful shutdown
feat(utils): add utility functions for file and JSON operations
refactor(conf): restructure configuration types and improve default settings
2025-08-13 10:03:22 +08:00
j2rong4cn
4d28e838ce feat(cmd): initialize command structure and configuration management 2025-08-12 22:15:25 +08:00
j2rong4cn
3930d4789a add(trunk): next branch 2025-08-12 21:20:33 +08:00
778 changed files with 5883 additions and 15858 deletions


@@ -1,56 +0,0 @@
<!--
Provide a general summary of your changes in the Title above.
The PR title must start with one of `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, or `chore(): `. For example: `feat(component): add new feature`.
If it spans multiple components, use the main component as the prefix, enumerate the components in the title, and describe the details in the body.
-->
<!--
在上方标题中提供您更改的总体摘要。
PR 标题需以 `feat(): `, `docs(): `, `fix(): `, `style(): `, `refactor(): `, `chore(): ` 其中之一开头,例如:`feat(component): 新增功能`
如果跨多个组件,请使用主要组件作为前缀,并在标题中枚举、描述中说明。
-->
## Description / 描述
<!-- Describe your changes in detail -->
<!-- 详细描述您的更改 -->
## Motivation and Context / 背景
<!-- Why is this change required? What problem does it solve? -->
<!-- 为什么需要此更改?它解决了什么问题? -->
<!-- If it fixes an open issue, please link to the issue here. -->
<!-- 如果修复了一个打开的issue请在此处链接到该issue -->
Closes #XXXX
<!-- or -->
<!-- 或者 -->
Relates to #XXXX
## How Has This Been Tested? / 测试
<!-- Please describe in detail how you tested your changes. -->
<!-- 请详细描述您如何测试更改 -->
## Checklist / 检查清单
<!-- Go over all the following points, and put an `x` in all the boxes that apply. -->
<!-- 检查以下所有要点,并在所有适用的框中打`x` -->
<!-- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->
<!-- 如果您对其中任何一项不确定,请不要犹豫提问。我们会帮助您! -->
- [ ] I have read the [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) document.
我已阅读 [CONTRIBUTING](https://github.com/OpenListTeam/OpenList/blob/main/CONTRIBUTING.md) 文档。
- [ ] I have formatted my code with `go fmt` or [prettier](https://prettier.io/).
我已使用 `go fmt` 或 [prettier](https://prettier.io/) 格式化提交的代码。
- [ ] I have added appropriate labels to this PR (or mentioned needed labels in the description if lacking permissions).
我已为此 PR 添加了适当的标签(如无权限或需要的标签不存在,请在描述中说明,管理员将后续处理)。
- [ ] I have requested review from relevant code authors using the "Request review" feature when applicable.
我已在适当情况下使用"Request review"功能请求相关代码作者进行审查。
- [ ] I have updated the repository accordingly (if needed).
我已相应更新了相关仓库(若适用)。
- [ ] [OpenList-Frontend](https://github.com/OpenListTeam/OpenList-Frontend) #XXXX
- [ ] [OpenList-Docs](https://github.com/OpenListTeam/OpenList-Docs) #XXXX


@@ -1,38 +0,0 @@
name: Sync to Gitee
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  sync:
    runs-on: ubuntu-latest
    name: Sync GitHub to Gitee
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.GITEE_SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan gitee.com >> ~/.ssh/known_hosts
      - name: Create single commit and push
        run: |
          git config user.name "GitHub Actions"
          git config user.email "actions@github.com"
          # Create a new branch
          git checkout --orphan new-main
          git add .
          git commit -m "Sync from GitHub: $(date)"
          # Add Gitee remote and force push
          git remote add gitee ${{ vars.GITEE_REPO_URL }}
          git push --force gitee new-main:main
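The last step above collapses all history by committing on an orphan branch, so the mirror always receives exactly one commit. A throwaway-repo sketch of that trick:

```shell
# Demonstrate the orphan-branch single-commit trick in a temporary repo.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second"
# The orphan branch starts with no parent commits.
git checkout -q --orphan new-main
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "Sync from GitHub: $(date)"
git rev-list --count HEAD
```

The final command reports a history of length one, regardless of how many commits the source branch had.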


@@ -1,77 +0,0 @@
# Contributing
## Setup your machine
`OpenList` is written in [Go](https://golang.org/) and [SolidJS](https://www.solidjs.com/).
Prerequisites:
- [git](https://git-scm.com)
- [Go 1.24+](https://golang.org/doc/install)
- [gcc](https://gcc.gnu.org/)
- [nodejs](https://nodejs.org/)
## Cloning a fork
Fork and clone `OpenList` and `OpenList-Frontend` anywhere:
```shell
$ git clone https://github.com/<your-username>/OpenList.git
$ git clone --recurse-submodules https://github.com/<your-username>/OpenList-Frontend.git
```
## Creating a branch
Create a new branch from the `main` branch, with an appropriate name.
```shell
$ git checkout -b <branch-name>
```
## Preview your change
### backend
```shell
$ go run main.go
```
### frontend
```shell
$ pnpm dev
```
## Add a new driver
Copy the `drivers/template` folder, rename it, and follow the comments in it.
## Create a commit
Commit messages should be well formatted and standardized. For commit and PR titles, follow [Conventional Commits](https://www.conventionalcommits.org).
https://github.com/OpenListTeam/OpenList/issues/376
It's suggested to sign your commits. See: [How to sign commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
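Following the Conventional Commits format linked above, a subject line looks like this (throwaway repo; the `driver` component name is hypothetical):

```shell
# Create a temporary repo and make an empty commit with a
# Conventional Commits style subject line (component name is illustrative).
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty \
  -m "feat(driver): support template-based drivers"
git log -1 --pretty=%s
```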
## Submit a pull request
Please make sure your code has been formatted with `go fmt` or [prettier](https://prettier.io/) before submitting.
Push your branch to your `openlist` fork and open a pull request against the `main` branch.
## Merge your pull request
Please wait for a maintainer to review and merge your pull request. At least one approving review from a reviewer with write access is required; you can also request a review from maintainers.
## Delete your branch
(Optional) After your pull request is merged, you can delete your branch.
---
Thank you for your contribution! Let's make OpenList better together!

buf.gen.yaml (new file)

@@ -0,0 +1,11 @@
version: v1
plugins:
  - plugin: buf.build/protocolbuffers/go:v1.36.7
    out: .
    opt:
      - paths=source_relative
  - plugin: buf.build/grpc/go:v1.5.1
    out: .
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
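With `paths=source_relative`, buf writes each generated Go file next to its `.proto` source, which requires every proto file to declare a `go_package`. A hypothetical header for `proto/driver/driver.proto` (the module path is an assumption, not taken from the diff):

```protobuf
// Hypothetical file header; the go_package value is an assumed module path.
syntax = "proto3";

package driver;

option go_package = "github.com/OpenListTeam/OpenList/v5/proto/driver";
```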

buf.yaml (new file)

@@ -0,0 +1 @@
version: v1


@@ -1,51 +1,42 @@
package cmd
import (
"os"
"path/filepath"
"strconv"
"context"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap/data"
"github.com/OpenListTeam/OpenList/v4/internal/db"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
log "github.com/sirupsen/logrus"
"github.com/OpenListTeam/OpenList/v5/cmd/flags"
"github.com/OpenListTeam/OpenList/v5/internal/bootstrap"
"github.com/sirupsen/logrus"
)
func Init() {
func Init(ctx context.Context) {
if flags.Dev {
flags.Debug = true
}
initLogrus()
bootstrap.InitConfig()
bootstrap.Log()
bootstrap.InitDB()
data.InitData()
bootstrap.InitStreamLimit()
bootstrap.InitIndex()
bootstrap.InitUpgradePatch()
bootstrap.InitDriverPlugins()
}
func Release() {
db.Close()
}
var pid = -1
var pidFile string
func initDaemon() {
ex, err := os.Executable()
if err != nil {
log.Fatal(err)
}
exPath := filepath.Dir(ex)
_ = os.MkdirAll(filepath.Join(exPath, "daemon"), 0700)
pidFile = filepath.Join(exPath, "daemon/pid")
if utils.Exists(pidFile) {
bytes, err := os.ReadFile(pidFile)
if err != nil {
log.Fatal("failed to read pid file", err)
}
id, err := strconv.Atoi(string(bytes))
if err != nil {
log.Fatal("failed to parse pid data", err)
}
pid = id
func initLog(l *logrus.Logger) {
if flags.Debug {
l.SetLevel(logrus.DebugLevel)
l.SetReportCaller(true)
} else {
l.SetLevel(logrus.InfoLevel)
l.SetReportCaller(false)
}
}
func initLogrus() {
formatter := logrus.TextFormatter{
ForceColors: true,
EnvironmentOverrideColors: true,
TimestampFormat: "2006-01-02 15:04:05",
FullTimestamp: true,
}
logrus.SetFormatter(&formatter)
initLog(logrus.StandardLogger())
}


@@ -1,10 +1,40 @@
package flags
import (
"os"
"path/filepath"
"github.com/sirupsen/logrus"
)
var (
DataDir string
ConfigFile string
Debug bool
NoPrefix bool
Dev bool
ForceBinDir bool
LogStd bool
pwd string
)
// Program working directory
func PWD() string {
if pwd != "" {
return pwd
}
if ForceBinDir {
ex, err := os.Executable()
if err != nil {
logrus.Fatal(err)
}
pwd = filepath.Dir(ex)
return pwd
}
d, err := os.Getwd()
if err != nil {
logrus.Fatal(err)
}
pwd = d
return d
}
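The `PWD` helper above resolves the working directory once and caches it; note that after the first call the `ForceBinDir` choice is effectively frozen. A stdlib-only sketch of the same caching pattern (function and variable names are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

var pwd string

// pwdDir mirrors the flags.PWD shape: resolve once, then return the cached
// value. forceBinDir selects the binary's directory instead of the CWD, but
// only on the first call, because later calls hit the cache.
func pwdDir(forceBinDir bool) string {
	if pwd != "" {
		return pwd
	}
	if forceBinDir {
		ex, err := os.Executable()
		if err != nil {
			panic(err)
		}
		pwd = filepath.Dir(ex)
		return pwd
	}
	d, err := os.Getwd()
	if err != nil {
		panic(err)
	}
	pwd = d
	return d
}

func main() {
	// Second call ignores the flag: the cached value wins.
	fmt.Println(pwdDir(false) == pwdDir(true))
}
```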


@@ -4,10 +4,7 @@ import (
"fmt"
"os"
"github.com/OpenListTeam/OpenList/v4/cmd/flags"
_ "github.com/OpenListTeam/OpenList/v4/drivers"
_ "github.com/OpenListTeam/OpenList/v4/internal/archive"
_ "github.com/OpenListTeam/OpenList/v4/internal/offline_download"
"github.com/OpenListTeam/OpenList/v5/cmd/flags"
"github.com/spf13/cobra"
)
@@ -27,10 +24,10 @@ func Execute() {
}
func init() {
RootCmd.PersistentFlags().StringVar(&flags.DataDir, "data", "data", "data folder")
RootCmd.PersistentFlags().StringVarP(&flags.ConfigFile, "config", "c", "data/config.json", "config file")
RootCmd.PersistentFlags().BoolVar(&flags.Debug, "debug", false, "start with debug mode")
RootCmd.PersistentFlags().BoolVar(&flags.NoPrefix, "no-prefix", false, "disable env prefix")
RootCmd.PersistentFlags().BoolVar(&flags.Dev, "dev", false, "start with dev mode")
RootCmd.PersistentFlags().BoolVar(&flags.ForceBinDir, "force-bin-dir", false, "Force to use the directory where the binary file is located as data directory")
RootCmd.PersistentFlags().BoolVar(&flags.LogStd, "log-std", false, "Force to log to std")
RootCmd.PersistentFlags().BoolVarP(&flags.ForceBinDir, "force-bin-dir", "f", false, "force to use the directory where the binary file is located as data directory")
RootCmd.PersistentFlags().BoolVar(&flags.LogStd, "log-std", false, "force to log to std")
}


@@ -13,22 +13,14 @@ import (
"syscall"
"time"
"github.com/OpenListTeam/OpenList/v4/cmd/flags"
"github.com/OpenListTeam/OpenList/v4/internal/bootstrap"
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/fs"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server"
"github.com/OpenListTeam/OpenList/v4/server/middlewares"
"github.com/OpenListTeam/sftpd-openlist"
ftpserver "github.com/fclairamb/ftpserverlib"
"github.com/OpenListTeam/OpenList/v5/cmd/flags"
"github.com/OpenListTeam/OpenList/v5/internal/conf"
"github.com/OpenListTeam/OpenList/v5/server"
"github.com/gin-gonic/gin"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"golang.org/x/net/http2"
"golang.org/x/net/http2/h2c"
"github.com/quic-go/quic-go/http3"
)
// ServerCmd represents the server command
@@ -37,248 +29,127 @@ var ServerCmd = &cobra.Command{
Short: "Start the server at the specified address",
Long: `Start the server at the specified address
the address is defined in config file`,
Run: func(cmd *cobra.Command, args []string) {
Init()
if conf.Conf.DelayedStart != 0 {
utils.Log.Infof("delayed start for %d seconds", conf.Conf.DelayedStart)
time.Sleep(time.Duration(conf.Conf.DelayedStart) * time.Second)
}
bootstrap.InitOfflineDownloadTools()
bootstrap.LoadStorages()
bootstrap.InitTaskManager()
if !flags.Debug && !flags.Dev {
Run: func(_ *cobra.Command, args []string) {
serverCtx, serverCancel := context.WithCancel(context.Background())
defer serverCancel()
Init(serverCtx)
if !flags.Debug {
gin.SetMode(gin.ReleaseMode)
}
r := gin.New()
// gin log
if conf.Conf.Log.Filter.Enable {
r.Use(middlewares.FilteredLogger())
} else {
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out))
}
r.Use(gin.RecoveryWithWriter(log.StandardLogger().Out))
server.Init(r)
var httpHandler http.Handler = r
if conf.Conf.Scheme.EnableH2c {
httpHandler = h2c.NewHandler(r, &http2.Server{})
}
var httpSrv, httpsSrv, unixSrv *http.Server
var quicSrv *http3.Server
if conf.Conf.Scheme.HttpPort != -1 {
if conf.Conf.Scheme.HttpPort > 0 {
httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort)
fmt.Printf("start HTTP server @ %s\n", httpBase)
utils.Log.Infof("start HTTP server @ %s", httpBase)
log.Infoln("start HTTP server", "@", httpBase)
httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler}
go func() {
err := httpSrv.ListenAndServe()
if err != nil && !errors.Is(err, http.ErrServerClosed) {
utils.Log.Fatalf("failed to start http: %s", err.Error())
log.Errorln("start HTTP server", ":", err)
serverCancel()
}
}()
}
if conf.Conf.Scheme.HttpsPort != -1 {
if conf.Conf.Scheme.HttpsPort > 0 {
httpsBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpsPort)
fmt.Printf("start HTTPS server @ %s\n", httpsBase)
utils.Log.Infof("start HTTPS server @ %s", httpsBase)
log.Infoln("start HTTPS server", "@", httpsBase)
httpsSrv = &http.Server{Addr: httpsBase, Handler: r}
go func() {
err := httpsSrv.ListenAndServeTLS(conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
if err != nil && !errors.Is(err, http.ErrServerClosed) {
utils.Log.Fatalf("failed to start https: %s", err.Error())
log.Errorln("start HTTPS server", ":", err)
serverCancel()
}
}()
if conf.Conf.Scheme.EnableH3 {
fmt.Printf("start HTTP3 (quic) server @ %s\n", httpsBase)
utils.Log.Infof("start HTTP3 (quic) server @ %s", httpsBase)
r.Use(func(c *gin.Context) {
if c.Request.TLS != nil {
port := conf.Conf.Scheme.HttpsPort
c.Header("Alt-Svc", fmt.Sprintf("h3=\":%d\"; ma=86400", port))
}
c.Next()
})
quicSrv = &http3.Server{Addr: httpsBase, Handler: r}
go func() {
err := quicSrv.ListenAndServeTLS(conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
if err != nil && !errors.Is(err, http.ErrServerClosed) {
utils.Log.Fatalf("failed to start http3 (quic): %s", err.Error())
}
}()
}
}
if conf.Conf.Scheme.UnixFile != "" {
fmt.Printf("start unix server @ %s\n", conf.Conf.Scheme.UnixFile)
utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile)
log.Infoln("start Unix server", "@", conf.Conf.Scheme.UnixFile)
unixSrv = &http.Server{Handler: httpHandler}
go func() {
listener, err := net.Listen("unix", conf.Conf.Scheme.UnixFile)
if err != nil {
utils.Log.Fatalf("failed to listen unix: %+v", err)
log.Errorln("start Unix server", ":", err)
serverCancel()
return
}
// set socket file permission
mode, err := strconv.ParseUint(conf.Conf.Scheme.UnixFilePerm, 8, 32)
if err != nil {
utils.Log.Errorf("failed to parse socket file permission: %+v", err)
log.Errorln("parse unix_file_perm", ":", err)
} else {
err = os.Chmod(conf.Conf.Scheme.UnixFile, os.FileMode(mode))
if err != nil {
utils.Log.Errorf("failed to chmod socket file: %+v", err)
log.Errorln("chmod socket file", ":", err)
}
}
err = unixSrv.Serve(listener)
if err != nil && !errors.Is(err, http.ErrServerClosed) {
utils.Log.Fatalf("failed to start unix: %s", err.Error())
log.Errorln("start Unix server", ":", err)
serverCancel()
}
}()
}
if conf.Conf.S3.Port != -1 && conf.Conf.S3.Enable {
s3r := gin.New()
s3r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
server.InitS3(s3r)
s3Base := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.S3.Port)
fmt.Printf("start S3 server @ %s\n", s3Base)
utils.Log.Infof("start S3 server @ %s", s3Base)
go func() {
var err error
if conf.Conf.S3.SSL {
httpsSrv = &http.Server{Addr: s3Base, Handler: s3r}
err = httpsSrv.ListenAndServeTLS(conf.Conf.Scheme.CertFile, conf.Conf.Scheme.KeyFile)
}
if !conf.Conf.S3.SSL {
httpSrv = &http.Server{Addr: s3Base, Handler: s3r}
err = httpSrv.ListenAndServe()
}
if err != nil && !errors.Is(err, http.ErrServerClosed) {
utils.Log.Fatalf("failed to start s3 server: %s", err.Error())
}
}()
}
var ftpDriver *server.FtpMainDriver
var ftpServer *ftpserver.FtpServer
if conf.Conf.FTP.Listen != "" && conf.Conf.FTP.Enable {
var err error
ftpDriver, err = server.NewMainDriver()
if err != nil {
utils.Log.Fatalf("failed to start ftp driver: %s", err.Error())
} else {
fmt.Printf("start ftp server on %s\n", conf.Conf.FTP.Listen)
utils.Log.Infof("start ftp server on %s", conf.Conf.FTP.Listen)
go func() {
ftpServer = ftpserver.NewFtpServer(ftpDriver)
err = ftpServer.ListenAndServe()
if err != nil {
utils.Log.Fatalf("problem ftp server listening: %s", err.Error())
}
}()
}
}
var sftpDriver *server.SftpDriver
var sftpServer *sftpd.SftpServer
if conf.Conf.SFTP.Listen != "" && conf.Conf.SFTP.Enable {
var err error
sftpDriver, err = server.NewSftpDriver()
if err != nil {
utils.Log.Fatalf("failed to start sftp driver: %s", err.Error())
} else {
fmt.Printf("start sftp server on %s", conf.Conf.SFTP.Listen)
utils.Log.Infof("start sftp server on %s", conf.Conf.SFTP.Listen)
go func() {
sftpServer = sftpd.NewSftpServer(sftpDriver)
err = sftpServer.RunServer()
if err != nil {
utils.Log.Fatalf("problem sftp server listening: %s", err.Error())
}
}()
}
}
// Wait for interrupt signal to gracefully shutdown the server with
// a timeout of 1 second.
quit := make(chan os.Signal, 1)
// kill (no param) sends syscall.SIGTERM by default
// kill -2 sends syscall.SIGINT
// kill -9 sends syscall.SIGKILL, which can't be caught, so there's no need to add it
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit
utils.Log.Println("Shutdown server...")
fs.ArchiveContentUploadTaskManager.RemoveAll()
select {
case <-quit:
case <-serverCtx.Done():
}
log.Println("shutdown server...")
Release()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
quitCtx, quitCancel := context.WithTimeout(context.Background(), time.Second)
defer quitCancel()
var wg sync.WaitGroup
if conf.Conf.Scheme.HttpPort != -1 {
if httpSrv != nil {
wg.Add(1)
go func() {
defer wg.Done()
if err := httpSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("HTTP server shutdown err: ", err)
if err := httpSrv.Shutdown(quitCtx); err != nil {
log.Errorln("shutdown HTTP server", ":", err)
}
}()
}
if conf.Conf.Scheme.HttpsPort != -1 {
if httpsSrv != nil {
wg.Add(1)
go func() {
defer wg.Done()
if err := httpsSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("HTTPS server shutdown err: ", err)
}
}()
if conf.Conf.Scheme.EnableH3 {
wg.Add(1)
go func() {
defer wg.Done()
if err := quicSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("HTTP3 (quic) server shutdown err: ", err)
if err := httpsSrv.Shutdown(quitCtx); err != nil {
log.Errorln("shutdown HTTPS server", ":", err)
}
}()
}
}
if conf.Conf.Scheme.UnixFile != "" {
if unixSrv != nil {
wg.Add(1)
go func() {
defer wg.Done()
if err := unixSrv.Shutdown(ctx); err != nil {
utils.Log.Fatal("Unix server shutdown err: ", err)
}
}()
}
if conf.Conf.FTP.Listen != "" && conf.Conf.FTP.Enable && ftpServer != nil && ftpDriver != nil {
wg.Add(1)
go func() {
defer wg.Done()
ftpDriver.Stop()
if err := ftpServer.Stop(); err != nil {
utils.Log.Fatal("FTP server shutdown err: ", err)
}
}()
}
if conf.Conf.SFTP.Listen != "" && conf.Conf.SFTP.Enable && sftpServer != nil && sftpDriver != nil {
wg.Add(1)
go func() {
defer wg.Done()
if err := sftpServer.Close(); err != nil {
utils.Log.Fatal("SFTP server shutdown err: ", err)
if err := unixSrv.Shutdown(quitCtx); err != nil {
log.Errorln("shutdown Unix server", ":", err)
}
}()
}
wg.Wait()
utils.Log.Println("Server exit")
log.Println("server exit")
},
}
func init() {
RootCmd.AddCommand(ServerCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// serverCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// serverCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
// OutOpenListInit exposes a function for starting the server externally
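The server command now ties every listener to one cancellable context: a fatal listener error or an OS signal cancels it, and each live server is shut down with a short grace period. A stdlib-only sketch of that lifecycle (ports, timings, and the cancel trigger are illustrative; the real command waits on SIGINT/SIGTERM):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net"
	"net/http"
	"time"
)

// runAndStop serves on an ephemeral port until the server context is
// cancelled, then shuts down with a one-second grace period.
func runAndStop() string {
	serverCtx, serverCancel := context.WithCancel(context.Background())
	defer serverCancel()

	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return err.Error()
	}
	srv := &http.Server{Handler: http.NotFoundHandler()}
	go func() {
		if err := srv.Serve(ln); err != nil && !errors.Is(err, http.ErrServerClosed) {
			serverCancel() // a fatal serve error also triggers the shutdown path
		}
	}()

	// Stand-in for the signal/ctx select: cancel shortly after startup.
	time.AfterFunc(50*time.Millisecond, serverCancel)
	<-serverCtx.Done()

	quitCtx, quitCancel := context.WithTimeout(context.Background(), time.Second)
	defer quitCancel()
	if err := srv.Shutdown(quitCtx); err != nil {
		return err.Error()
	}
	return "server exit"
}

func main() { fmt.Println(runAndStop()) }
```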


@@ -1,60 +0,0 @@
package _115

import (
	"errors"

	"github.com/OpenListTeam/OpenList/v4/drivers/base"
	"github.com/OpenListTeam/OpenList/v4/pkg/utils"
	driver115 "github.com/SheltonZhu/115driver/pkg/driver"
	log "github.com/sirupsen/logrus"
)

var (
	md5Salt = "Qclm8MGWUv59TnrR0XPg"
	appVer  = "35.6.0.3"
)

func (d *Pan115) getAppVersion() (string, error) {
	result := VersionResp{}
	res, err := base.RestyClient.R().Get(driver115.ApiGetVersion)
	if err != nil {
		return "", err
	}
	err = utils.Json.Unmarshal(res.Body(), &result)
	if err != nil {
		return "", err
	}
	if len(result.Error) > 0 {
		return "", errors.New(result.Error)
	}
	return result.Data.Win.Version, nil
}

func (d *Pan115) getAppVer() string {
	ver, err := d.getAppVersion()
	if err != nil {
		log.Warnf("[115] get app version failed: %v", err)
		return appVer
	}
	if len(ver) > 0 {
		return ver
	}
	return appVer
}

func (d *Pan115) initAppVer() {
	appVer = d.getAppVer()
	log.Debugf("use app version: %v", appVer)
}

type VersionResp struct {
	Error string   `json:"error,omitempty"`
	Data  Versions `json:"data"`
}

type Versions struct {
	Win Version `json:"win"`
}

type Version struct {
	Version string `json:"version_code"`
}


@@ -1,501 +0,0 @@
package chunk
import (
"bytes"
"context"
"errors"
"fmt"
"io"
stdpath "path"
"strconv"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/fs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/sign"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/errgroup"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/avast/retry-go"
)
type Chunk struct {
model.Storage
Addition
}
func (d *Chunk) Config() driver.Config {
return config
}
func (d *Chunk) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Chunk) Init(ctx context.Context) error {
if d.PartSize <= 0 {
return errors.New("part size must be positive")
}
if len(d.ChunkPrefix) <= 0 {
return errors.New("chunk folder prefix must not be empty")
}
d.RemotePath = utils.FixAndCleanPath(d.RemotePath)
return nil
}
func (d *Chunk) Drop(ctx context.Context) error {
return nil
}
func (d *Chunk) Get(ctx context.Context, path string) (model.Obj, error) {
if utils.PathEqual(path, "/") {
return &model.Object{
Name: "Root",
IsFolder: true,
Path: "/",
}, nil
}
remoteStorage, remoteActualPath, err := op.GetStorageAndActualPath(d.RemotePath)
if err != nil {
return nil, err
}
remoteActualPath = stdpath.Join(remoteActualPath, path)
if remoteObj, err := op.Get(ctx, remoteStorage, remoteActualPath); err == nil {
return &model.Object{
Path: path,
Name: remoteObj.GetName(),
Size: remoteObj.GetSize(),
Modified: remoteObj.ModTime(),
IsFolder: remoteObj.IsDir(),
HashInfo: remoteObj.GetHash(),
}, nil
}
remoteActualDir, name := stdpath.Split(remoteActualPath)
chunkName := d.ChunkPrefix + name
chunkObjs, err := op.List(ctx, remoteStorage, stdpath.Join(remoteActualDir, chunkName), model.ListArgs{})
if err != nil {
return nil, err
}
var totalSize int64 = 0
// chunk 0 defaults to -1 to support empty files
chunkSizes := []int64{-1}
h := make(map[*utils.HashType]string)
var first model.Obj
for _, o := range chunkObjs {
if o.IsDir() {
continue
}
if after, ok := strings.CutPrefix(o.GetName(), "hash_"); ok {
hn, value, ok := strings.Cut(strings.TrimSuffix(after, d.CustomExt), "_")
if ok {
ht, ok := utils.GetHashByName(hn)
if ok {
h[ht] = value
}
}
continue
}
idx, err := strconv.Atoi(strings.TrimSuffix(o.GetName(), d.CustomExt))
if err != nil {
continue
}
totalSize += o.GetSize()
if len(chunkSizes) > idx {
if idx == 0 {
first = o
}
chunkSizes[idx] = o.GetSize()
} else if len(chunkSizes) == idx {
chunkSizes = append(chunkSizes, o.GetSize())
} else {
newChunkSizes := make([]int64, idx+1)
copy(newChunkSizes, chunkSizes)
chunkSizes = newChunkSizes
chunkSizes[idx] = o.GetSize()
}
}
reqDir, _ := stdpath.Split(path)
objRes := chunkObject{
Object: model.Object{
Path: stdpath.Join(reqDir, chunkName),
Name: name,
Size: totalSize,
Modified: first.ModTime(),
Ctime: first.CreateTime(),
},
chunkSizes: chunkSizes,
}
if len(h) > 0 {
objRes.HashInfo = utils.NewHashInfoByMap(h)
}
return &objRes, nil
}
func (d *Chunk) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
remoteStorage, remoteActualPath, err := op.GetStorageAndActualPath(d.RemotePath)
if err != nil {
return nil, err
}
remoteActualDir := stdpath.Join(remoteActualPath, dir.GetPath())
remoteObjs, err := op.List(ctx, remoteStorage, remoteActualDir, model.ListArgs{
ReqPath: args.ReqPath,
Refresh: args.Refresh,
})
if err != nil {
return nil, err
}
result := make([]model.Obj, 0, len(remoteObjs))
listG, listCtx := errgroup.NewGroupWithContext(ctx, d.NumListWorkers, retry.Attempts(3))
for _, obj := range remoteObjs {
if utils.IsCanceled(listCtx) {
break
}
rawName := obj.GetName()
if obj.IsDir() {
if name, ok := strings.CutPrefix(rawName, d.ChunkPrefix); ok {
resultIdx := len(result)
result = append(result, nil)
listG.Go(func(ctx context.Context) error {
chunkObjs, err := op.List(ctx, remoteStorage, stdpath.Join(remoteActualDir, rawName), model.ListArgs{
ReqPath: stdpath.Join(args.ReqPath, rawName),
Refresh: args.Refresh,
})
if err != nil {
return err
}
totalSize := int64(0)
h := make(map[*utils.HashType]string)
first := obj
for _, o := range chunkObjs {
if o.IsDir() {
continue
}
if after, ok := strings.CutPrefix(strings.TrimSuffix(o.GetName(), d.CustomExt), "hash_"); ok {
hn, value, ok := strings.Cut(after, "_")
if ok {
ht, ok := utils.GetHashByName(hn)
if ok {
h[ht] = value
}
continue
}
}
idx, err := strconv.Atoi(strings.TrimSuffix(o.GetName(), d.CustomExt))
if err != nil {
continue
}
if idx == 0 {
first = o
}
totalSize += o.GetSize()
}
objRes := model.Object{
Name: name,
Size: totalSize,
Modified: first.ModTime(),
Ctime: first.CreateTime(),
}
if len(h) > 0 {
objRes.HashInfo = utils.NewHashInfoByMap(h)
}
if !d.Thumbnail {
result[resultIdx] = &objRes
} else {
thumbPath := stdpath.Join(args.ReqPath, ".thumbnails", name+".webp")
thumb := fmt.Sprintf("%s/d%s?sign=%s",
common.GetApiUrl(ctx),
utils.EncodePath(thumbPath, true),
sign.Sign(thumbPath))
result[resultIdx] = &model.ObjThumb{
Object: objRes,
Thumbnail: model.Thumbnail{
Thumbnail: thumb,
},
}
}
return nil
})
continue
}
}
if !d.ShowHidden && strings.HasPrefix(rawName, ".") {
continue
}
thumb, ok := model.GetThumb(obj)
objRes := model.Object{
Name: rawName,
Size: obj.GetSize(),
Modified: obj.ModTime(),
IsFolder: obj.IsDir(),
HashInfo: obj.GetHash(),
}
if !ok {
result = append(result, &objRes)
} else {
result = append(result, &model.ObjThumb{
Object: objRes,
Thumbnail: model.Thumbnail{
Thumbnail: thumb,
},
})
}
}
if err = listG.Wait(); err != nil {
return nil, err
}
return result, nil
}
func (d *Chunk) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
remoteStorage, remoteActualPath, err := op.GetStorageAndActualPath(d.RemotePath)
if err != nil {
return nil, err
}
chunkFile, ok := file.(*chunkObject)
remoteActualPath = stdpath.Join(remoteActualPath, file.GetPath())
if !ok {
l, _, err := op.Link(ctx, remoteStorage, remoteActualPath, args)
if err != nil {
return nil, err
}
resultLink := *l
resultLink.SyncClosers = utils.NewSyncClosers(l)
return &resultLink, nil
}
// Check that chunk 0 is not -1, to support empty files.
// If there is more than one chunk, the last chunk can never be 0,
// so only the middle chunks need to be checked for 0.
for i, l := 0, len(chunkFile.chunkSizes)-2; ; i++ {
if i == 0 {
if chunkFile.chunkSizes[i] == -1 {
return nil, fmt.Errorf("chunk part[%d] are missing", i)
}
} else if chunkFile.chunkSizes[i] == 0 {
return nil, fmt.Errorf("chunk part[%d] are missing", i)
}
if i >= l {
break
}
}
fileSize := chunkFile.GetSize()
mergedRrf := func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
start := httpRange.Start
length := httpRange.Length
if length < 0 || start+length > fileSize {
length = fileSize - start
}
if length == 0 {
return io.NopCloser(strings.NewReader("")), nil
}
rs := make([]io.Reader, 0)
cs := make(utils.Closers, 0)
var (
rc io.ReadCloser
readFrom bool
)
for idx, chunkSize := range chunkFile.chunkSizes {
if readFrom {
l, o, err := op.Link(ctx, remoteStorage, stdpath.Join(remoteActualPath, d.getPartName(idx)), args)
if err != nil {
_ = cs.Close()
return nil, err
}
cs = append(cs, l)
chunkSize2 := l.ContentLength
if chunkSize2 <= 0 {
chunkSize2 = o.GetSize()
}
if chunkSize2 != chunkSize {
_ = cs.Close()
return nil, fmt.Errorf("chunk part[%d] size not match", idx)
}
rrf, err := stream.GetRangeReaderFromLink(chunkSize2, l)
if err != nil {
_ = cs.Close()
return nil, err
}
newLength := length - chunkSize2
if newLength >= 0 {
length = newLength
rc, err = rrf.RangeRead(ctx, http_range.Range{Length: -1})
} else {
rc, err = rrf.RangeRead(ctx, http_range.Range{Length: length})
}
if err != nil {
_ = cs.Close()
return nil, err
}
rs = append(rs, rc)
cs = append(cs, rc)
if newLength <= 0 {
return utils.ReadCloser{
Reader: io.MultiReader(rs...),
Closer: &cs,
}, nil
}
} else if newStart := start - chunkSize; newStart >= 0 {
start = newStart
} else {
l, o, err := op.Link(ctx, remoteStorage, stdpath.Join(remoteActualPath, d.getPartName(idx)), args)
if err != nil {
_ = cs.Close()
return nil, err
}
cs = append(cs, l)
chunkSize2 := l.ContentLength
if chunkSize2 <= 0 {
chunkSize2 = o.GetSize()
}
if chunkSize2 != chunkSize {
_ = cs.Close()
return nil, fmt.Errorf("chunk part[%d] size not match", idx)
}
rrf, err := stream.GetRangeReaderFromLink(chunkSize2, l)
if err != nil {
_ = cs.Close()
return nil, err
}
rc, err = rrf.RangeRead(ctx, http_range.Range{Start: start, Length: -1})
if err != nil {
_ = cs.Close()
return nil, err
}
length -= chunkSize2 - start
cs = append(cs, rc)
if length <= 0 {
return utils.ReadCloser{
Reader: rc,
Closer: &cs,
}, nil
}
rs = append(rs, rc)
readFrom = true
}
}
return nil, fmt.Errorf("invalid range: start=%d,length=%d,fileSize=%d", httpRange.Start, httpRange.Length, fileSize)
}
return &model.Link{
RangeReader: stream.RangeReaderFunc(mergedRrf),
}, nil
}
func (d *Chunk) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
path := stdpath.Join(d.RemotePath, parentDir.GetPath(), dirName)
return fs.MakeDir(ctx, path)
}
func (d *Chunk) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
src := stdpath.Join(d.RemotePath, srcObj.GetPath())
dst := stdpath.Join(d.RemotePath, dstDir.GetPath())
_, err := fs.Move(ctx, src, dst)
return err
}
func (d *Chunk) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
if _, ok := srcObj.(*chunkObject); ok {
newName = d.ChunkPrefix + newName
}
return fs.Rename(ctx, stdpath.Join(d.RemotePath, srcObj.GetPath()), newName)
}
func (d *Chunk) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
dst := stdpath.Join(d.RemotePath, dstDir.GetPath())
src := stdpath.Join(d.RemotePath, srcObj.GetPath())
_, err := fs.Copy(ctx, src, dst)
return err
}
func (d *Chunk) Remove(ctx context.Context, obj model.Obj) error {
return fs.Remove(ctx, stdpath.Join(d.RemotePath, obj.GetPath()))
}
func (d *Chunk) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
remoteStorage, remoteActualPath, err := op.GetStorageAndActualPath(d.RemotePath)
if err != nil {
return err
}
if (d.Thumbnail && dstDir.GetName() == ".thumbnails") || (d.ChunkLargeFileOnly && file.GetSize() <= d.PartSize) {
return op.Put(ctx, remoteStorage, stdpath.Join(remoteActualPath, dstDir.GetPath()), file, up)
}
upReader := &driver.ReaderUpdatingProgress{
Reader: file,
UpdateProgress: up,
}
dst := stdpath.Join(remoteActualPath, dstDir.GetPath(), d.ChunkPrefix+file.GetName())
if d.StoreHash {
for ht, value := range file.GetHash().All() {
_ = op.Put(ctx, remoteStorage, dst, &stream.FileStream{
Obj: &model.Object{
Name: fmt.Sprintf("hash_%s_%s%s", ht.Name, value, d.CustomExt),
Size: 1,
Modified: file.ModTime(),
},
Mimetype: "application/octet-stream",
Reader:   bytes.NewReader([]byte{0}), // placeholder byte for drivers that do not support empty files
}, nil, true)
}
}
fullPartCount := int(file.GetSize() / d.PartSize)
tailSize := file.GetSize() % d.PartSize
if tailSize == 0 && fullPartCount > 0 {
fullPartCount--
tailSize = d.PartSize
}
partIndex := 0
for partIndex < fullPartCount {
err = op.Put(ctx, remoteStorage, dst, &stream.FileStream{
Obj: &model.Object{
Name: d.getPartName(partIndex),
Size: d.PartSize,
Modified: file.ModTime(),
},
Mimetype: file.GetMimetype(),
Reader: io.LimitReader(upReader, d.PartSize),
}, nil, true)
if err != nil {
_ = op.Remove(ctx, remoteStorage, dst)
return err
}
partIndex++
}
err = op.Put(ctx, remoteStorage, dst, &stream.FileStream{
Obj: &model.Object{
Name: d.getPartName(fullPartCount),
Size: tailSize,
Modified: file.ModTime(),
},
Mimetype: file.GetMimetype(),
Reader: upReader,
}, nil)
if err != nil {
_ = op.Remove(ctx, remoteStorage, dst)
}
return err
}
func (d *Chunk) getPartName(part int) string {
return fmt.Sprintf("%d%s", part, d.CustomExt)
}
func (d *Chunk) GetDetails(ctx context.Context) (*model.StorageDetails, error) {
remoteStorage, err := fs.GetStorage(d.RemotePath, &fs.GetStoragesArgs{})
if err != nil {
return nil, errs.NotImplement
}
remoteDetails, err := op.GetStorageDetails(ctx, remoteStorage)
if err != nil {
return nil, err
}
return &model.StorageDetails{
DiskUsage: remoteDetails.DiskUsage,
}, nil
}
var _ driver.Driver = (*Chunk)(nil)
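The tail-size arithmetic in `Chunk.Put` above is easy to get wrong at exact multiples of the part size. A standalone sketch of the same logic (function name and sizes are illustrative, not from the driver):

```go
package main

import "fmt"

// splitParts mirrors the part-count logic in Chunk.Put: a file is cut into
// full parts of partSize bytes plus one tail part; when the size is an exact
// multiple, the last full part becomes the tail so the tail is never empty.
func splitParts(fileSize, partSize int64) (fullParts int, tailSize int64) {
	fullParts = int(fileSize / partSize)
	tailSize = fileSize % partSize
	if tailSize == 0 && fullParts > 0 {
		fullParts--
		tailSize = partSize
	}
	return fullParts, tailSize
}

func main() {
	fmt.Println(splitParts(10, 4)) // 2 full parts, tail of 2
	fmt.Println(splitParts(8, 4))  // 1 full part, tail of 4 (exact multiple)
	fmt.Println(splitParts(3, 4))  // 0 full parts, tail of 3
}
```

Folding the last full part into the tail guarantees the final `op.Put` in the loop above always has a non-empty body.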


@@ -1,39 +0,0 @@
package chunk
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
RemotePath string `json:"remote_path" required:"true"`
PartSize int64 `json:"part_size" required:"true" type:"number" help:"bytes"`
ChunkLargeFileOnly bool `json:"chunk_large_file_only" default:"false" help:"chunk only if file size > part_size"`
ChunkPrefix string `json:"chunk_prefix" type:"string" default:"[openlist_chunk]" help:"the prefix of chunk folder"`
CustomExt string `json:"custom_ext" type:"string"`
StoreHash bool `json:"store_hash" type:"bool" default:"true"`
NumListWorkers int `json:"num_list_workers" required:"true" type:"number" default:"5"`
Thumbnail bool `json:"thumbnail" required:"true" default:"false" help:"enable thumbnails which are pre-generated under the .thumbnails folder"`
ShowHidden bool `json:"show_hidden" default:"true" required:"false" help:"show hidden directories and files"`
}
var config = driver.Config{
Name: "Chunk",
LocalSort: true,
OnlyProxy: true,
NoCache: true,
DefaultRoot: "/",
NoLinkURL: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Chunk{
Addition: Addition{
ChunkPrefix: "[openlist_chunk]",
NumListWorkers: 5,
},
}
})
}


@@ -1,8 +0,0 @@
package chunk
import "github.com/OpenListTeam/OpenList/v4/internal/model"
type chunkObject struct {
model.Object
chunkSizes []int64
}


@@ -1,230 +0,0 @@
package cnb_releases
import (
"bytes"
"context"
"fmt"
"io"
"mime/multipart"
"net/http"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/go-resty/resty/v2"
)
type CnbReleases struct {
model.Storage
Addition
ref *CnbReleases
}
func (d *CnbReleases) Config() driver.Config {
return config
}
func (d *CnbReleases) GetAddition() driver.Additional {
return &d.Addition
}
func (d *CnbReleases) Init(ctx context.Context) error {
return nil
}
func (d *CnbReleases) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*CnbReleases)
if ok {
d.ref = refStorage
return nil
}
return fmt.Errorf("ref: storage is not CnbReleases")
}
func (d *CnbReleases) Drop(ctx context.Context) error {
d.ref = nil
return nil
}
func (d *CnbReleases) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if dir.GetPath() == "/" {
// get all releases for root dir
var resp ReleaseList
err := d.Request(http.MethodGet, "/{repo}/-/releases", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
}, &resp)
if err != nil {
return nil, err
}
return utils.SliceConvert(resp, func(src Release) (model.Obj, error) {
name := src.Name
if d.UseTagName {
name = src.TagName
}
return &model.Object{
ID: src.ID,
Name: name,
Size: d.sumAssetsSize(src.Assets),
Ctime: src.CreatedAt,
Modified: src.UpdatedAt,
IsFolder: true,
}, nil
})
} else {
// get release info by release id
releaseID := dir.GetID()
if releaseID == "" {
return nil, errs.ObjectNotFound
}
var resp Release
err := d.Request(http.MethodGet, "/{repo}/-/releases/{release_id}", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", releaseID)
}, &resp)
if err != nil {
return nil, err
}
return utils.SliceConvert(resp.Assets, func(src ReleaseAsset) (model.Obj, error) {
return &Object{
Object: model.Object{
ID: src.ID,
Path: src.Path,
Name: src.Name,
Size: src.Size,
Ctime: src.CreatedAt,
Modified: src.UpdatedAt,
IsFolder: false,
},
ParentID: dir.GetID(),
}, nil
})
}
}
func (d *CnbReleases) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
return &model.Link{
URL: "https://cnb.cool" + file.GetPath(),
}, nil
}
func (d *CnbReleases) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
if parentDir.GetPath() == "/" {
// create a new release
branch := d.DefaultBranch
if branch == "" {
branch = "main" // fallback to "main" if not set
}
return d.Request(http.MethodPost, "/{repo}/-/releases", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetBody(base.Json{
"name": dirName,
"tag_name": dirName,
"target_commitish": branch,
})
}, nil)
}
return errs.NotImplement
}
func (d *CnbReleases) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return nil, errs.NotImplement
}
func (d *CnbReleases) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
if srcObj.IsDir() && !d.UseTagName {
return d.Request(http.MethodPatch, "/{repo}/-/releases/{release_id}", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", srcObj.GetID())
req.SetFormData(map[string]string{
"name": newName,
})
}, nil)
}
return errs.NotImplement
}
func (d *CnbReleases) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return nil, errs.NotImplement
}
func (d *CnbReleases) Remove(ctx context.Context, obj model.Obj) error {
if obj.IsDir() {
return d.Request(http.MethodDelete, "/{repo}/-/releases/{release_id}", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", obj.GetID())
}, nil)
}
if o, ok := obj.(*Object); ok {
return d.Request(http.MethodDelete, "/{repo}/-/releases/{release_id}/assets/{asset_id}", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", o.ParentID)
req.SetPathParam("asset_id", obj.GetID())
}, nil)
} else {
return fmt.Errorf("unable to get release ID")
}
}
func (d *CnbReleases) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
// 1. get upload info
var resp ReleaseAssetUploadURL
err := d.Request(http.MethodPost, "/{repo}/-/releases/{release_id}/asset-upload-url", func(req *resty.Request) {
req.SetPathParam("repo", d.Repo)
req.SetPathParam("release_id", dstDir.GetID())
req.SetBody(base.Json{
"asset_name": file.GetName(),
"overwrite": true,
"size": file.GetSize(),
})
}, &resp)
if err != nil {
return err
}
// 2. upload file
// use multipart to create form file
var b bytes.Buffer
w := multipart.NewWriter(&b)
_, err = w.CreateFormFile("file", file.GetName())
if err != nil {
return err
}
headSize := b.Len()
err = w.Close()
if err != nil {
return err
}
head := bytes.NewReader(b.Bytes()[:headSize])
tail := bytes.NewReader(b.Bytes()[headSize:])
rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.MultiReader(head, file, tail))
// use net/http to upload file
ctxWithTimeout, cancel := context.WithTimeout(ctx, time.Duration(resp.ExpiresInSec+1)*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctxWithTimeout, http.MethodPost, resp.UploadURL, rateLimitedRd)
if err != nil {
return err
}
req.Header.Set("Content-Type", w.FormDataContentType())
req.Header.Set("User-Agent", base.UserAgent)
httpResp, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer httpResp.Body.Close()
if httpResp.StatusCode != http.StatusNoContent {
return fmt.Errorf("upload file failed: %s", httpResp.Status)
}
// 3. verify upload
return d.Request(http.MethodPost, resp.VerifyURL, nil, nil)
}
var _ driver.Driver = (*CnbReleases)(nil)


@@ -1,26 +0,0 @@
package cnb_releases
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Repo string `json:"repo" type:"string" required:"true"`
Token string `json:"token" type:"string" required:"true"`
UseTagName bool `json:"use_tag_name" type:"bool" default:"false" help:"Use tag name instead of release name"`
DefaultBranch string `json:"default_branch" type:"string" default:"main" help:"Default branch for new releases"`
}
var config = driver.Config{
Name: "CNB Releases",
LocalSort: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &CnbReleases{}
})
}


@@ -1,100 +0,0 @@
package cnb_releases
import (
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
)
type Object struct {
model.Object
ParentID string
}
type TagList []Tag
type Tag struct {
Commit struct {
Author UserInfo `json:"author"`
Commit CommitObject `json:"commit"`
Committer UserInfo `json:"committer"`
Parents []CommitParent `json:"parents"`
Sha string `json:"sha"`
} `json:"commit"`
Name string `json:"name"`
Target string `json:"target"`
TargetType string `json:"target_type"`
Verification TagObjectVerification `json:"verification"`
}
type UserInfo struct {
Freeze bool `json:"freeze"`
Nickname string `json:"nickname"`
Username string `json:"username"`
}
type CommitObject struct {
Author Signature `json:"author"`
CommentCount int `json:"comment_count"`
Committer Signature `json:"committer"`
Message string `json:"message"`
Tree CommitObjectTree `json:"tree"`
Verification CommitObjectVerification `json:"verification"`
}
type Signature struct {
Date time.Time `json:"date"`
Email string `json:"email"`
Name string `json:"name"`
}
type CommitObjectTree struct {
Sha string `json:"sha"`
}
type CommitObjectVerification struct {
Payload string `json:"payload"`
Reason string `json:"reason"`
Signature string `json:"signature"`
Verified bool `json:"verified"`
VerifiedAt string `json:"verified_at"`
}
type CommitParent = CommitObjectTree
type TagObjectVerification = CommitObjectVerification
type ReleaseList []Release
type Release struct {
Assets []ReleaseAsset `json:"assets"`
Author UserInfo `json:"author"`
Body string `json:"body"`
CreatedAt time.Time `json:"created_at"`
Draft bool `json:"draft"`
ID string `json:"id"`
IsLatest bool `json:"is_latest"`
Name string `json:"name"`
Prerelease bool `json:"prerelease"`
PublishedAt time.Time `json:"published_at"`
TagCommitish string `json:"tag_commitish"`
TagName string `json:"tag_name"`
UpdatedAt time.Time `json:"updated_at"`
}
type ReleaseAsset struct {
ContentType string `json:"content_type"`
CreatedAt time.Time `json:"created_at"`
ID string `json:"id"`
Name string `json:"name"`
Path string `json:"path"`
Size int64 `json:"size"`
UpdatedAt time.Time `json:"updated_at"`
Uploader UserInfo `json:"uploader"`
}
type ReleaseAssetUploadURL struct {
UploadURL string `json:"upload_url"`
ExpiresInSec int `json:"expires_in_sec"`
VerifyURL string `json:"verify_url"`
}


@@ -1,58 +0,0 @@
package cnb_releases
import (
"encoding/json"
"fmt"
"net/http"
"strings"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
log "github.com/sirupsen/logrus"
)
// do other operations that are not defined in the Driver interface
func (d *CnbReleases) Request(method string, path string, callback base.ReqCallback, resp any) error {
if d.ref != nil {
return d.ref.Request(method, path, callback, resp)
}
var url string
if strings.HasPrefix(path, "http") {
url = path
} else {
url = "https://api.cnb.cool" + path
}
req := base.RestyClient.R()
req.SetHeader("Accept", "application/json")
req.SetAuthScheme("Bearer")
req.SetAuthToken(d.Token)
if callback != nil {
callback(req)
}
res, err := req.Execute(method, url)
log.Debugln(res.String())
if err != nil {
return err
}
if res.StatusCode() != http.StatusOK && res.StatusCode() != http.StatusCreated && res.StatusCode() != http.StatusNoContent {
return fmt.Errorf("failed to request %s, status code: %d, message: %s", url, res.StatusCode(), res.String())
}
if resp != nil {
err = json.Unmarshal(res.Body(), resp)
if err != nil {
return err
}
}
return nil
}
func (d *CnbReleases) sumAssetsSize(assets []ReleaseAsset) int64 {
var size int64
for _, asset := range assets {
size += asset.Size
}
return size
}


@@ -1,203 +0,0 @@
package degoo
import (
"context"
"fmt"
"net/http"
"strconv"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
type Degoo struct {
model.Storage
Addition
client *http.Client
}
func (d *Degoo) Config() driver.Config {
return config
}
func (d *Degoo) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Degoo) Init(ctx context.Context) error {
d.client = base.HttpClient
// Ensure we have a valid token (will login if needed or refresh if expired)
if err := d.ensureValidToken(ctx); err != nil {
return fmt.Errorf("failed to initialize token: %w", err)
}
return d.getDevices(ctx)
}
func (d *Degoo) Drop(ctx context.Context) error {
return nil
}
func (d *Degoo) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
items, err := d.getAllFileChildren5(ctx, dir.GetID())
if err != nil {
return nil, err
}
return utils.MustSliceConvert(items, func(s DegooFileItem) model.Obj {
isFolder := s.Category == 2 || s.Category == 1 || s.Category == 10
createTime, modTime, _ := humanReadableTimes(s.CreationTime, s.LastModificationTime, s.LastUploadTime)
size, err := strconv.ParseInt(s.Size, 10, 64)
if err != nil {
size = 0 // Default to 0 if size parsing fails
}
return &model.Object{
ID: s.ID,
Path: s.FilePath,
Name: s.Name,
Size: size,
Modified: modTime,
Ctime: createTime,
IsFolder: isFolder,
}
}), nil
}
func (d *Degoo) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
item, err := d.getOverlay4(ctx, file.GetID())
if err != nil {
return nil, err
}
return &model.Link{URL: item.URL}, nil
}
func (d *Degoo) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
// MakeDir creates a folder by calling the setUploadFile3 API with a special folder checksum and zero size.
const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) { setUploadFile3(Token: $Token, FileInfos: $FileInfos) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"FileInfos": []map[string]interface{}{
{
"Checksum": folderChecksum,
"Name": dirName,
"CreationTime": time.Now().UnixMilli(),
"ParentID": parentDir.GetID(),
"Size": 0,
},
},
}
_, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
if err != nil {
return err
}
return nil
}
func (d *Degoo) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
const query = `mutation SetMoveFile($Token: String!, $Copy: Boolean, $NewParentID: String!, $FileIDs: [String]!) { setMoveFile(Token: $Token, Copy: $Copy, NewParentID: $NewParentID, FileIDs: $FileIDs) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"Copy": false,
"NewParentID": dstDir.GetID(),
"FileIDs": []string{srcObj.GetID()},
}
_, err := d.apiCall(ctx, "SetMoveFile", query, variables)
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Degoo) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
const query = `mutation SetRenameFile($Token: String!, $FileRenames: [FileRenameInfo]!) { setRenameFile(Token: $Token, FileRenames: $FileRenames) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"FileRenames": []DegooFileRenameInfo{
{
ID: srcObj.GetID(),
NewName: newName,
},
},
}
_, err := d.apiCall(ctx, "SetRenameFile", query, variables)
if err != nil {
return err
}
return nil
}
func (d *Degoo) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// Copy is not implemented, Degoo API does not support direct copy.
return nil, errs.NotImplement
}
func (d *Degoo) Remove(ctx context.Context, obj model.Obj) error {
// Remove deletes a file or folder (moves to trash).
const query = `mutation SetDeleteFile5($Token: String!, $IsInRecycleBin: Boolean!, $IDs: [IDType]!) { setDeleteFile5(Token: $Token, IsInRecycleBin: $IsInRecycleBin, IDs: $IDs) }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"IsInRecycleBin": false,
"IDs": []map[string]string{{"FileID": obj.GetID()}},
}
_, err := d.apiCall(ctx, "SetDeleteFile5", query, variables)
return err
}
func (d *Degoo) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
tmpF, err := file.CacheFullAndWriter(&up, nil)
if err != nil {
return err
}
parentID := dstDir.GetID()
// Calculate the checksum for the file.
checksum, err := d.checkSum(tmpF)
if err != nil {
return err
}
// 1. Get upload authorization via getBucketWriteAuth4.
auths, err := d.getBucketWriteAuth4(ctx, file, parentID, checksum)
if err != nil {
return err
}
// 2. Upload the file, skipping the transfer when the server already has
// the content (rapid upload, signaled by the "Already exist!" error).
if auths.GetBucketWriteAuth4[0].Error != "Already exist!" {
err = d.uploadS3(ctx, auths, tmpF, file, checksum)
if err != nil {
return err
}
}
// 3. Register metadata with setUploadFile3.
data, err := d.SetUploadFile3(ctx, file, parentID, checksum)
if err != nil {
return err
}
if !data.SetUploadFile3 {
return fmt.Errorf("setUploadFile3 failed: %v", data)
}
return nil
}


@@ -1,27 +0,0 @@
package degoo
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootID
Username string `json:"username" help:"Your Degoo account email"`
Password string `json:"password" help:"Your Degoo account password"`
RefreshToken string `json:"refresh_token" help:"Refresh token for automatic token renewal, obtained automatically"`
AccessToken string `json:"access_token" help:"Access token for Degoo API, obtained automatically"`
}
var config = driver.Config{
Name: "Degoo",
LocalSort: true,
DefaultRoot: "0",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Degoo{}
})
}


@@ -1,110 +0,0 @@
package degoo
import (
"encoding/json"
)
// DegooLoginRequest represents the login request body.
type DegooLoginRequest struct {
GenerateToken bool `json:"GenerateToken"`
Username string `json:"Username"`
Password string `json:"Password"`
}
// DegooLoginResponse represents a successful login response.
type DegooLoginResponse struct {
Token string `json:"Token"`
RefreshToken string `json:"RefreshToken"`
}
// DegooAccessTokenRequest represents the token refresh request body.
type DegooAccessTokenRequest struct {
RefreshToken string `json:"RefreshToken"`
}
// DegooAccessTokenResponse represents the token refresh response.
type DegooAccessTokenResponse struct {
AccessToken string `json:"AccessToken"`
}
// DegooFileItem represents a Degoo file or folder.
type DegooFileItem struct {
ID string `json:"ID"`
ParentID string `json:"ParentID"`
Name string `json:"Name"`
Category int `json:"Category"`
Size string `json:"Size"`
URL string `json:"URL"`
CreationTime string `json:"CreationTime"`
LastModificationTime string `json:"LastModificationTime"`
LastUploadTime string `json:"LastUploadTime"`
MetadataID string `json:"MetadataID"`
DeviceID int64 `json:"DeviceID"`
FilePath string `json:"FilePath"`
IsInRecycleBin bool `json:"IsInRecycleBin"`
}
type DegooErrors struct {
Path []string `json:"path"`
Data interface{} `json:"data"`
ErrorType string `json:"errorType"`
ErrorInfo interface{} `json:"errorInfo"`
Message string `json:"message"`
}
// DegooGraphqlResponse is the common structure for GraphQL API responses.
type DegooGraphqlResponse struct {
Data json.RawMessage `json:"data"`
Errors []DegooErrors `json:"errors,omitempty"`
}
// DegooGetChildren5Data is the data field for getFileChildren5.
type DegooGetChildren5Data struct {
GetFileChildren5 struct {
Items []DegooFileItem `json:"Items"`
NextToken string `json:"NextToken"`
} `json:"getFileChildren5"`
}
// DegooGetOverlay4Data is the data field for getOverlay4.
type DegooGetOverlay4Data struct {
GetOverlay4 DegooFileItem `json:"getOverlay4"`
}
// DegooFileRenameInfo represents a file rename operation.
type DegooFileRenameInfo struct {
ID string `json:"ID"`
NewName string `json:"NewName"`
}
// DegooFileIDs represents a list of file IDs for move operations.
type DegooFileIDs struct {
FileIDs []string `json:"FileIDs"`
}
// DegooGetBucketWriteAuth4Data is the data field for GetBucketWriteAuth4.
type DegooGetBucketWriteAuth4Data struct {
GetBucketWriteAuth4 []struct {
AuthData struct {
PolicyBase64 string `json:"PolicyBase64"`
Signature string `json:"Signature"`
BaseURL string `json:"BaseURL"`
KeyPrefix string `json:"KeyPrefix"`
AccessKey struct {
Key string `json:"Key"`
Value string `json:"Value"`
} `json:"AccessKey"`
ACL string `json:"ACL"`
AdditionalBody []struct {
Key string `json:"Key"`
Value string `json:"Value"`
} `json:"AdditionalBody"`
} `json:"AuthData"`
Error interface{} `json:"Error"`
} `json:"getBucketWriteAuth4"`
}
// DegooSetUploadFile3Data is the data field for SetUploadFile3.
type DegooSetUploadFile3Data struct {
SetUploadFile3 bool `json:"setUploadFile3"`
}
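The `DegooGraphqlResponse` envelope above keeps `data` as `json.RawMessage` so callers can check `errors` before decoding into an operation-specific struct. A sketch of that two-phase decode, with the types abbreviated from the ones above:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Abbreviated form of DegooGraphqlResponse: data stays raw until the caller
// knows which operation-specific struct to decode it into.
type graphqlEnvelope struct {
	Data   json.RawMessage `json:"data"`
	Errors []struct {
		Message string `json:"message"`
	} `json:"errors,omitempty"`
}

// decodeSetUploadFile3 decodes the envelope, surfaces GraphQL-level errors,
// then decodes the data payload for the setUploadFile3 mutation.
func decodeSetUploadFile3(raw []byte) (bool, error) {
	var env graphqlEnvelope
	if err := json.Unmarshal(raw, &env); err != nil {
		return false, err
	}
	if len(env.Errors) > 0 {
		return false, fmt.Errorf("graphql error: %s", env.Errors[0].Message)
	}
	var data struct {
		SetUploadFile3 bool `json:"setUploadFile3"`
	}
	if err := json.Unmarshal(env.Data, &data); err != nil {
		return false, err
	}
	return data.SetUploadFile3, nil
}

func main() {
	ok, err := decodeSetUploadFile3([]byte(`{"data":{"setUploadFile3":true}}`))
	fmt.Println(ok, err) // true <nil>
}
```

This is the same shape the driver's `apiCall` relies on: one generic envelope, then a per-operation `json.Unmarshal` of the raw `data` bytes.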


@@ -1,198 +0,0 @@
package degoo
import (
"bytes"
"context"
"crypto/sha1"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"mime/multipart"
"net/http"
"strconv"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
func (d *Degoo) getBucketWriteAuth4(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooGetBucketWriteAuth4Data, error) {
const query = `query GetBucketWriteAuth4(
$Token: String!
$ParentID: String!
$StorageUploadInfos: [StorageUploadInfo2]
) {
getBucketWriteAuth4(
Token: $Token
ParentID: $ParentID
StorageUploadInfos: $StorageUploadInfos
) {
AuthData {
PolicyBase64
Signature
BaseURL
KeyPrefix
AccessKey {
Key
Value
}
ACL
AdditionalBody {
Key
Value
}
}
Error
}
}`
variables := map[string]interface{}{
"Token": d.AccessToken,
"ParentID": parentID,
"StorageUploadInfos": []map[string]string{{
"FileName": file.GetName(),
"Checksum": checksum,
"Size": strconv.FormatInt(file.GetSize(), 10),
}}}
data, err := d.apiCall(ctx, "GetBucketWriteAuth4", query, variables)
if err != nil {
return nil, err
}
var resp DegooGetBucketWriteAuth4Data
err = json.Unmarshal(data, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
// checkSum calculates the SHA1-based checksum for Degoo upload API.
func (d *Degoo) checkSum(file io.Reader) (string, error) {
seed := []byte{13, 7, 2, 2, 15, 40, 75, 117, 13, 10, 19, 16, 29, 23, 3, 36}
hasher := sha1.New()
hasher.Write(seed)
if _, err := utils.CopyWithBuffer(hasher, file); err != nil {
return "", err
}
cs := hasher.Sum(nil)
csBytes := []byte{10, byte(len(cs))}
csBytes = append(csBytes, cs...)
csBytes = append(csBytes, 16, 0)
return strings.ReplaceAll(base64.StdEncoding.EncodeToString(csBytes), "/", "_"), nil
}
func (d *Degoo) uploadS3(ctx context.Context, auths *DegooGetBucketWriteAuth4Data, tmpF model.File, file model.FileStreamer, checksum string) error {
a := auths.GetBucketWriteAuth4[0].AuthData
_, err := tmpF.Seek(0, io.SeekStart)
if err != nil {
return err
}
ext := utils.Ext(file.GetName())
key := fmt.Sprintf("%s%s/%s.%s", a.KeyPrefix, ext, checksum, ext)
var b bytes.Buffer
w := multipart.NewWriter(&b)
err = w.WriteField("key", key)
if err != nil {
return err
}
err = w.WriteField("acl", a.ACL)
if err != nil {
return err
}
err = w.WriteField("policy", a.PolicyBase64)
if err != nil {
return err
}
err = w.WriteField("signature", a.Signature)
if err != nil {
return err
}
err = w.WriteField(a.AccessKey.Key, a.AccessKey.Value)
if err != nil {
return err
}
for _, additional := range a.AdditionalBody {
err = w.WriteField(additional.Key, additional.Value)
if err != nil {
return err
}
}
err = w.WriteField("Content-Type", "")
if err != nil {
return err
}
_, err = w.CreateFormFile("file", key)
if err != nil {
return err
}
headSize := b.Len()
err = w.Close()
if err != nil {
return err
}
head := bytes.NewReader(b.Bytes()[:headSize])
tail := bytes.NewReader(b.Bytes()[headSize:])
rateLimitedRd := driver.NewLimitedUploadStream(ctx, io.MultiReader(head, tmpF, tail))
req, err := http.NewRequestWithContext(ctx, http.MethodPost, a.BaseURL, rateLimitedRd)
if err != nil {
return err
}
req.Header.Add("ngsw-bypass", "1")
req.Header.Add("Content-Type", w.FormDataContentType())
res, err := d.client.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusNoContent {
return fmt.Errorf("upload failed with status code %d", res.StatusCode)
}
return nil
}
var _ driver.Driver = (*Degoo)(nil)
func (d *Degoo) SetUploadFile3(ctx context.Context, file model.FileStreamer, parentID string, checksum string) (*DegooSetUploadFile3Data, error) {
const query = `mutation SetUploadFile3($Token: String!, $FileInfos: [FileInfoUpload3]!) {
setUploadFile3(Token: $Token, FileInfos: $FileInfos)
}`
variables := map[string]interface{}{
"Token": d.AccessToken,
"FileInfos": []map[string]string{{
"Checksum": checksum,
"CreationTime": strconv.FormatInt(file.CreateTime().UnixMilli(), 10),
"Name": file.GetName(),
"ParentID": parentID,
"Size": strconv.FormatInt(file.GetSize(), 10),
}}}
data, err := d.apiCall(ctx, "SetUploadFile3", query, variables)
if err != nil {
return nil, err
}
var resp DegooSetUploadFile3Data
err = json.Unmarshal(data, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}


@@ -1,462 +0,0 @@
package degoo
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"fmt"
"net/http"
"strconv"
"strings"
"sync"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
// Thanks to https://github.com/bernd-wechner/Degoo for API research.
const (
// API endpoints
loginURL = "https://rest-api.degoo.com/login"
accessTokenURL = "https://rest-api.degoo.com/access-token/v2"
apiURL = "https://production-appsync.degoo.com/graphql"
// API configuration
apiKey = "da2-vs6twz5vnjdavpqndtbzg3prra"
folderChecksum = "CgAQAg"
// Token management
tokenRefreshThreshold = 5 * time.Minute
// Rate limiting
minRequestInterval = 1 * time.Second
// Error messages
errRateLimited = "rate limited (429), please try again later"
errUnauthorized = "unauthorized access"
)
var (
// Global rate limiting - protects against concurrent API calls
lastRequestTime time.Time
requestMutex sync.Mutex
)
// JWT payload structure for token expiration checking
type JWTPayload struct {
UserID string `json:"userID"`
Exp int64 `json:"exp"`
Iat int64 `json:"iat"`
}
// Rate limiting helper functions
// applyRateLimit ensures minimum interval between API requests
func applyRateLimit() {
requestMutex.Lock()
defer requestMutex.Unlock()
if !lastRequestTime.IsZero() {
if elapsed := time.Since(lastRequestTime); elapsed < minRequestInterval {
time.Sleep(minRequestInterval - elapsed)
}
}
lastRequestTime = time.Now()
}
// HTTP request helper functions
// createJSONRequest creates a new HTTP request with JSON body
func createJSONRequest(ctx context.Context, method, url string, body interface{}) (*http.Request, error) {
jsonBody, err := json.Marshal(body)
if err != nil {
return nil, fmt.Errorf("failed to marshal request body: %w", err)
}
req, err := http.NewRequestWithContext(ctx, method, url, bytes.NewBuffer(jsonBody))
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", base.UserAgent)
return req, nil
}
// checkHTTPResponse checks for common HTTP error conditions
func checkHTTPResponse(resp *http.Response, operation string) error {
if resp.StatusCode == http.StatusTooManyRequests {
return fmt.Errorf("%s %s", operation, errRateLimited)
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("%s failed: %s", operation, resp.Status)
}
return nil
}
// isTokenExpired checks if the JWT token is expired or will expire soon
func (d *Degoo) isTokenExpired() bool {
if d.AccessToken == "" {
return true
}
payload, err := extractJWTPayload(d.AccessToken)
if err != nil {
return true // Invalid token format
}
// Check if token expires within the threshold
expireTime := time.Unix(payload.Exp, 0)
return time.Now().Add(tokenRefreshThreshold).After(expireTime)
}
// extractJWTPayload extracts and parses JWT payload
func extractJWTPayload(token string) (*JWTPayload, error) {
parts := strings.Split(token, ".")
if len(parts) != 3 {
return nil, fmt.Errorf("invalid JWT format")
}
// Decode the payload (second part)
payload, err := base64.RawURLEncoding.DecodeString(parts[1])
if err != nil {
return nil, fmt.Errorf("failed to decode JWT payload: %w", err)
}
var jwtPayload JWTPayload
if err := json.Unmarshal(payload, &jwtPayload); err != nil {
return nil, fmt.Errorf("failed to parse JWT payload: %w", err)
}
return &jwtPayload, nil
}
// refreshToken attempts to refresh the access token using the refresh token
func (d *Degoo) refreshToken(ctx context.Context) error {
if d.RefreshToken == "" {
return fmt.Errorf("no refresh token available")
}
// Create request
tokenReq := DegooAccessTokenRequest{RefreshToken: d.RefreshToken}
req, err := createJSONRequest(ctx, "POST", accessTokenURL, tokenReq)
if err != nil {
return fmt.Errorf("failed to create refresh token request: %w", err)
}
// Execute request
resp, err := d.client.Do(req)
if err != nil {
return fmt.Errorf("refresh token request failed: %w", err)
}
defer resp.Body.Close()
// Check response
if err := checkHTTPResponse(resp, "refresh token"); err != nil {
return err
}
var accessTokenResp DegooAccessTokenResponse
if err := json.NewDecoder(resp.Body).Decode(&accessTokenResp); err != nil {
return fmt.Errorf("failed to parse access token response: %w", err)
}
if accessTokenResp.AccessToken == "" {
return fmt.Errorf("empty access token received")
}
d.AccessToken = accessTokenResp.AccessToken
// Save the updated token to storage
op.MustSaveDriverStorage(d)
return nil
}
// ensureValidToken ensures we have a valid, non-expired token
func (d *Degoo) ensureValidToken(ctx context.Context) error {
// Check if token is expired or will expire soon
if d.isTokenExpired() {
// Try to refresh token first if we have a refresh token
if d.RefreshToken != "" {
if refreshErr := d.refreshToken(ctx); refreshErr == nil {
return nil // Successfully refreshed
} else {
// If refresh failed, fall back to full login
fmt.Printf("Token refresh failed, falling back to full login: %v\n", refreshErr)
}
}
// Perform full login
if d.Username != "" && d.Password != "" {
return d.login(ctx)
}
}
return nil
}
// login performs the login process and retrieves the access token.
func (d *Degoo) login(ctx context.Context) error {
if d.Username == "" || d.Password == "" {
return fmt.Errorf("username or password not provided")
}
creds := DegooLoginRequest{
GenerateToken: true,
Username: d.Username,
Password: d.Password,
}
jsonCreds, err := json.Marshal(creds)
if err != nil {
return fmt.Errorf("failed to serialize login credentials: %w", err)
}
req, err := http.NewRequestWithContext(ctx, "POST", loginURL, bytes.NewBuffer(jsonCreds))
if err != nil {
return fmt.Errorf("failed to create login request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", base.UserAgent)
req.Header.Set("Origin", "https://app.degoo.com")
resp, err := d.client.Do(req)
if err != nil {
return fmt.Errorf("login request failed: %w", err)
}
defer resp.Body.Close()
// Handle rate limiting (429 Too Many Requests)
if resp.StatusCode == http.StatusTooManyRequests {
return fmt.Errorf("login rate limited (429), please try again later")
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("login failed: %s", resp.Status)
}
var loginResp DegooLoginResponse
if err := json.NewDecoder(resp.Body).Decode(&loginResp); err != nil {
return fmt.Errorf("failed to parse login response: %w", err)
}
if loginResp.RefreshToken != "" {
tokenReq := DegooAccessTokenRequest{RefreshToken: loginResp.RefreshToken}
jsonTokenReq, err := json.Marshal(tokenReq)
if err != nil {
return fmt.Errorf("failed to serialize access token request: %w", err)
}
tokenReqHTTP, err := http.NewRequestWithContext(ctx, "POST", accessTokenURL, bytes.NewBuffer(jsonTokenReq))
if err != nil {
return fmt.Errorf("failed to create access token request: %w", err)
}
tokenReqHTTP.Header.Set("User-Agent", base.UserAgent)
tokenResp, err := d.client.Do(tokenReqHTTP)
if err != nil {
return fmt.Errorf("failed to get access token: %w", err)
}
defer tokenResp.Body.Close()
var accessTokenResp DegooAccessTokenResponse
if err := json.NewDecoder(tokenResp.Body).Decode(&accessTokenResp); err != nil {
return fmt.Errorf("failed to parse access token response: %w", err)
}
d.AccessToken = accessTokenResp.AccessToken
d.RefreshToken = loginResp.RefreshToken // Save refresh token
} else if loginResp.Token != "" {
d.AccessToken = loginResp.Token
d.RefreshToken = "" // Direct token, no refresh token available
} else {
return fmt.Errorf("login failed, no valid token returned")
}
// Save the updated tokens to storage
op.MustSaveDriverStorage(d)
return nil
}
// apiCall performs a Degoo GraphQL API request.
func (d *Degoo) apiCall(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
// Apply rate limiting
applyRateLimit()
// Ensure we have a valid token before making the API call
if err := d.ensureValidToken(ctx); err != nil {
return nil, fmt.Errorf("failed to ensure valid token: %w", err)
}
// Update the Token in variables if it exists (after potential refresh)
d.updateTokenInVariables(variables)
return d.executeGraphQLRequest(ctx, operationName, query, variables)
}
// updateTokenInVariables updates the Token field in GraphQL variables
func (d *Degoo) updateTokenInVariables(variables map[string]interface{}) {
if variables != nil {
if _, hasToken := variables["Token"]; hasToken {
variables["Token"] = d.AccessToken
}
}
}
// executeGraphQLRequest executes a GraphQL request with retry logic
func (d *Degoo) executeGraphQLRequest(ctx context.Context, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
reqBody := map[string]interface{}{
"operationName": operationName,
"query": query,
"variables": variables,
}
// Create and configure request
req, err := createJSONRequest(ctx, "POST", apiURL, reqBody)
if err != nil {
return nil, err
}
// Set Degoo-specific headers
req.Header.Set("x-api-key", apiKey)
if d.AccessToken != "" {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", d.AccessToken))
}
// Execute request
resp, err := d.client.Do(req)
if err != nil {
return nil, fmt.Errorf("GraphQL API request failed: %w", err)
}
defer resp.Body.Close()
// Check for HTTP errors
if err := checkHTTPResponse(resp, "GraphQL API"); err != nil {
return nil, err
}
// Parse GraphQL response
var degooResp DegooGraphqlResponse
if err := json.NewDecoder(resp.Body).Decode(&degooResp); err != nil {
return nil, fmt.Errorf("failed to decode GraphQL response: %w", err)
}
// Handle GraphQL errors
if len(degooResp.Errors) > 0 {
return d.handleGraphQLError(ctx, degooResp.Errors[0], operationName, query, variables)
}
return degooResp.Data, nil
}
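The request body assembled in `executeGraphQLRequest` is a plain three-field JSON envelope. A minimal stdlib sketch of the same shape (the helper name and sample values here are illustrative, not part of the driver):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildGraphQLBody mirrors the envelope used above: operationName,
// query, and a variables map serialized as a single JSON object.
func buildGraphQLBody(operationName, query string, variables map[string]interface{}) ([]byte, error) {
	return json.Marshal(map[string]interface{}{
		"operationName": operationName,
		"query":         query,
		"variables":     variables,
	})
}

func main() {
	b, err := buildGraphQLBody("GetOverlay4", "query GetOverlay4 { ... }", map[string]interface{}{"Token": "t"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```

Note that `json.Marshal` emits map keys in sorted order, so the envelope is deterministic regardless of insertion order.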
// handleGraphQLError handles GraphQL-level errors with retry logic
func (d *Degoo) handleGraphQLError(ctx context.Context, gqlError DegooErrors, operationName, query string, variables map[string]interface{}) (json.RawMessage, error) {
if gqlError.ErrorType == "Unauthorized" {
// Re-login and retry
if err := d.login(ctx); err != nil {
return nil, fmt.Errorf("%s, login failed: %w", errUnauthorized, err)
}
// Update token in variables and retry
d.updateTokenInVariables(variables)
return d.apiCall(ctx, operationName, query, variables)
}
return nil, fmt.Errorf("GraphQL API error: %s", gqlError.Message)
}
// humanReadableTimes converts Degoo timestamps to Go time.Time.
func humanReadableTimes(creation, modification, upload string) (cTime, mTime, uTime time.Time) {
cTime, _ = time.Parse(time.RFC3339, creation)
if modification != "" {
modMillis, _ := strconv.ParseInt(modification, 10, 64)
mTime = time.Unix(0, modMillis*int64(time.Millisecond))
}
if upload != "" {
upMillis, _ := strconv.ParseInt(upload, 10, 64)
uTime = time.Unix(0, upMillis*int64(time.Millisecond))
}
return cTime, mTime, uTime
}
// getDevices fetches and caches top-level devices and folders.
func (d *Degoo) getDevices(ctx context.Context) error {
const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ParentID } NextToken } }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"ParentID": "0",
"Limit": 10,
"Order": 3,
}
data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
if err != nil {
return err
}
var resp DegooGetChildren5Data
if err := json.Unmarshal(data, &resp); err != nil {
return fmt.Errorf("failed to parse device list: %w", err)
}
if d.RootFolderID == "0" {
if len(resp.GetFileChildren5.Items) > 0 {
d.RootFolderID = resp.GetFileChildren5.Items[0].ParentID
}
op.MustSaveDriverStorage(d)
}
return nil
}
// getAllFileChildren5 fetches all children of a directory with pagination.
func (d *Degoo) getAllFileChildren5(ctx context.Context, parentID string) ([]DegooFileItem, error) {
const query = `query GetFileChildren5($Token: String! $ParentID: String $AllParentIDs: [String] $Limit: Int! $Order: Int! $NextToken: String ) { getFileChildren5(Token: $Token ParentID: $ParentID AllParentIDs: $AllParentIDs Limit: $Limit Order: $Order NextToken: $NextToken) { Items { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime FilePath IsInRecycleBin DeviceID MetadataID } NextToken } }`
var allItems []DegooFileItem
nextToken := ""
for {
variables := map[string]interface{}{
"Token": d.AccessToken,
"ParentID": parentID,
"Limit": 1000,
"Order": 3,
}
if nextToken != "" {
variables["NextToken"] = nextToken
}
data, err := d.apiCall(ctx, "GetFileChildren5", query, variables)
if err != nil {
return nil, err
}
var resp DegooGetChildren5Data
if err := json.Unmarshal(data, &resp); err != nil {
return nil, err
}
allItems = append(allItems, resp.GetFileChildren5.Items...)
if resp.GetFileChildren5.NextToken == "" {
break
}
nextToken = resp.GetFileChildren5.NextToken
}
return allItems, nil
}
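The pagination in `getAllFileChildren5` is a standard continuation-token loop: keep resending the last `NextToken` until the server returns an empty one. A standalone sketch of that pattern with a fake fetch function (all names here are illustrative):

```go
package main

import "fmt"

// page mimics one GetFileChildren5 response: a batch of items plus a
// continuation token, where "" marks the final page.
type page struct {
	items     []string
	nextToken string
}

// collectAll mirrors the loop above: request with the previous token,
// accumulate items, and stop when no continuation token is returned.
func collectAll(fetch func(token string) page) []string {
	var all []string
	token := ""
	for {
		p := fetch(token)
		all = append(all, p.items...)
		if p.nextToken == "" {
			break
		}
		token = p.nextToken
	}
	return all
}

func main() {
	pages := map[string]page{
		"":   {items: []string{"a", "b"}, nextToken: "t1"},
		"t1": {items: []string{"c"}, nextToken: ""},
	}
	got := collectAll(func(token string) page { return pages[token] })
	fmt.Println(got) // [a b c]
}
```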
// getOverlay4 fetches metadata for a single item by ID.
func (d *Degoo) getOverlay4(ctx context.Context, id string) (DegooFileItem, error) {
const query = `query GetOverlay4($Token: String!, $ID: IDType!) { getOverlay4(Token: $Token, ID: $ID) { ID ParentID Name Category Size CreationTime LastModificationTime LastUploadTime URL FilePath IsInRecycleBin DeviceID MetadataID } }`
variables := map[string]interface{}{
"Token": d.AccessToken,
"ID": map[string]string{
"FileID": id,
},
}
data, err := d.apiCall(ctx, "GetOverlay4", query, variables)
if err != nil {
return DegooFileItem{}, err
}
var resp DegooGetOverlay4Data
if err := json.Unmarshal(data, &resp); err != nil {
return DegooFileItem{}, fmt.Errorf("failed to parse item metadata: %w", err)
}
return resp.GetOverlay4, nil
}


@@ -1,31 +0,0 @@
package google_drive
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootID
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"string" help:"such as: folder,name,modifiedTime"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc"`
UseOnlineAPI bool `json:"use_online_api" default:"true"`
APIAddress string `json:"api_url_address" default:"https://api.oplist.org/googleui/renewapi"`
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
ChunkSize int64 `json:"chunk_size" type:"number" default:"5" help:"chunk size while uploading (unit: MB)"`
DisableDiskUsage bool `json:"disable_disk_usage" default:"false"`
}
var config = driver.Config{
Name: "GoogleDrive",
OnlyProxy: true,
DefaultRoot: "root",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &GoogleDrive{}
})
}


@@ -1,111 +0,0 @@
package halalcloudopen
import (
"sync"
"time"
sdkUser "github.com/halalcloud/golang-sdk-lite/halalcloud/services/user"
)
var (
slicePostErrorRetryInterval = time.Second * 120
retryTimes = 5
)
type halalCommon struct {
// *AuthService // login information
UserInfo *sdkUser.User // user information
refreshTokenFunc func(token string) error
// serv *AuthService
configs sync.Map
}
func (m *halalCommon) GetAccessToken() (string, error) {
value, exists := m.configs.Load("access_token")
if !exists {
return "", nil // not set: return an empty string
}
return value.(string), nil // return the stored value
}
// GetRefreshToken implements ConfigStore.
func (m *halalCommon) GetRefreshToken() (string, error) {
value, exists := m.configs.Load("refresh_token")
if !exists {
return "", nil // not set: return an empty string
}
return value.(string), nil // return the stored value
}
// SetAccessToken implements ConfigStore.
func (m *halalCommon) SetAccessToken(token string) error {
m.configs.Store("access_token", token)
return nil
}
// SetRefreshToken implements ConfigStore.
func (m *halalCommon) SetRefreshToken(token string) error {
m.configs.Store("refresh_token", token)
if m.refreshTokenFunc != nil {
return m.refreshTokenFunc(token)
}
return nil
}
// SetToken implements ConfigStore.
func (m *halalCommon) SetToken(accessToken string, refreshToken string, expiresIn int64) error {
m.configs.Store("access_token", accessToken)
m.configs.Store("refresh_token", refreshToken)
m.configs.Store("expires_in", expiresIn)
if m.refreshTokenFunc != nil {
return m.refreshTokenFunc(refreshToken)
}
return nil
}
// ClearConfigs implements ConfigStore.
func (m *halalCommon) ClearConfigs() error {
m.configs = sync.Map{} // reset to an empty map
return nil
}
// DeleteConfig implements ConfigStore.
func (m *halalCommon) DeleteConfig(key string) error {
_, exists := m.configs.Load(key)
if !exists {
return nil // nothing to delete
}
m.configs.Delete(key) // remove the entry
return nil
}
// GetConfig implements ConfigStore.
func (m *halalCommon) GetConfig(key string) (string, error) {
value, exists := m.configs.Load(key)
if !exists {
return "", nil // not set: return an empty string
}
return value.(string), nil // return the stored value
}
// ListConfigs implements ConfigStore.
func (m *halalCommon) ListConfigs() (map[string]string, error) {
configs := make(map[string]string)
m.configs.Range(func(key, value interface{}) bool {
configs[key.(string)] = value.(string) // copy each entry into the result map
return true // keep iterating
})
return configs, nil // return all entries
}
// SetConfig implements ConfigStore.
func (m *halalCommon) SetConfig(key string, value string) error {
m.configs.Store(key, value) // set or update the entry
return nil
}
func NewHalalCommon() *halalCommon {
return &halalCommon{
configs: sync.Map{},
}
}
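The config store above keys everything through a `sync.Map` holding `string` values, with missing keys read back as empty strings rather than errors. A minimal standalone sketch of the same Load/Store/Range pattern (the `kvStore` type is illustrative, not from the driver):

```go
package main

import (
	"fmt"
	"sync"
)

// kvStore mirrors halalCommon's pattern: a sync.Map of string values,
// where an absent key reads as "" instead of an error.
type kvStore struct{ m sync.Map }

func (s *kvStore) Get(key string) string {
	v, ok := s.m.Load(key)
	if !ok {
		return "" // absent keys read as empty, as in GetAccessToken above
	}
	return v.(string)
}

func (s *kvStore) Set(key, value string) { s.m.Store(key, value) }

// All snapshots every entry via Range, like ListConfigs above.
func (s *kvStore) All() map[string]string {
	out := make(map[string]string)
	s.m.Range(func(k, v interface{}) bool {
		out[k.(string)] = v.(string)
		return true // keep iterating
	})
	return out
}

func main() {
	var s kvStore
	s.Set("access_token", "abc")
	fmt.Println(s.Get("access_token"), len(s.All()))
}
```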

View File

@@ -1,29 +0,0 @@
package halalcloudopen
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
sdkClient "github.com/halalcloud/golang-sdk-lite/halalcloud/apiclient"
sdkUser "github.com/halalcloud/golang-sdk-lite/halalcloud/services/user"
sdkUserFile "github.com/halalcloud/golang-sdk-lite/halalcloud/services/userfile"
)
type HalalCloudOpen struct {
*halalCommon
model.Storage
Addition
sdkClient *sdkClient.Client
sdkUserFileService *sdkUserFile.UserFileService
sdkUserService *sdkUser.UserService
uploadThread int
}
func (d *HalalCloudOpen) Config() driver.Config {
return config
}
func (d *HalalCloudOpen) GetAddition() driver.Additional {
return &d.Addition
}
var _ driver.Driver = (*HalalCloudOpen)(nil)


@@ -1,131 +0,0 @@
package halalcloudopen
import (
"context"
"strconv"
"github.com/OpenListTeam/OpenList/v4/internal/model"
sdkModel "github.com/halalcloud/golang-sdk-lite/halalcloud/model"
sdkUserFile "github.com/halalcloud/golang-sdk-lite/halalcloud/services/userfile"
)
func (d *HalalCloudOpen) getFiles(ctx context.Context, dir model.Obj) ([]model.Obj, error) {
files := make([]model.Obj, 0)
limit := int64(100)
token := ""
for {
result, err := d.sdkUserFileService.List(ctx, &sdkUserFile.FileListRequest{
Parent: &sdkUserFile.File{Path: dir.GetPath()},
ListInfo: &sdkModel.ScanListRequest{
Limit: strconv.FormatInt(limit, 10),
Token: token,
},
})
if err != nil {
return nil, err
}
for _, f := range result.Files {
files = append(files, NewObjFile(f))
}
if result.ListInfo == nil || result.ListInfo.Token == "" {
break
}
token = result.ListInfo.Token
}
return files, nil
}
func (d *HalalCloudOpen) makeDir(ctx context.Context, dir model.Obj, name string) (model.Obj, error) {
_, err := d.sdkUserFileService.Create(ctx, &sdkUserFile.File{
Path: dir.GetPath(),
Name: name,
})
return nil, err
}
func (d *HalalCloudOpen) move(ctx context.Context, obj model.Obj, dir model.Obj) (model.Obj, error) {
oldDir := obj.GetPath()
newDir := dir.GetPath()
_, err := d.sdkUserFileService.Move(ctx, &sdkUserFile.BatchOperationRequest{
Source: []*sdkUserFile.File{
{
Path: oldDir,
},
},
Dest: &sdkUserFile.File{
Path: newDir,
},
})
return nil, err
}
func (d *HalalCloudOpen) rename(ctx context.Context, obj model.Obj, name string) (model.Obj, error) {
_, err := d.sdkUserFileService.Rename(ctx, &sdkUserFile.File{
Path: obj.GetPath(),
Name: name,
})
return nil, err
}
func (d *HalalCloudOpen) copy(ctx context.Context, obj model.Obj, dir model.Obj) (model.Obj, error) {
id := obj.GetID()
sourcePath := obj.GetPath()
if len(id) > 0 {
sourcePath = ""
}
destID := dir.GetID()
destPath := dir.GetPath()
if len(destID) > 0 {
destPath = ""
}
dest := &sdkUserFile.File{
Path: destPath,
Identity: destID,
}
_, err := d.sdkUserFileService.Copy(ctx, &sdkUserFile.BatchOperationRequest{
Source: []*sdkUserFile.File{
{
Path: sourcePath,
Identity: id,
},
},
Dest: dest,
})
return nil, err
}
func (d *HalalCloudOpen) remove(ctx context.Context, obj model.Obj) error {
id := obj.GetID()
_, err := d.sdkUserFileService.Delete(ctx, &sdkUserFile.BatchOperationRequest{
Source: []*sdkUserFile.File{
{
Identity: id,
Path: obj.GetPath(),
},
},
})
return err
}
func (d *HalalCloudOpen) details(ctx context.Context) (*model.StorageDetails, error) {
ret, err := d.sdkUserService.GetStatisticsAndQuota(ctx)
if err != nil {
return nil, err
}
total := uint64(ret.DiskStatisticsQuota.BytesQuota)
free := uint64(ret.DiskStatisticsQuota.BytesFree)
return &model.StorageDetails{
DiskUsage: model.DiskUsage{
TotalSpace: total,
FreeSpace: free,
},
}, nil
}


@@ -1,108 +0,0 @@
package halalcloudopen
import (
"context"
"crypto/sha1"
"io"
"strconv"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
sdkUserFile "github.com/halalcloud/golang-sdk-lite/halalcloud/services/userfile"
"github.com/rclone/rclone/lib/readers"
)
func (d *HalalCloudOpen) getLink(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if args.Redirect {
// return nil, model.ErrUnsupported
fid := file.GetID()
fpath := file.GetPath()
if fid != "" {
fpath = ""
}
fi, err := d.sdkUserFileService.GetDirectDownloadAddress(ctx, &sdkUserFile.DirectDownloadRequest{
Identity: fid,
Path: fpath,
})
if err != nil {
return nil, err
}
expireAt := fi.ExpireAt
duration := time.Until(time.UnixMilli(expireAt))
return &model.Link{
URL: fi.DownloadAddress,
Expiration: &duration,
}, nil
}
result, err := d.sdkUserFileService.ParseFileSlice(ctx, &sdkUserFile.File{
Identity: file.GetID(),
Path: file.GetPath(),
})
if err != nil {
return nil, err
}
fileAddrs := []*sdkUserFile.SliceDownloadInfo{}
var addressDuration int64
nodesNumber := len(result.RawNodes)
// Request slice download addresses in batches of at most 200 nodes.
for startIndex := 0; startIndex < nodesNumber; {
endIndex := startIndex + 200
if endIndex > nodesNumber {
endIndex = nodesNumber
}
sliceAddress, err := d.sdkUserFileService.GetSliceDownloadAddress(ctx, &sdkUserFile.SliceDownloadAddressRequest{
Identity: result.RawNodes[startIndex:endIndex],
Version: 1,
})
if err != nil {
return nil, err
}
addressDuration, _ = strconv.ParseInt(sliceAddress.ExpireAt, 10, 64)
fileAddrs = append(fileAddrs, sliceAddress.Addresses...)
startIndex = endIndex
}
size, _ := strconv.ParseInt(result.FileSize, 10, 64)
chunks := getChunkSizes(result.Sizes)
resultRangeReader := func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
length := httpRange.Length
if httpRange.Length < 0 || httpRange.Start+httpRange.Length >= size {
length = size - httpRange.Start
}
oo := &openObject{
ctx: ctx,
d: fileAddrs,
chunk: []byte{},
chunks: chunks,
skip: httpRange.Start,
sha: result.Sha1,
shaTemp: sha1.New(),
}
return readers.NewLimitedReadCloser(oo, length), nil
}
var duration time.Duration
if addressDuration != 0 {
duration = time.Until(time.UnixMilli(addressDuration))
} else {
duration = time.Hour
}
return &model.Link{
RangeReader: stream.RateLimitRangeReaderFunc(resultRangeReader),
Expiration: &duration,
}, nil
}


@@ -1,50 +0,0 @@
package halalcloudopen
import (
"context"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/halalcloud/golang-sdk-lite/halalcloud/apiclient"
sdkUser "github.com/halalcloud/golang-sdk-lite/halalcloud/services/user"
sdkUserFile "github.com/halalcloud/golang-sdk-lite/halalcloud/services/userfile"
)
func (d *HalalCloudOpen) Init(ctx context.Context) error {
if d.uploadThread < 1 || d.uploadThread > 32 {
d.uploadThread, d.UploadThread = 3, 3
}
if d.halalCommon == nil {
d.halalCommon = &halalCommon{
UserInfo: &sdkUser.User{},
refreshTokenFunc: func(token string) error {
d.Addition.RefreshToken = token
op.MustSaveDriverStorage(d)
return nil
},
}
}
if d.Addition.RefreshToken != "" {
d.halalCommon.SetRefreshToken(d.Addition.RefreshToken)
}
timeout := d.Addition.TimeOut
if timeout <= 0 {
timeout = 60
}
host := d.Addition.Host
if host == "" {
host = "openapi.2dland.cn"
}
client := apiclient.NewClient(nil, host, d.Addition.ClientID, d.Addition.ClientSecret, d.halalCommon, apiclient.WithTimeout(time.Second*time.Duration(timeout)))
d.sdkClient = client
d.sdkUserFileService = sdkUserFile.NewUserFileService(client)
d.sdkUserService = sdkUser.NewUserService(client)
userInfo, err := d.sdkUserService.Get(ctx, &sdkUser.User{})
if err != nil {
return err
}
d.halalCommon.UserInfo = userInfo
// Fetching the user info succeeded, which already proves the RefreshToken is valid; no separate check is needed.
return nil
}


@@ -1,48 +0,0 @@
package halalcloudopen
import (
"context"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
)
func (d *HalalCloudOpen) Drop(ctx context.Context) error {
return nil
}
func (d *HalalCloudOpen) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
return d.getFiles(ctx, dir)
}
func (d *HalalCloudOpen) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
return d.getLink(ctx, file, args)
}
func (d *HalalCloudOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
return d.makeDir(ctx, parentDir, dirName)
}
func (d *HalalCloudOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return d.move(ctx, srcObj, dstDir)
}
func (d *HalalCloudOpen) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
return d.rename(ctx, srcObj, newName)
}
func (d *HalalCloudOpen) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return d.copy(ctx, srcObj, dstDir)
}
func (d *HalalCloudOpen) Remove(ctx context.Context, obj model.Obj) error {
return d.remove(ctx, obj)
}
func (d *HalalCloudOpen) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
return d.put(ctx, dstDir, stream, up)
}
func (d *HalalCloudOpen) GetDetails(ctx context.Context) (*model.StorageDetails, error) {
return d.details(ctx)
}


@@ -1,258 +0,0 @@
package halalcloudopen
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
sdkUserFile "github.com/halalcloud/golang-sdk-lite/halalcloud/services/userfile"
"github.com/ipfs/go-cid"
)
func (d *HalalCloudOpen) put(ctx context.Context, dstDir model.Obj, fileStream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
newPath := path.Join(dstDir.GetPath(), fileStream.GetName())
uploadTask, err := d.sdkUserFileService.CreateUploadTask(ctx, &sdkUserFile.File{
Path: newPath,
Size: fileStream.GetSize(),
})
if err != nil {
return nil, err
}
if uploadTask.Created {
return nil, nil
}
slicesList := make([]string, 0)
codec := uint64(0x55)
if uploadTask.BlockCodec > 0 {
codec = uint64(uploadTask.BlockCodec)
}
blockHashType := uploadTask.BlockHashType
mhType := uint64(0x12)
if blockHashType > 0 {
mhType = uint64(blockHashType)
}
prefix := cid.Prefix{
Codec: codec,
MhLength: -1,
MhType: mhType,
Version: 1,
}
blockSize := uploadTask.BlockSize
useSingleUpload := true
//
if fileStream.GetSize() <= int64(blockSize) || d.uploadThread <= 1 {
useSingleUpload = true
}
// Not sure whether FileStream supports concurrent read and write operations, so currently using single-threaded upload to ensure safety.
// read file
if useSingleUpload {
bufferSize := int(blockSize)
buffer := make([]byte, bufferSize)
reader := driver.NewLimitedUploadStream(ctx, fileStream)
teeReader := io.TeeReader(reader, driver.NewProgress(fileStream.GetSize(), up))
// fileStream.Seek(0, os.SEEK_SET)
for {
n, err := teeReader.Read(buffer)
if n > 0 {
data := buffer[:n]
uploadCid, err := postFileSlice(ctx, data, uploadTask.Task, uploadTask.UploadAddress, prefix, retryTimes)
if err != nil {
return nil, err
}
slicesList = append(slicesList, uploadCid.String())
}
if err != nil {
if err == io.EOF {
break
}
return nil, err
}
if n == 0 {
break
}
}
} else {
// TODO: implement multipart upload, currently using single-threaded upload to ensure safety.
bufferSize := int(blockSize)
buffer := make([]byte, bufferSize)
reader := driver.NewLimitedUploadStream(ctx, fileStream)
teeReader := io.TeeReader(reader, driver.NewProgress(fileStream.GetSize(), up))
for {
n, err := teeReader.Read(buffer)
if n > 0 {
data := buffer[:n]
uploadCid, err := postFileSlice(ctx, data, uploadTask.Task, uploadTask.UploadAddress, prefix, retryTimes)
if err != nil {
return nil, err
}
slicesList = append(slicesList, uploadCid.String())
}
if err != nil {
if err == io.EOF {
break
}
return nil, err
}
if n == 0 {
break
}
}
}
newFile, err := makeFile(ctx, slicesList, uploadTask.Task, uploadTask.UploadAddress, retryTimes)
if err != nil {
return nil, err
}
return NewObjFile(newFile), nil
}
func makeFile(ctx context.Context, fileSlice []string, taskID string, uploadAddress string, retry int) (*sdkUserFile.File, error) {
var lastError error
for range retry {
newFile, err := doMakeFile(fileSlice, taskID, uploadAddress)
if err == nil {
return newFile, nil
}
if ctx.Err() != nil {
return nil, err
}
if strings.Contains(err.Error(), "not found") {
return nil, err
}
lastError = err
time.Sleep(slicePostErrorRetryInterval)
}
return nil, fmt.Errorf("mk file slice failed after %d times, error: %s", retry, lastError.Error())
}
func doMakeFile(fileSlice []string, taskID string, uploadAddress string) (*sdkUserFile.File, error) {
accessUrl := uploadAddress + "/" + taskID
getTimeOut := time.Minute * 2
u, err := url.Parse(accessUrl)
if err != nil {
return nil, err
}
n, _ := json.Marshal(fileSlice)
httpRequest := http.Request{
Method: http.MethodPost,
URL: u,
Header: map[string][]string{
"Accept": {"application/json"},
"Content-Type": {"application/json"},
//"Content-Length": {strconv.Itoa(len(n))},
},
Body: io.NopCloser(bytes.NewReader(n)),
}
httpClient := http.Client{
Timeout: getTimeOut,
}
httpResponse, err := httpClient.Do(&httpRequest)
if err != nil {
return nil, err
}
defer httpResponse.Body.Close()
if httpResponse.StatusCode != http.StatusOK && httpResponse.StatusCode != http.StatusCreated {
b, _ := io.ReadAll(httpResponse.Body)
message := string(b)
return nil, fmt.Errorf("mk file slice failed, status code: %d, message: %s", httpResponse.StatusCode, message)
}
b, _ := io.ReadAll(httpResponse.Body)
var result *sdkUserFile.File
err = json.Unmarshal(b, &result)
if err != nil {
return nil, err
}
return result, nil
}
func postFileSlice(ctx context.Context, fileSlice []byte, taskID string, uploadAddress string, prefix cid.Prefix, retry int) (cid.Cid, error) {
var lastError error
for range retry {
newCid, err := doPostFileSlice(fileSlice, taskID, uploadAddress, prefix)
if err == nil {
return newCid, nil
}
if ctx.Err() != nil {
return cid.Undef, err
}
time.Sleep(slicePostErrorRetryInterval)
lastError = err
}
return cid.Undef, fmt.Errorf("upload file slice failed after %d times, error: %s", retry, lastError.Error())
}
func doPostFileSlice(fileSlice []byte, taskID string, uploadAddress string, prefix cid.Prefix) (cid.Cid, error) {
// 1. sum the file slice to derive its CID
newCid, err := prefix.Sum(fileSlice)
if err != nil {
return cid.Undef, err
}
// 2. post file slice
sliceCidString := newCid.String()
// /{taskID}/{sliceID}
accessUrl := uploadAddress + "/" + taskID + "/" + sliceCidString
getTimeOut := time.Second * 30
// get {accessUrl} in {getTimeOut}
u, err := url.Parse(accessUrl)
if err != nil {
return cid.Undef, err
}
// Probe the slice with a GET first: the server replies with a JSON
// boolean indicating whether it already holds this CID, so an
// existing slice can be skipped without re-uploading the bytes.
httpRequest := http.Request{
Method: http.MethodGet,
URL: u,
Header: map[string][]string{
"Accept": {"application/json"},
},
}
httpClient := http.Client{
Timeout: getTimeOut,
}
httpResponse, err := httpClient.Do(&httpRequest)
if err != nil {
return cid.Undef, err
}
b, err := io.ReadAll(httpResponse.Body)
httpResponse.Body.Close() // close the GET response before any early return
if err != nil {
return cid.Undef, err
}
if httpResponse.StatusCode != http.StatusOK {
return cid.Undef, fmt.Errorf("upload file slice failed, status code: %d", httpResponse.StatusCode)
}
var result bool
err = json.Unmarshal(b, &result)
if err != nil {
return cid.Undef, err
}
if result {
return newCid, nil
}
httpRequest = http.Request{
Method: http.MethodPost,
URL: u,
Header: map[string][]string{
"Accept": {"application/json"},
"Content-Type": {"application/octet-stream"},
// "Content-Length": {strconv.Itoa(len(fileSlice))},
},
Body: io.NopCloser(bytes.NewReader(fileSlice)),
}
httpResponse, err = httpClient.Do(&httpRequest)
if err != nil {
return cid.Undef, err
}
defer httpResponse.Body.Close()
if httpResponse.StatusCode != http.StatusOK && httpResponse.StatusCode != http.StatusCreated {
b, _ := io.ReadAll(httpResponse.Body)
message := string(b)
return cid.Undef, fmt.Errorf("upload file slice failed, status code: %d, message: %s", httpResponse.StatusCode, message)
}
//
return newCid, nil
}


@@ -1,32 +0,0 @@
package halalcloudopen
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootPath
// define other
RefreshToken string `json:"refresh_token" required:"false" help:"If using a personal API approach, the RefreshToken is not required."`
UploadThread int `json:"upload_thread" type:"number" default:"3" help:"1 <= thread <= 32"`
ClientID string `json:"client_id" required:"true" default:""`
ClientSecret string `json:"client_secret" required:"true" default:""`
Host string `json:"host" required:"false" default:"openapi.2dland.cn"`
TimeOut int `json:"timeout" type:"number" default:"60" help:"timeout in seconds"`
}
var config = driver.Config{
Name: "HalalCloudOpen",
OnlyProxy: false,
DefaultRoot: "/",
NoLinkURL: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &HalalCloudOpen{}
})
}


@@ -1,60 +0,0 @@
package halalcloudopen
import (
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
sdkUserFile "github.com/halalcloud/golang-sdk-lite/halalcloud/services/userfile"
)
type ObjFile struct {
sdkFile *sdkUserFile.File
fileSize int64
modTime time.Time
createTime time.Time
}
func NewObjFile(f *sdkUserFile.File) model.Obj {
ofile := &ObjFile{sdkFile: f}
ofile.fileSize = f.Size
modTimeTs := f.UpdateTs
ofile.modTime = time.UnixMilli(modTimeTs)
createTimeTs := f.CreateTs
ofile.createTime = time.UnixMilli(createTimeTs)
return ofile
}
func (f *ObjFile) GetSize() int64 {
return f.fileSize
}
func (f *ObjFile) GetName() string {
return f.sdkFile.Name
}
func (f *ObjFile) ModTime() time.Time {
return f.modTime
}
func (f *ObjFile) IsDir() bool {
return f.sdkFile.Dir
}
func (f *ObjFile) GetHash() utils.HashInfo {
return utils.HashInfo{
// TODO: support more hash types
}
}
func (f *ObjFile) GetID() string {
return f.sdkFile.Identity
}
func (f *ObjFile) GetPath() string {
return f.sdkFile.Path
}
func (f *ObjFile) CreateTime() time.Time {
return f.createTime
}


@@ -1,185 +0,0 @@
package halalcloudopen
import (
"context"
"crypto/md5"
"encoding/hex"
"errors"
"fmt"
"hash"
"io"
"net/http"
"sync"
"time"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
sdkUserFile "github.com/halalcloud/golang-sdk-lite/halalcloud/services/userfile"
"github.com/ipfs/go-cid"
)
// get the next chunk
func (oo *openObject) getChunk(_ context.Context) (err error) {
if oo.id >= len(oo.chunks) {
return io.EOF
}
var chunk []byte
err = utils.Retry(3, time.Second, func() (err error) {
chunk, err = getRawFiles(oo.d[oo.id])
return err
})
if err != nil {
return err
}
oo.id++
oo.chunk = chunk
return nil
}
// Read reads up to len(p) bytes into p.
func (oo *openObject) Read(p []byte) (n int, err error) {
oo.mu.Lock()
defer oo.mu.Unlock()
if oo.closed {
return 0, fmt.Errorf("read on closed file")
}
// Skip data at the start if requested
for oo.skip > 0 {
//size := 1024 * 1024
_, size, err := oo.ChunkLocation(oo.id)
if err != nil {
return 0, err
}
if oo.skip < int64(size) {
break
}
oo.id++
oo.skip -= int64(size)
}
if len(oo.chunk) == 0 {
err = oo.getChunk(oo.ctx)
if err != nil {
return 0, err
}
if oo.skip > 0 {
oo.chunk = (oo.chunk)[oo.skip:]
oo.skip = 0
}
}
n = copy(p, oo.chunk)
oo.shaTemp.Write(p[:n])
oo.chunk = (oo.chunk)[n:]
return n, nil
}
// Close closed the file - MAC errors are reported here
func (oo *openObject) Close() (err error) {
oo.mu.Lock()
defer oo.mu.Unlock()
if oo.closed {
return nil
}
// Verify the SHA1 checksum of everything that was read.
if hex.EncodeToString(oo.shaTemp.Sum(nil)) != oo.sha {
return fmt.Errorf("failed to finish download: SHA mismatch")
}
oo.closed = true
return nil
}
func GetMD5Hash(text string) string {
tHash := md5.Sum([]byte(text))
return hex.EncodeToString(tHash[:])
}
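`GetMD5Hash` is a thin wrapper over `crypto/md5` plus hex encoding. An equivalent standalone sketch, checked against the well-known empty-string digest (`hexMD5` is an illustrative name):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// hexMD5 does exactly what GetMD5Hash above does: MD5 the input and
// return the digest as a lowercase hex string.
func hexMD5(text string) string {
	sum := md5.Sum([]byte(text))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(hexMD5("")) // d41d8cd98f00b204e9800998ecf8427e
}
```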
type chunkSize struct {
position int64
size int
}
type openObject struct {
ctx context.Context
mu sync.Mutex
d []*sdkUserFile.SliceDownloadInfo
id int
skip int64
chunk []byte
chunks []chunkSize
closed bool
sha string
shaTemp hash.Hash
}
func getChunkSizes(sliceSize []*sdkUserFile.SliceSize) (chunks []chunkSize) {
chunks = make([]chunkSize, 0)
for _, s := range sliceSize {
// Special-case the last slice: an EndIndex of 0 collapses to StartIndex
endIndex := s.EndIndex
startIndex := s.StartIndex
if endIndex == 0 {
endIndex = startIndex
}
for j := startIndex; j <= endIndex; j++ {
size := s.Size
chunks = append(chunks, chunkSize{position: j, size: int(size)})
}
}
return chunks
}
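getChunkSizes expands each slice's index range into one chunkSize entry per index. A self-contained sketch of that expansion, where sliceSize is a hypothetical stand-in for sdkUserFile.SliceSize:

```go
package main

import "fmt"

// sliceSize is a hypothetical stand-in for sdkUserFile.SliceSize.
type sliceSize struct {
	StartIndex, EndIndex, Size int64
}

type chunkSize struct {
	position int64
	size     int
}

// expand mirrors getChunkSizes: an EndIndex of 0 collapses to StartIndex,
// and every index in [StartIndex, EndIndex] becomes one chunk entry.
func expand(slices []sliceSize) (chunks []chunkSize) {
	for _, s := range slices {
		end := s.EndIndex
		if end == 0 {
			end = s.StartIndex
		}
		for j := s.StartIndex; j <= end; j++ {
			chunks = append(chunks, chunkSize{position: j, size: int(s.Size)})
		}
	}
	return chunks
}

func main() {
	chunks := expand([]sliceSize{
		{StartIndex: 0, EndIndex: 2, Size: 4 << 20}, // indices 0, 1, 2
		{StartIndex: 3, Size: 1 << 20},              // EndIndex 0 -> single index 3
	})
	fmt.Println(len(chunks)) // 4
}
```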
func (oo *openObject) ChunkLocation(id int) (position int64, size int, err error) {
if id < 0 || id >= len(oo.chunks) {
return 0, 0, errors.New("invalid arguments")
}
return (oo.chunks)[id].position, (oo.chunks)[id].size, nil
}
func getRawFiles(addr *sdkUserFile.SliceDownloadInfo) ([]byte, error) {
if addr == nil {
return nil, errors.New("addr is nil")
}
client := http.Client{
Timeout: 60 * time.Second, // Set timeout to 60 seconds
}
resp, err := client.Get(addr.DownloadAddress)
if err != nil {
return nil, err
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("bad status: %s, body: %s", resp.Status, body)
}
if addr.Encrypt > 0 {
cd := uint8(addr.Encrypt)
for idx := 0; idx < len(body); idx++ {
body[idx] = body[idx] ^ cd
}
}
storeType := addr.StoreType
if storeType != 10 {
sourceCid, err := cid.Decode(addr.Identity)
if err != nil {
return nil, err
}
checkCid, err := sourceCid.Prefix().Sum(body)
if err != nil {
return nil, err
}
if !checkCid.Equals(sourceCid) {
return nil, fmt.Errorf("bad cid: %s, body: %s", checkCid.String(), body)
}
}
return body, nil
}

View File

@@ -1,29 +0,0 @@
//go:build !windows
package local
import (
"io/fs"
"strings"
"syscall"
"github.com/OpenListTeam/OpenList/v4/internal/model"
)
func isHidden(f fs.FileInfo, _ string) bool {
return strings.HasPrefix(f.Name(), ".")
}
func getDiskUsage(path string) (model.DiskUsage, error) {
var stat syscall.Statfs_t
err := syscall.Statfs(path, &stat)
if err != nil {
return model.DiskUsage{}, err
}
total := stat.Blocks * uint64(stat.Bsize)
free := stat.Bfree * uint64(stat.Bsize)
return model.DiskUsage{
TotalSpace: total,
FreeSpace: free,
}, nil
}
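Statfs reports counts in filesystem blocks, so both figures must be scaled by the block size to get bytes. A quick sketch of that conversion with illustrative numbers:

```go
package main

import "fmt"

// toBytes mirrors the arithmetic in getDiskUsage: a block count times the block size.
func toBytes(blocks, bsize uint64) uint64 {
	return blocks * bsize
}

func main() {
	// Illustrative values: 1,048,576 total blocks and 262,144 free blocks of 4 KiB each.
	total := toBytes(1048576, 4096) // 4 GiB
	free := toBytes(262144, 4096)   // 1 GiB
	fmt.Println(total, free)
}
```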

View File

@@ -1,51 +0,0 @@
//go:build windows
package local
import (
"errors"
"io/fs"
"path/filepath"
"syscall"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"golang.org/x/sys/windows"
)
func isHidden(f fs.FileInfo, fullPath string) bool {
filePath := filepath.Join(fullPath, f.Name())
namePtr, err := syscall.UTF16PtrFromString(filePath)
if err != nil {
return false
}
attrs, err := syscall.GetFileAttributes(namePtr)
if err != nil {
return false
}
return attrs&syscall.FILE_ATTRIBUTE_HIDDEN != 0
}
func getDiskUsage(path string) (model.DiskUsage, error) {
abs, err := filepath.Abs(path)
if err != nil {
return model.DiskUsage{}, err
}
root := filepath.VolumeName(abs)
if len(root) != 2 || root[1] != ':' {
return model.DiskUsage{}, errors.New("cannot get disk label")
}
var freeBytes, totalBytes, totalFreeBytes uint64
err = windows.GetDiskFreeSpaceEx(
windows.StringToUTF16Ptr(root),
&freeBytes,
&totalBytes,
&totalFreeBytes,
)
if err != nil {
return model.DiskUsage{}, err
}
return model.DiskUsage{
TotalSpace: totalBytes,
FreeSpace: freeBytes,
}, nil
}

View File

@@ -1,431 +0,0 @@
package mediafire
/*
Package mediafire
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-11
D@' 3z K!7 - The King Of Cracking
Modifications by ILoveScratch2<ilovescratch@foxmail.com>
Date: 2025-09-21
Date: 2025-09-26
Final opts by @Suyunjing @j2rong4cn @KirCute @Da3zKi7
*/
import (
"context"
"fmt"
"math/rand"
"net/http"
"strconv"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/cron"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"golang.org/x/time/rate"
)
type Mediafire struct {
model.Storage
Addition
cron *cron.Cron
actionToken string
limiter *rate.Limiter
appBase string
apiBase string
hostBase string
maxRetries int
secChUa string
secChUaPlatform string
userAgent string
}
func (d *Mediafire) Config() driver.Config {
return config
}
func (d *Mediafire) GetAddition() driver.Additional {
return &d.Addition
}
// Init initializes the MediaFire driver with session token and cookie validation
func (d *Mediafire) Init(ctx context.Context) error {
if d.SessionToken == "" {
return fmt.Errorf("Init :: [MediaFire] {critical} missing sessionToken")
}
if d.Cookie == "" {
return fmt.Errorf("Init :: [MediaFire] {critical} missing Cookie")
}
// Setup rate limiter if rate limit is configured
if d.LimitRate > 0 {
d.limiter = rate.NewLimiter(rate.Limit(d.LimitRate), 1)
}
// Validate and refresh session token if needed
if _, err := d.getSessionToken(ctx); err != nil {
d.renewToken(ctx)
// Refresh every 6 to 9 minutes to stay ahead of the 10-minute token expiry
num := rand.Intn(4) + 6
d.cron = cron.NewCron(time.Minute * time.Duration(num))
d.cron.Do(func() {
// Unconventional, but a working way to refresh the session token
d.renewToken(ctx)
})
}
return nil
}
// Drop cleans up driver resources
func (d *Mediafire) Drop(ctx context.Context) error {
// Clear cached resources
d.actionToken = ""
if d.cron != nil {
d.cron.Stop()
d.cron = nil
}
return nil
}
// List retrieves files and folders from the specified directory
func (d *Mediafire) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
files, err := d.getFiles(ctx, dir.GetID())
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return d.fileToObj(src), nil
})
}
// Link generates a direct download link for the specified file
func (d *Mediafire) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
downloadUrl, err := d.getDirectDownloadLink(ctx, file.GetID())
if err != nil {
return nil, err
}
res, err := base.NoRedirectClient.R().SetDoNotParseResponse(true).SetContext(ctx).Head(downloadUrl)
if err != nil {
return nil, err
}
defer func() {
_ = res.RawBody().Close()
}()
if res.StatusCode() == 302 {
downloadUrl = res.Header().Get("location")
}
return &model.Link{
URL: downloadUrl,
Header: http.Header{
"Origin": []string{d.appBase},
"Referer": []string{d.appBase + "/"},
"sec-ch-ua": []string{d.secChUa},
"sec-ch-ua-platform": []string{d.secChUaPlatform},
"User-Agent": []string{d.userAgent},
},
}, nil
}
// MakeDir creates a new folder in the specified parent directory
func (d *Mediafire) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
data := map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"parent_key": parentDir.GetID(),
"foldername": dirName,
}
var resp MediafireFolderCreateResponse
_, err := d.postForm(ctx, "/folder/create.php", data, &resp)
if err != nil {
return nil, err
}
if err := checkAPIResult(resp.Response.Result); err != nil {
return nil, err
}
created, _ := time.Parse("2006-01-02T15:04:05Z", resp.Response.CreatedUTC)
return &model.Object{
ID: resp.Response.FolderKey,
Name: resp.Response.Name,
Size: 0,
Modified: created,
Ctime: created,
IsFolder: true,
}, nil
}
// Move relocates a file or folder to a different parent directory
func (d *Mediafire) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
var data map[string]string
var endpoint string
if srcObj.IsDir() {
endpoint = "/folder/move.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key_src": srcObj.GetID(),
"folder_key_dst": dstDir.GetID(),
}
} else {
endpoint = "/file/move.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"quick_key": srcObj.GetID(),
"folder_key": dstDir.GetID(),
}
}
var resp MediafireMoveResponse
_, err := d.postForm(ctx, endpoint, data, &resp)
if err != nil {
return nil, err
}
if err := checkAPIResult(resp.Response.Result); err != nil {
return nil, err
}
return srcObj, nil
}
// Rename changes the name of a file or folder
func (d *Mediafire) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
var data map[string]string
var endpoint string
if srcObj.IsDir() {
endpoint = "/folder/update.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key": srcObj.GetID(),
"foldername": newName,
}
} else {
endpoint = "/file/update.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"quick_key": srcObj.GetID(),
"filename": newName,
}
}
var resp MediafireRenameResponse
_, err := d.postForm(ctx, endpoint, data, &resp)
if err != nil {
return nil, err
}
if err := checkAPIResult(resp.Response.Result); err != nil {
return nil, err
}
return &model.Object{
ID: srcObj.GetID(),
Name: newName,
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
Ctime: srcObj.CreateTime(),
IsFolder: srcObj.IsDir(),
}, nil
}
// Copy creates a duplicate of a file or folder in the specified destination directory
func (d *Mediafire) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
var data map[string]string
var endpoint string
if srcObj.IsDir() {
endpoint = "/folder/copy.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key_src": srcObj.GetID(),
"folder_key_dst": dstDir.GetID(),
}
} else {
endpoint = "/file/copy.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"quick_key": srcObj.GetID(),
"folder_key": dstDir.GetID(),
}
}
var resp MediafireCopyResponse
_, err := d.postForm(ctx, endpoint, data, &resp)
if err != nil {
return nil, err
}
if err := checkAPIResult(resp.Response.Result); err != nil {
return nil, err
}
var newID string
if srcObj.IsDir() {
if len(resp.Response.NewFolderKeys) > 0 {
newID = resp.Response.NewFolderKeys[0]
}
} else {
if len(resp.Response.NewQuickKeys) > 0 {
newID = resp.Response.NewQuickKeys[0]
}
}
return &model.Object{
ID: newID,
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
Ctime: srcObj.CreateTime(),
IsFolder: srcObj.IsDir(),
}, nil
}
// Remove deletes a file or folder permanently
func (d *Mediafire) Remove(ctx context.Context, obj model.Obj) error {
var data map[string]string
var endpoint string
if obj.IsDir() {
endpoint = "/folder/delete.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key": obj.GetID(),
}
} else {
endpoint = "/file/delete.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"quick_key": obj.GetID(),
}
}
var resp MediafireRemoveResponse
_, err := d.postForm(ctx, endpoint, data, &resp)
if err != nil {
return err
}
return checkAPIResult(resp.Response.Result)
}
// Put uploads a file to the specified directory with support for resumable upload and quick upload
func (d *Mediafire) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
fileHash := file.GetHash().GetHash(utils.SHA256)
var err error
// Try to use existing hash first, cache only if necessary
if len(fileHash) != utils.SHA256.Width {
_, fileHash, err = stream.CacheFullAndHash(file, &up, utils.SHA256)
if err != nil {
return nil, err
}
}
checkResp, err := d.uploadCheck(ctx, file.GetName(), file.GetSize(), fileHash, dstDir.GetID())
if err != nil {
return nil, err
}
if checkResp.Response.HashExists == "yes" && checkResp.Response.InAccount == "yes" {
up(100.0)
existingFile, err := d.getExistingFileInfo(ctx, fileHash, file.GetName(), dstDir.GetID())
if err == nil && existingFile != nil {
// File exists, return existing file info
return &model.Object{
ID: existingFile.GetID(),
Name: file.GetName(),
Size: file.GetSize(),
}, nil
}
// If getExistingFileInfo fails, log and continue with normal upload
// This ensures upload doesn't fail due to search issues
}
var pollKey string
if checkResp.Response.ResumableUpload.AllUnitsReady != "yes" {
pollKey, err = d.uploadUnits(ctx, file, checkResp, file.GetName(), fileHash, dstDir.GetID(), up)
if err != nil {
return nil, err
}
} else {
pollKey = checkResp.Response.ResumableUpload.UploadKey
up(100.0)
}
pollResp, err := d.pollUpload(ctx, pollKey)
if err != nil {
return nil, err
}
return &model.Object{
ID: pollResp.Response.Doupload.QuickKey,
Name: file.GetName(),
Size: file.GetSize(),
}, nil
}
func (d *Mediafire) GetDetails(ctx context.Context) (*model.StorageDetails, error) {
data := map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
}
var resp MediafireUserInfoResponse
_, err := d.postForm(ctx, "/user/get_info.php", data, &resp)
if err != nil {
return nil, err
}
used, err := strconv.ParseUint(resp.Response.UserInfo.UsedStorageSize, 10, 64)
if err != nil {
return nil, err
}
total, err := strconv.ParseUint(resp.Response.UserInfo.StorageLimit, 10, 64)
if err != nil {
return nil, err
}
return &model.StorageDetails{
DiskUsage: model.DiskUsage{
TotalSpace: total,
FreeSpace: total - used,
},
}, nil
}
var _ driver.Driver = (*Mediafire)(nil)

View File

@@ -1,61 +0,0 @@
package mediafire
/*
Package mediafire
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-11
D@' 3z K!7 - The King Of Cracking
Modifications by ILoveScratch2<ilovescratch@foxmail.com>
Date: 2025-09-21
Date: 2025-09-26
Final opts by @Suyunjing @j2rong4cn @KirCute @Da3zKi7
*/
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
//driver.RootID
SessionToken string `json:"session_token" required:"true" type:"string" help:"Required for MediaFire API"`
Cookie string `json:"cookie" required:"true" type:"string" help:"Required for navigation"`
OrderBy string `json:"order_by" type:"select" options:"name,time,size" default:"name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
ChunkSize int64 `json:"chunk_size" type:"number" default:"100"`
UploadThreads int `json:"upload_threads" type:"number" default:"3" help:"concurrent upload threads"`
LimitRate float64 `json:"limit_rate" type:"float" default:"2" help:"limit all api request rate ([limit]r/1s)"`
}
var config = driver.Config{
Name: "MediaFire",
LocalSort: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "/",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Mediafire{
appBase: "https://app.mediafire.com",
apiBase: "https://www.mediafire.com/api/1.5",
hostBase: "https://www.mediafire.com",
maxRetries: 3,
secChUa: "\"Not)A;Brand\";v=\"8\", \"Chromium\";v=\"139\", \"Google Chrome\";v=\"139\"",
secChUaPlatform: "Windows",
userAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36",
}
})
}

View File

@@ -1,246 +0,0 @@
package mediafire
/*
Package mediafire
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-11
D@' 3z K!7 - The King Of Cracking
*/
type MediafireRenewTokenResponse struct {
Response struct {
Action string `json:"action"`
SessionToken string `json:"session_token"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireResponse struct {
Response struct {
Action string `json:"action"`
FolderContent struct {
ChunkSize string `json:"chunk_size"`
ContentType string `json:"content_type"`
ChunkNumber string `json:"chunk_number"`
FolderKey string `json:"folderkey"`
Folders []MediafireFolder `json:"folders,omitempty"`
Files []MediafireFile `json:"files,omitempty"`
MoreChunks string `json:"more_chunks"`
} `json:"folder_content"`
Result string `json:"result"`
} `json:"response"`
}
type MediafireFolder struct {
FolderKey string `json:"folderkey"`
Name string `json:"name"`
Created string `json:"created"`
CreatedUTC string `json:"created_utc"`
}
type MediafireFile struct {
QuickKey string `json:"quickkey"`
Filename string `json:"filename"`
Size string `json:"size"`
Created string `json:"created"`
CreatedUTC string `json:"created_utc"`
MimeType string `json:"mimetype"`
}
type File struct {
ID string
Name string
Size int64
CreatedUTC string
IsFolder bool
}
type FolderContentResponse struct {
Folders []MediafireFolder
Files []MediafireFile
MoreChunks bool
}
type MediafireLinksResponse struct {
Response struct {
Action string `json:"action"`
Links []struct {
QuickKey string `json:"quickkey"`
View string `json:"view"`
NormalDownload string `json:"normal_download"`
OneTime struct {
Download string `json:"download"`
View string `json:"view"`
} `json:"one_time"`
} `json:"links"`
OneTimeKeyRequestCount string `json:"one_time_key_request_count"`
OneTimeKeyRequestMaxCount string `json:"one_time_key_request_max_count"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireDirectDownloadResponse struct {
Response struct {
Action string `json:"action"`
Links []struct {
QuickKey string `json:"quickkey"`
DirectDownload string `json:"direct_download"`
} `json:"links"`
DirectDownloadFreeBandwidth string `json:"direct_download_free_bandwidth"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireFolderCreateResponse struct {
Response struct {
Action string `json:"action"`
FolderKey string `json:"folder_key"`
UploadKey string `json:"upload_key"`
ParentFolderKey string `json:"parent_folderkey"`
Name string `json:"name"`
Description string `json:"description"`
Created string `json:"created"`
CreatedUTC string `json:"created_utc"`
Privacy string `json:"privacy"`
FileCount string `json:"file_count"`
FolderCount string `json:"folder_count"`
Revision string `json:"revision"`
DropboxEnabled string `json:"dropbox_enabled"`
Flag string `json:"flag"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireMoveResponse struct {
Response struct {
Action string `json:"action"`
Asynchronous string `json:"asynchronous,omitempty"`
NewNames []string `json:"new_names"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireRenameResponse struct {
Response struct {
Action string `json:"action"`
Asynchronous string `json:"asynchronous,omitempty"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireCopyResponse struct {
Response struct {
Action string `json:"action"`
Asynchronous string `json:"asynchronous,omitempty"`
NewQuickKeys []string `json:"new_quickkeys,omitempty"`
NewFolderKeys []string `json:"new_folderkeys,omitempty"`
SkippedCount string `json:"skipped_count,omitempty"`
OtherCount string `json:"other_count,omitempty"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireRemoveResponse struct {
Response struct {
Action string `json:"action"`
Asynchronous string `json:"asynchronous,omitempty"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireCheckResponse struct {
Response struct {
Action string `json:"action"`
HashExists string `json:"hash_exists"`
InAccount string `json:"in_account"`
InFolder string `json:"in_folder"`
FileExists string `json:"file_exists"`
ResumableUpload struct {
AllUnitsReady string `json:"all_units_ready"`
NumberOfUnits string `json:"number_of_units"`
UnitSize string `json:"unit_size"`
Bitmap struct {
Count string `json:"count"`
Words []string `json:"words"`
} `json:"bitmap"`
UploadKey string `json:"upload_key"`
} `json:"resumable_upload"`
AvailableSpace string `json:"available_space"`
UsedStorageSize string `json:"used_storage_size"`
StorageLimit string `json:"storage_limit"`
StorageLimitExceeded string `json:"storage_limit_exceeded"`
UploadURL struct {
Simple string `json:"simple"`
SimpleFallback string `json:"simple_fallback"`
Resumable string `json:"resumable"`
ResumableFallback string `json:"resumable_fallback"`
} `json:"upload_url"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireActionTokenResponse struct {
Response struct {
Action string `json:"action"`
ActionToken string `json:"action_token"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafirePollResponse struct {
Response struct {
Action string `json:"action"`
Doupload struct {
Result string `json:"result"`
Status string `json:"status"`
Description string `json:"description"`
QuickKey string `json:"quickkey"`
Hash string `json:"hash"`
Filename string `json:"filename"`
Size string `json:"size"`
Created string `json:"created"`
CreatedUTC string `json:"created_utc"`
Revision string `json:"revision"`
} `json:"doupload"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireFileSearchResponse struct {
Response struct {
Action string `json:"action"`
FileInfo []File `json:"file_info"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireUserInfoResponse struct {
Response struct {
Action string `json:"action"`
UserInfo struct {
Email string `json:"email"`
DisplayName string `json:"display_name"`
UsedStorageSize string `json:"used_storage_size"`
StorageLimit string `json:"storage_limit"`
} `json:"user_info"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}

View File

@@ -1,729 +0,0 @@
package mediafire
/*
Package mediafire
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-11
D@' 3z K!7 - The King Of Cracking
Modifications by ILoveScratch2<ilovescratch@foxmail.com>
Date: 2025-09-21
Date: 2025-09-26
Final opts by @Suyunjing @j2rong4cn @KirCute @Da3zKi7
*/
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"sync"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/errgroup"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2"
)
// checkAPIResult validates MediaFire API response result and returns error if not successful
func checkAPIResult(result string) error {
if result != "Success" {
return fmt.Errorf("MediaFire API error: %s", result)
}
return nil
}
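checkAPIResult reduces every MediaFire response to a single success test: anything other than the literal "Success" becomes an error. A standalone sketch of the same check:

```go
package main

import "fmt"

// checkAPIResult validates a MediaFire API result string.
func checkAPIResult(result string) error {
	if result != "Success" {
		return fmt.Errorf("MediaFire API error: %s", result)
	}
	return nil
}

func main() {
	fmt.Println(checkAPIResult("Success") == nil) // true
	fmt.Println(checkAPIResult("Error"))          // MediaFire API error: Error
}
```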
// getSessionToken retrieves and validates session token from MediaFire
func (d *Mediafire) getSessionToken(ctx context.Context) (string, error) {
if d.limiter != nil {
if err := d.limiter.Wait(ctx); err != nil {
return "", fmt.Errorf("rate limit wait failed: %w", err)
}
}
tokenURL := d.hostBase + "/application/get_session_token.php"
req, err := http.NewRequestWithContext(ctx, http.MethodPost, tokenURL, nil)
if err != nil {
return "", err
}
req.Header.Set("Accept", "*/*")
req.Header.Set("Accept-Encoding", "gzip, deflate, br, zstd")
req.Header.Set("Accept-Language", "en-US,en;q=0.9")
req.Header.Set("Content-Length", "0")
req.Header.Set("Cookie", d.Cookie)
req.Header.Set("DNT", "1")
req.Header.Set("Origin", d.hostBase)
req.Header.Set("Priority", "u=1, i")
req.Header.Set("Referer", (d.hostBase + "/"))
req.Header.Set("Sec-Ch-Ua", d.secChUa)
req.Header.Set("Sec-Ch-Ua-Mobile", "?0")
req.Header.Set("Sec-Ch-Ua-Platform", d.secChUaPlatform)
req.Header.Set("Sec-Fetch-Dest", "empty")
req.Header.Set("Sec-Fetch-Mode", "cors")
req.Header.Set("Sec-Fetch-Site", "same-site")
req.Header.Set("User-Agent", d.userAgent)
// req.Header.Set("Connection", "keep-alive")
resp, err := base.HttpClient.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
// fmt.Printf("getSessionToken :: Raw response: %s\n", string(body))
// fmt.Printf("getSessionToken :: Parsed response: %+v\n", resp)
var tokenResp struct {
Response struct {
SessionToken string `json:"session_token"`
} `json:"response"`
}
if resp.StatusCode == 200 {
if err := json.Unmarshal(body, &tokenResp); err != nil {
return "", err
}
if tokenResp.Response.SessionToken == "" {
return "", fmt.Errorf("empty session token received")
}
cookieMap := make(map[string]string)
for _, cookie := range resp.Cookies() {
cookieMap[cookie.Name] = cookie.Value
}
if len(cookieMap) > 0 {
var cookies []string
for name, value := range cookieMap {
cookies = append(cookies, fmt.Sprintf("%s=%s", name, value))
}
d.Cookie = strings.Join(cookies, "; ")
op.MustSaveDriverStorage(d)
// fmt.Printf("getSessionToken :: Captured cookies: %s\n", d.Cookie)
}
} else {
return "", fmt.Errorf("getSessionToken :: failed to get session token, status code: %d", resp.StatusCode)
}
d.SessionToken = tokenResp.Response.SessionToken
// fmt.Printf("Init :: Obtain Session Token %v", d.SessionToken)
op.MustSaveDriverStorage(d)
return d.SessionToken, nil
}
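getSessionToken folds the response's cookies into a single Cookie header string by joining name=value pairs with "; ". That step can be sketched on its own; map iteration order is not deterministic in Go, so this sketch sorts names only to make the output stable for the example:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// joinCookies mirrors the cookie-capture step: name=value pairs joined with "; ".
// Sorting is added here purely for deterministic output in this example.
func joinCookies(cookieMap map[string]string) string {
	names := make([]string, 0, len(cookieMap))
	for name := range cookieMap {
		names = append(names, name)
	}
	sort.Strings(names)
	pairs := make([]string, 0, len(names))
	for _, name := range names {
		pairs = append(pairs, fmt.Sprintf("%s=%s", name, cookieMap[name]))
	}
	return strings.Join(pairs, "; ")
}

func main() {
	fmt.Println(joinCookies(map[string]string{"ukey": "abc", "session": "xyz"}))
	// session=xyz; ukey=abc
}
```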
// renewToken refreshes the current session token when expired
func (d *Mediafire) renewToken(ctx context.Context) error {
query := map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
}
var resp MediafireRenewTokenResponse
_, err := d.postForm(ctx, "/user/renew_session_token.php", query, &resp)
if err != nil {
return fmt.Errorf("failed to renew token: %w", err)
}
// fmt.Printf("getInfo :: Raw response: %s\n", string(body))
// fmt.Printf("getInfo :: Parsed response: %+v\n", resp)
if resp.Response.Result != "Success" {
return fmt.Errorf("MediaFire token renewal failed: %s", resp.Response.Result)
}
d.SessionToken = resp.Response.SessionToken
// fmt.Printf("Init :: Renew Session Token: %s", resp.Response.Result)
op.MustSaveDriverStorage(d)
return nil
}
func (d *Mediafire) getFiles(ctx context.Context, folderKey string) ([]File, error) {
// Pre-allocate slice with reasonable capacity to reduce memory allocations
files := make([]File, 0, d.ChunkSize*2) // Estimate: ChunkSize for files + folders
hasMore := true
chunkNumber := 1
for hasMore {
resp, err := d.getFolderContent(ctx, folderKey, chunkNumber)
if err != nil {
return nil, err
}
// Process folders and files in single loop to improve cache locality
totalItems := len(resp.Folders) + len(resp.Files)
if cap(files)-len(files) < totalItems {
// Grow slice if needed
newFiles := make([]File, len(files), len(files)+totalItems+int(d.ChunkSize))
copy(newFiles, files)
files = newFiles
}
for _, folder := range resp.Folders {
files = append(files, File{
ID: folder.FolderKey,
Name: folder.Name,
Size: 0,
CreatedUTC: folder.CreatedUTC,
IsFolder: true,
})
}
for _, file := range resp.Files {
size, _ := strconv.ParseInt(file.Size, 10, 64)
files = append(files, File{
ID: file.QuickKey,
Name: file.Filename,
Size: size,
CreatedUTC: file.CreatedUTC,
IsFolder: false,
})
}
hasMore = resp.MoreChunks
chunkNumber++
}
return files, nil
}
func (d *Mediafire) getFolderContent(ctx context.Context, folderKey string, chunkNumber int) (*FolderContentResponse, error) {
foldersResp, err := d.getFolderContentByType(ctx, folderKey, "folders", chunkNumber)
if err != nil {
return nil, err
}
filesResp, err := d.getFolderContentByType(ctx, folderKey, "files", chunkNumber)
if err != nil {
return nil, err
}
return &FolderContentResponse{
Folders: foldersResp.Response.FolderContent.Folders,
Files: filesResp.Response.FolderContent.Files,
MoreChunks: foldersResp.Response.FolderContent.MoreChunks == "yes" || filesResp.Response.FolderContent.MoreChunks == "yes",
}, nil
}
func (d *Mediafire) getFolderContentByType(ctx context.Context, folderKey, contentType string, chunkNumber int) (*MediafireResponse, error) {
data := map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key": folderKey,
"content_type": contentType,
"chunk": strconv.Itoa(chunkNumber),
"chunk_size": strconv.FormatInt(d.ChunkSize, 10),
"details": "yes",
"order_direction": d.OrderDirection,
"order_by": d.OrderBy,
"filter": "",
}
var resp MediafireResponse
_, err := d.postForm(ctx, "/folder/get_content.php", data, &resp)
if err != nil {
return nil, err
}
if err := checkAPIResult(resp.Response.Result); err != nil {
return nil, err
}
return &resp, nil
}
// fileToObj converts MediaFire file data to model.ObjThumb with thumbnail support
func (d *Mediafire) fileToObj(f File) *model.ObjThumb {
created, _ := time.Parse("2006-01-02T15:04:05Z", f.CreatedUTC)
var thumbnailURL string
if !f.IsFolder && f.ID != "" {
thumbnailURL = d.hostBase + "/convkey/acaa/" + f.ID + "3g.jpg"
}
return &model.ObjThumb{
Object: model.Object{
ID: f.ID,
// Path: "",
Name: f.Name,
Size: f.Size,
Modified: created,
Ctime: created,
IsFolder: f.IsFolder,
},
Thumbnail: model.Thumbnail{
Thumbnail: thumbnailURL,
},
}
}
func (d *Mediafire) setCommonHeaders(req *resty.Request) {
req.SetHeaders(map[string]string{
"Cookie": d.Cookie,
"User-Agent": d.userAgent,
"Origin": d.appBase,
"Referer": d.appBase + "/",
})
}
// apiRequest performs HTTP request to MediaFire API with rate limiting and common headers
func (d *Mediafire) apiRequest(ctx context.Context, method, endpoint string, queryParams, formData map[string]string, resp interface{}) ([]byte, error) {
if d.limiter != nil {
if err := d.limiter.Wait(ctx); err != nil {
return nil, fmt.Errorf("rate limit wait failed: %w", err)
}
}
req := base.RestyClient.R()
req.SetContext(ctx)
d.setCommonHeaders(req)
// Set query parameters for GET requests
if queryParams != nil {
req.SetQueryParams(queryParams)
}
// Set form data for POST requests
if formData != nil {
req.SetFormData(formData)
req.SetHeader("Content-Type", "application/x-www-form-urlencoded")
}
// Set response object if provided
if resp != nil {
req.SetResult(resp)
}
var res *resty.Response
var err error
// Execute request based on method
switch method {
case "GET":
res, err = req.Get(d.apiBase + endpoint)
case "POST":
res, err = req.Post(d.apiBase + endpoint)
default:
return nil, fmt.Errorf("unsupported HTTP method: %s", method)
}
if err != nil {
return nil, err
}
return res.Body(), nil
}
func (d *Mediafire) getForm(ctx context.Context, endpoint string, query map[string]string, resp interface{}) ([]byte, error) {
return d.apiRequest(ctx, "GET", endpoint, query, nil, resp)
}
func (d *Mediafire) postForm(ctx context.Context, endpoint string, data map[string]string, resp interface{}) ([]byte, error) {
return d.apiRequest(ctx, "POST", endpoint, nil, data, resp)
}
func (d *Mediafire) getDirectDownloadLink(ctx context.Context, fileID string) (string, error) {
data := map[string]string{
"session_token": d.SessionToken,
"quick_key": fileID,
"link_type": "direct_download",
"response_format": "json",
}
var resp MediafireDirectDownloadResponse
_, err := d.getForm(ctx, "/file/get_links.php", data, &resp)
if err != nil {
return "", err
}
if err := checkAPIResult(resp.Response.Result); err != nil {
return "", err
}
if len(resp.Response.Links) == 0 {
return "", fmt.Errorf("no download links found")
}
return resp.Response.Links[0].DirectDownload, nil
}
func (d *Mediafire) uploadCheck(ctx context.Context, filename string, filesize int64, filehash, folderKey string) (*MediafireCheckResponse, error) {
actionToken, err := d.getActionToken(ctx)
if err != nil {
return nil, fmt.Errorf("failed to get action token: %w", err)
}
query := map[string]string{
"session_token": actionToken, /* d.SessionToken */
"filename": filename,
"size": strconv.FormatInt(filesize, 10),
"hash": filehash,
"folder_key": folderKey,
"resumable": "yes",
"response_format": "json",
}
var resp MediafireCheckResponse
_, err = d.postForm(ctx, "/upload/check.php", query, &resp)
if err != nil {
return nil, err
}
if err := checkAPIResult(resp.Response.Result); err != nil {
return nil, err
}
return &resp, nil
}
func (d *Mediafire) uploadUnits(ctx context.Context, file model.FileStreamer, checkResp *MediafireCheckResponse, filename, fileHash, folderKey string, up driver.UpdateProgress) (string, error) {
unitSize, _ := strconv.ParseInt(checkResp.Response.ResumableUpload.UnitSize, 10, 64)
numUnits, _ := strconv.Atoi(checkResp.Response.ResumableUpload.NumberOfUnits)
uploadKey := checkResp.Response.ResumableUpload.UploadKey
stringWords := checkResp.Response.ResumableUpload.Bitmap.Words
intWords := make([]int, 0, len(stringWords))
for _, word := range stringWords {
if intWord, err := strconv.Atoi(word); err == nil {
intWords = append(intWords, intWord)
}
}
// Intelligent buffer sizing for large files
bufferSize := int(unitSize)
fileSize := file.GetSize()
// Split in chunks
if fileSize > d.ChunkSize*1024*1024 {
// Large, use ChunkSize (default = 100MB)
bufferSize = min(int(fileSize), int(d.ChunkSize)*1024*1024)
} else if fileSize > 10*1024*1024 {
// Medium, use full file size for concurrent access
bufferSize = int(fileSize)
}
// Create stream section reader for efficient chunking
ss, err := stream.NewStreamSectionReader(file, bufferSize, &up)
if err != nil {
return "", err
}
// Calculate the number of parallel upload threads, capping the configured value by MediaFire's suggested unit count
// For large files this usually resolves to d.UploadThreads rather than MediaFire's suggestion (e.g. 5 threads)
thread := min(numUnits, d.UploadThreads)
// Create ordered group for sequential upload processing with retry logic
threadG, uploadCtx := errgroup.NewOrderedGroupWithContext(ctx, thread,
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
var finalUploadKey string
var keyMutex sync.Mutex
for unitID := range numUnits {
if utils.IsCanceled(uploadCtx) {
break
}
start := int64(unitID) * unitSize
size := unitSize
if start+size > fileSize {
size = fileSize - start
}
var reader io.ReadSeeker
var rateLimitedRd io.Reader
var unitHash string
// Use lifecycle pattern for proper resource management
threadG.GoWithLifecycle(errgroup.Lifecycle{
Before: func(ctx context.Context) error {
// Skip already uploaded units
if d.isUnitUploaded(intWords, unitID) {
return ss.DiscardSection(start, size)
}
var err error
reader, err = ss.GetSectionReader(start, size)
if err != nil {
return err
}
rateLimitedRd = driver.NewLimitedUploadStream(ctx, reader)
return nil
},
Do: func(ctx context.Context) error {
if reader == nil {
return nil // Skip if reader is not initialized (already uploaded)
}
if unitHash == "" {
if _, err := reader.Seek(0, io.SeekStart); err != nil {
return err
}
var err error
unitHash, err = utils.HashReader(utils.SHA256, reader)
if err != nil {
return err
}
}
if _, err := reader.Seek(0, io.SeekStart); err != nil {
return err
}
// Perform upload
actionToken, err := d.getActionToken(ctx)
if err != nil {
return err
}
if d.limiter != nil {
if err := d.limiter.Wait(ctx); err != nil {
return fmt.Errorf("rate limit wait failed: %w", err)
}
}
url := d.apiBase + "/upload/resumable.php"
req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, rateLimitedRd)
if err != nil {
return err
}
q := req.URL.Query()
q.Add("folder_key", folderKey)
q.Add("response_format", "json")
q.Add("session_token", actionToken)
q.Add("key", uploadKey)
req.URL.RawQuery = q.Encode()
req.Header.Set("x-filehash", fileHash)
req.Header.Set("x-filesize", strconv.FormatInt(fileSize, 10))
req.Header.Set("x-unit-id", strconv.Itoa(unitID))
req.Header.Set("x-unit-size", strconv.FormatInt(size, 10))
req.Header.Set("x-unit-hash", unitHash)
req.Header.Set("x-filename", filename)
req.Header.Set("Content-Type", "application/octet-stream")
req.ContentLength = size
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
body, err := io.ReadAll(res.Body)
if err != nil {
return fmt.Errorf("failed to read response body: %w", err)
}
var uploadResp struct {
Response struct {
Doupload struct {
Key string `json:"key"`
} `json:"doupload"`
Result string `json:"result"`
} `json:"response"`
}
if res.StatusCode != 200 {
return fmt.Errorf("resumable upload failed with status %d", res.StatusCode)
}
if err := json.Unmarshal(body, &uploadResp); err != nil {
return fmt.Errorf("failed to parse response: %w", err)
}
// Thread-safe update of final upload key
keyMutex.Lock()
finalUploadKey = uploadResp.Response.Doupload.Key
keyMutex.Unlock()
return nil
},
After: func(err error) {
up(float64(threadG.Success()) * 100 / float64(numUnits))
if reader != nil {
// Cleanup resources
ss.FreeSectionReader(reader)
}
},
})
}
if err := threadG.Wait(); err != nil {
return "", err
}
return finalUploadKey, nil
}
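The per-unit offset arithmetic above (fixed-size chunks, with the final unit truncated to the file end) can be checked in isolation. This is a minimal standalone sketch; `unitRange` is a hypothetical helper that mirrors the start/size computation inside the loop, and the sizes are made up for illustration:

```go
package main

import "fmt"

// unitRange returns the byte offset and length of unit unitID when a file of
// fileSize bytes is split into fixed unitSize chunks; the last chunk may be short.
func unitRange(unitID int, unitSize, fileSize int64) (start, size int64) {
	start = int64(unitID) * unitSize
	size = unitSize
	if start+size > fileSize {
		size = fileSize - start
	}
	return start, size
}

func main() {
	// 10 MiB file in 4 MiB units -> units of 4 MiB, 4 MiB, 2 MiB
	const unit, file = 4 << 20, 10 << 20
	for id := 0; id < 3; id++ {
		s, n := unitRange(id, unit, file)
		fmt.Printf("unit %d: start=%d size=%d\n", id, s, n)
	}
}
```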
/*func (d *Mediafire) uploadSingleUnit(ctx context.Context, file model.FileStreamer, unitID int, unitSize int64, fileHash, filename, uploadKey, folderKey string, fileSize int64) (string, error) {
start := int64(unitID) * unitSize
size := unitSize
if start+size > fileSize {
size = fileSize - start
}
unitData := make([]byte, size)
_, err := file.Read(unitData)
if err != nil {
return "", err
}
return d.resumableUpload(ctx, folderKey, uploadKey, unitData, unitID, fileHash, filename, fileSize)
}*/
func (d *Mediafire) getActionToken(ctx context.Context) (string, error) {
if d.actionToken != "" {
return d.actionToken, nil
}
data := map[string]string{
"type": "upload",
"lifespan": "1440",
"response_format": "json",
"session_token": d.SessionToken,
}
var resp MediafireActionTokenResponse
_, err := d.postForm(ctx, "/user/get_action_token.php", data, &resp)
if err != nil {
return "", err
}
if resp.Response.Result != "Success" {
return "", fmt.Errorf("MediaFire action token failed: %s", resp.Response.Result)
}
return resp.Response.ActionToken, nil
}
func (d *Mediafire) pollUpload(ctx context.Context, key string) (*MediafirePollResponse, error) {
actionToken, err := d.getActionToken(ctx)
if err != nil {
return nil, fmt.Errorf("failed to get action token: %w", err)
}
query := map[string]string{
"key": key,
"response_format": "json",
"session_token": actionToken, /* d.SessionToken */
}
var resp MediafirePollResponse
_, err = d.postForm(ctx, "/upload/poll_upload.php", query, &resp)
if err != nil {
return nil, err
}
if err := checkAPIResult(resp.Response.Result); err != nil {
return nil, err
}
return &resp, nil
}
func (d *Mediafire) isUnitUploaded(words []int, unitID int) bool {
wordIndex := unitID / 16
bitIndex := unitID % 16
if wordIndex >= len(words) {
return false
}
return (words[wordIndex]>>bitIndex)&1 == 1
}
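The bitmap check above packs per-unit upload state into 16-bit words. A self-contained sketch of the same word/bit arithmetic (the helper is re-declared here for illustration and matches the driver's logic):

```go
package main

import "fmt"

// isUnitUploaded reports whether bit unitID is set in a bitmap of 16-bit
// words, the layout MediaFire's resumable-upload bitmap uses.
func isUnitUploaded(words []int, unitID int) bool {
	wordIndex := unitID / 16
	bitIndex := unitID % 16
	if wordIndex >= len(words) {
		return false
	}
	return (words[wordIndex]>>bitIndex)&1 == 1
}

func main() {
	// word 0 = 5 = 0b101: units 0 and 2 uploaded, unit 1 still pending
	words := []int{5}
	fmt.Println(isUnitUploaded(words, 0))  // true
	fmt.Println(isUnitUploaded(words, 1))  // false
	fmt.Println(isUnitUploaded(words, 16)) // beyond the bitmap -> false
}
```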
func (d *Mediafire) getExistingFileInfo(ctx context.Context, fileHash, filename, folderKey string) (*model.ObjThumb, error) {
// First try to find by hash directly (most efficient)
if fileInfo, err := d.getFileByHash(ctx, fileHash); err == nil && fileInfo != nil {
return fileInfo, nil
}
// If hash search fails, search in the target folder
// This is a fallback method in case the file exists but hash search doesn't work
files, err := d.getFiles(ctx, folderKey)
if err != nil {
return nil, err
}
for _, file := range files {
if file.Name == filename && !file.IsFolder {
return d.fileToObj(file), nil
}
}
return nil, fmt.Errorf("existing file not found")
}
func (d *Mediafire) getFileByHash(ctx context.Context, hash string) (*model.ObjThumb, error) {
query := map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"hash": hash,
}
var resp MediafireFileSearchResponse
_, err := d.postForm(ctx, "/file/get_info.php", query, &resp)
if err != nil {
return nil, err
}
if resp.Response.Result != "Success" {
return nil, fmt.Errorf("MediaFire file search failed: %s", resp.Response.Result)
}
if len(resp.Response.FileInfo) == 0 {
return nil, fmt.Errorf("file not found by hash")
}
file := resp.Response.FileInfo[0]
return d.fileToObj(file), nil
}


@@ -1,34 +0,0 @@
package onedrive
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Region string `json:"region" type:"select" required:"true" options:"global,cn,us,de" default:"global"`
IsSharepoint bool `json:"is_sharepoint"`
UseOnlineAPI bool `json:"use_online_api" default:"true"`
APIAddress string `json:"api_url_address" default:"https://api.oplist.org/onedrive/renewapi"`
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
RedirectUri string `json:"redirect_uri" required:"true" default:"https://api.oplist.org/onedrive/callback"`
RefreshToken string `json:"refresh_token" required:"true"`
SiteId string `json:"site_id"`
ChunkSize int64 `json:"chunk_size" type:"number" default:"5"`
CustomHost string `json:"custom_host" help:"Custom host for onedrive download link"`
DisableDiskUsage bool `json:"disable_disk_usage" default:"false"`
}
var config = driver.Config{
Name: "Onedrive",
LocalSort: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Onedrive{}
})
}


@@ -1,30 +0,0 @@
package onedrive_app
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Region string `json:"region" type:"select" required:"true" options:"global,cn,us,de" default:"global"`
ClientID string `json:"client_id" required:"true"`
ClientSecret string `json:"client_secret" required:"true"`
TenantID string `json:"tenant_id"`
Email string `json:"email"`
ChunkSize int64 `json:"chunk_size" type:"number" default:"5"`
CustomHost string `json:"custom_host" help:"Custom host for onedrive download link"`
DisableDiskUsage bool `json:"disable_disk_usage" default:"false"`
}
var config = driver.Config{
Name: "OnedriveAPP",
LocalSort: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &OnedriveAPP{}
})
}


@@ -1,181 +0,0 @@
package openlist_share
import (
"context"
"fmt"
"net/http"
"net/url"
stdpath "path"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/go-resty/resty/v2"
)
type OpenListShare struct {
model.Storage
Addition
serverArchivePreview bool
}
func (d *OpenListShare) Config() driver.Config {
return config
}
func (d *OpenListShare) GetAddition() driver.Additional {
return &d.Addition
}
func (d *OpenListShare) Init(ctx context.Context) error {
d.Addition.Address = strings.TrimSuffix(d.Addition.Address, "/")
var settings common.Resp[map[string]string]
_, _, err := d.request("/public/settings", http.MethodGet, func(req *resty.Request) {
req.SetResult(&settings)
})
if err != nil {
return err
}
d.serverArchivePreview = settings.Data["share_archive_preview"] == "true"
return nil
}
func (d *OpenListShare) Drop(ctx context.Context) error {
return nil
}
func (d *OpenListShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var resp common.Resp[FsListResp]
_, _, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ListReq{
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
Path: stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), dir.GetPath()),
Password: d.Pwd,
Refresh: false,
})
})
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *OpenListShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
path := utils.FixAndCleanPath(stdpath.Join(d.ShareId, file.GetPath()))
u := fmt.Sprintf("%s/sd%s?pwd=%s", d.Address, path, d.Pwd)
return &model.Link{URL: u}, nil
}
func (d *OpenListShare) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
if !d.serverArchivePreview || !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveMetaResp]
_, code, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Path: stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), obj.GetPath()),
Password: d.Pwd,
Refresh: false,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var tree []model.ObjTree
if resp.Data.Content != nil {
tree = make([]model.ObjTree, 0, len(resp.Data.Content))
for _, content := range resp.Data.Content {
tree = append(tree, &content)
}
}
return &model.ArchiveMetaInfo{
Comment: resp.Data.Comment,
Encrypted: resp.Data.Encrypted,
Tree: tree,
}, nil
}
func (d *OpenListShare) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
if !d.serverArchivePreview || !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveListResp]
_, code, err := d.request("/fs/archive/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveListReq{
ArchiveMetaReq: ArchiveMetaReq{
ArchivePass: args.Password,
Path: stdpath.Join(fmt.Sprintf("/@s/%s", d.ShareId), obj.GetPath()),
Password: d.Pwd,
Refresh: false,
},
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
InnerPath: args.InnerPath,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *OpenListShare) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
if !d.serverArchivePreview || !d.ForwardArchiveReq {
return nil, errs.NotSupport
}
path := utils.FixAndCleanPath(stdpath.Join(d.ShareId, obj.GetPath()))
u := fmt.Sprintf("%s/sad%s?pwd=%s&inner=%s&pass=%s",
d.Address,
path,
d.Pwd,
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password))
return &model.Link{URL: u}, nil
}
var _ driver.Driver = (*OpenListShare)(nil)


@@ -1,27 +0,0 @@
package openlist_share
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Address string `json:"url" required:"true"`
ShareId string `json:"sid" required:"true"`
Pwd string `json:"pwd"`
ForwardArchiveReq bool `json:"forward_archive_requests" default:"true"`
}
var config = driver.Config{
Name: "OpenListShare",
LocalSort: true,
NoUpload: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &OpenListShare{}
})
}


@@ -1,111 +0,0 @@
package openlist_share
import (
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
type ListReq struct {
model.PageReq
Path string `json:"path" form:"path"`
Password string `json:"password" form:"password"`
Refresh bool `json:"refresh"`
}
type ObjResp struct {
Name string `json:"name"`
Size int64 `json:"size"`
IsDir bool `json:"is_dir"`
Modified time.Time `json:"modified"`
Created time.Time `json:"created"`
Sign string `json:"sign"`
Thumb string `json:"thumb"`
Type int `json:"type"`
HashInfo string `json:"hashinfo"`
}
type FsListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
Readme string `json:"readme"`
Write bool `json:"write"`
Provider string `json:"provider"`
}
type ArchiveMetaReq struct {
ArchivePass string `json:"archive_pass"`
Password string `json:"password"`
Path string `json:"path"`
Refresh bool `json:"refresh"`
}
type TreeResp struct {
ObjResp
Children []TreeResp `json:"children"`
hashCache *utils.HashInfo
}
func (t *TreeResp) GetSize() int64 {
return t.Size
}
func (t *TreeResp) GetName() string {
return t.Name
}
func (t *TreeResp) ModTime() time.Time {
return t.Modified
}
func (t *TreeResp) CreateTime() time.Time {
return t.Created
}
func (t *TreeResp) IsDir() bool {
return t.ObjResp.IsDir
}
func (t *TreeResp) GetHash() utils.HashInfo {
return utils.FromString(t.HashInfo)
}
func (t *TreeResp) GetID() string {
return ""
}
func (t *TreeResp) GetPath() string {
return ""
}
func (t *TreeResp) GetChildren() []model.ObjTree {
ret := make([]model.ObjTree, 0, len(t.Children))
for _, child := range t.Children {
ret = append(ret, &child)
}
return ret
}
func (t *TreeResp) Thumb() string {
return t.ObjResp.Thumb
}
type ArchiveMetaResp struct {
Comment string `json:"comment"`
Encrypted bool `json:"encrypted"`
Content []TreeResp `json:"content"`
RawURL string `json:"raw_url"`
Sign string `json:"sign"`
}
type ArchiveListReq struct {
model.PageReq
ArchiveMetaReq
InnerPath string `json:"inner_path"`
}
type ArchiveListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
}


@@ -1,32 +0,0 @@
package openlist_share
import (
"fmt"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
)
func (d *OpenListShare) request(api, method string, callback base.ReqCallback) ([]byte, int, error) {
url := d.Address + "/api" + api
req := base.RestyClient.R()
if callback != nil {
callback(req)
}
res, err := req.Execute(method, url)
if err != nil {
code := 0
if res != nil {
code = res.StatusCode()
}
return nil, code, err
}
if res.StatusCode() >= 400 {
return nil, res.StatusCode(), fmt.Errorf("request failed, status: %s", res.Status())
}
code := utils.Json.Get(res.Body(), "code").ToInt()
if code != 200 {
return nil, code, fmt.Errorf("request failed, code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
}
return res.Body(), 200, nil
}
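The request helper above performs a two-layer error check: first the HTTP status, then the application-level `code` field inside the JSON envelope. That second layer can be sketched on its own with the standard library instead of the project's jsoniter wrapper; `apiEnvelope` and `checkBody` are hypothetical names introduced for this sketch:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// apiEnvelope mirrors the {code, message} envelope the request helper inspects
// after the HTTP layer has already succeeded.
type apiEnvelope struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

// checkBody reproduces the second-layer check: HTTP 200 with an
// application-level code other than 200 is still treated as an error.
func checkBody(body []byte) error {
	var env apiEnvelope
	if err := json.Unmarshal(body, &env); err != nil {
		return err
	}
	if env.Code != 200 {
		return fmt.Errorf("request failed, code: %d, message: %s", env.Code, env.Message)
	}
	return nil
}

func main() {
	fmt.Println(checkBody([]byte(`{"code":200,"message":"success"}`)))
	fmt.Println(checkBody([]byte(`{"code":403,"message":"password is incorrect"}`)))
}
```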


@@ -1,290 +0,0 @@
package protondrive
/*
Package protondrive
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-18
Thanks to @henrybear327 for modded go-proton-api & Proton-API-Bridge
The power of open-source, the force of teamwork and the magic of reverse engineering!
D@' 3z K!7 - The King Of Cracking
Long live the Motherland))
*/
import (
"context"
"fmt"
"io"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/setting"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/ProtonMail/gopenpgp/v2/crypto"
proton_api_bridge "github.com/henrybear327/Proton-API-Bridge"
"github.com/henrybear327/Proton-API-Bridge/common"
"github.com/henrybear327/go-proton-api"
)
type ProtonDrive struct {
model.Storage
Addition
protonDrive *proton_api_bridge.ProtonDrive
apiBase string
appVersion string
protonJson string
userAgent string
sdkVersion string
webDriveAV string
c *proton.Client
// userKR *crypto.KeyRing
addrKRs map[string]*crypto.KeyRing
addrData map[string]proton.Address
MainShare *proton.Share
DefaultAddrKR *crypto.KeyRing
MainShareKR *crypto.KeyRing
}
func (d *ProtonDrive) Config() driver.Config {
return config
}
func (d *ProtonDrive) GetAddition() driver.Additional {
return &d.Addition
}
func (d *ProtonDrive) Init(ctx context.Context) (err error) {
defer func() {
if r := recover(); err == nil && r != nil {
err = fmt.Errorf("ProtonDrive initialization panic: %v", r)
}
}()
if d.Email == "" {
return fmt.Errorf("email is required")
}
if d.Password == "" {
return fmt.Errorf("password is required")
}
config := &common.Config{
AppVersion: d.appVersion,
UserAgent: d.userAgent,
FirstLoginCredential: &common.FirstLoginCredentialData{
Username: d.Email,
Password: d.Password,
TwoFA: d.TwoFACode,
},
EnableCaching: true,
ConcurrentBlockUploadCount: setting.GetInt(conf.TaskUploadThreadsNum, conf.Conf.Tasks.Upload.Workers),
//ConcurrentFileCryptoCount: 2,
UseReusableLogin: d.UseReusableLogin && d.ReusableCredential != (common.ReusableCredentialData{}),
ReplaceExistingDraft: true,
ReusableCredential: &d.ReusableCredential,
}
protonDrive, _, err := proton_api_bridge.NewProtonDrive(
ctx,
config,
d.authHandler,
func() {},
)
if err != nil && config.UseReusableLogin {
config.UseReusableLogin = false
protonDrive, _, err = proton_api_bridge.NewProtonDrive(ctx,
config,
d.authHandler,
func() {},
)
if err == nil {
op.MustSaveDriverStorage(d)
}
}
if err != nil {
return fmt.Errorf("failed to initialize ProtonDrive: %w", err)
}
if err := d.initClient(ctx); err != nil {
return err
}
d.protonDrive = protonDrive
d.MainShare = protonDrive.MainShare
if d.RootFolderID == "root" || d.RootFolderID == "" {
d.RootFolderID = protonDrive.RootLink.LinkID
}
d.MainShareKR = protonDrive.MainShareKR
d.DefaultAddrKR = protonDrive.DefaultAddrKR
return nil
}
func (d *ProtonDrive) Drop(ctx context.Context) error {
return nil
}
func (d *ProtonDrive) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
entries, err := d.protonDrive.ListDirectory(ctx, dir.GetID())
if err != nil {
return nil, fmt.Errorf("failed to list directory: %w", err)
}
objects := make([]model.Obj, 0, len(entries))
for _, entry := range entries {
obj := &model.Object{
ID: entry.Link.LinkID,
Name: entry.Name,
Size: entry.Link.Size,
Modified: time.Unix(entry.Link.ModifyTime, 0),
IsFolder: entry.IsFolder,
}
objects = append(objects, obj)
}
return objects, nil
}
func (d *ProtonDrive) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
link, err := d.getLink(ctx, file.GetID())
if err != nil {
return nil, fmt.Errorf("failed to get file link: %w", err)
}
fileSystemAttrs, err := d.protonDrive.GetActiveRevisionAttrs(ctx, link)
if err != nil {
return nil, fmt.Errorf("failed to get file revision: %w", err)
}
// Size of the file after decryption
size := fileSystemAttrs.Size
rangeReaderFunc := func(rangeCtx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
length := httpRange.Length
if length < 0 || httpRange.Start+length > size {
length = size - httpRange.Start
}
reader, _, _, err := d.protonDrive.DownloadFile(rangeCtx, link, httpRange.Start)
if err != nil {
return nil, fmt.Errorf("failed to start download: %w", err)
}
return utils.ReadCloser{
Reader: io.LimitReader(reader, length),
Closer: reader,
}, nil
}
expiration := time.Minute
return &model.Link{
RangeReader: &model.FileRangeReader{
RangeReaderIF: stream.RateLimitRangeReaderFunc(rangeReaderFunc),
},
ContentLength: size,
Expiration: &expiration,
}, nil
}
func (d *ProtonDrive) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
id, err := d.protonDrive.CreateNewFolderByID(ctx, parentDir.GetID(), dirName)
if err != nil {
return nil, fmt.Errorf("failed to create directory: %w", err)
}
newDir := &model.Object{
ID: id,
Name: dirName,
IsFolder: true,
Modified: time.Now(),
}
return newDir, nil
}
func (d *ProtonDrive) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return d.DirectMove(ctx, srcObj, dstDir)
}
func (d *ProtonDrive) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if d.protonDrive == nil {
return nil, fmt.Errorf("protonDrive bridge is nil")
}
return d.DirectRename(ctx, srcObj, newName)
}
func (d *ProtonDrive) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if srcObj.IsDir() {
return nil, fmt.Errorf("directory copy not supported")
}
srcLink, err := d.getLink(ctx, srcObj.GetID())
if err != nil {
return nil, err
}
reader, linkSize, fileSystemAttrs, err := d.protonDrive.DownloadFile(ctx, srcLink, 0)
if err != nil {
return nil, fmt.Errorf("failed to download source file: %w", err)
}
defer reader.Close()
actualSize := linkSize
if fileSystemAttrs != nil && fileSystemAttrs.Size > 0 {
actualSize = fileSystemAttrs.Size
}
file := &stream.FileStream{
Ctx: ctx,
Obj: &model.Object{
Name: srcObj.GetName(),
// Use the accurate and real size
Size: actualSize,
Modified: srcObj.ModTime(),
},
Reader: reader,
}
defer file.Close()
return d.Put(ctx, dstDir, file, func(percentage float64) {})
}
func (d *ProtonDrive) Remove(ctx context.Context, obj model.Obj) error {
if obj.IsDir() {
return d.protonDrive.MoveFolderToTrashByID(ctx, obj.GetID(), false)
} else {
return d.protonDrive.MoveFileToTrashByID(ctx, obj.GetID())
}
}
func (d *ProtonDrive) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
return d.uploadFile(ctx, dstDir.GetID(), file, up)
}
func (d *ProtonDrive) GetDetails(ctx context.Context) (*model.StorageDetails, error) {
about, err := d.protonDrive.About(ctx)
if err != nil {
return nil, err
}
total := uint64(about.MaxSpace)
var free uint64
if about.UsedSpace < about.MaxSpace {
free = total - uint64(about.UsedSpace)
}
return &model.StorageDetails{
DiskUsage: model.DiskUsage{
TotalSpace: total,
FreeSpace: free,
},
}, nil
}
var _ driver.Driver = (*ProtonDrive)(nil)


@@ -1,56 +0,0 @@
package protondrive
/*
Package protondrive
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-18
Thanks to @henrybear327 for modded go-proton-api & Proton-API-Bridge
The power of open-source, the force of teamwork and the magic of reverse engineering!
D@' 3z K!7 - The King Of Cracking
Long live the Motherland))
*/
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/henrybear327/Proton-API-Bridge/common"
)
type Addition struct {
driver.RootID
Email string `json:"email" required:"true" type:"string"`
Password string `json:"password" required:"true" type:"string"`
TwoFACode string `json:"two_fa_code" type:"string"`
ChunkSize int64 `json:"chunk_size" type:"number" default:"100"`
UseReusableLogin bool `json:"use_reusable_login" type:"bool" default:"true" help:"Use reusable login credentials instead of username/password"`
ReusableCredential common.ReusableCredentialData
}
var config = driver.Config{
Name: "ProtonDrive",
LocalSort: true,
OnlyProxy: true,
DefaultRoot: "root",
NoLinkURL: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &ProtonDrive{
Addition: Addition{
UseReusableLogin: true,
},
apiBase: "https://drive.proton.me/api",
appVersion: "windows-drive@1.11.3+rclone+proton",
protonJson: "application/vnd.protonmail.v1+json",
sdkVersion: "js@0.3.0",
userAgent: "ProtonDrive/v1.70.0 (Windows NT 10.0.22000; Win64; x64)",
webDriveAV: "web-drive@5.2.0+0f69f7a8",
}
})
}


@@ -1,38 +0,0 @@
package protondrive
/*
Package protondrive
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-18
Thanks to @henrybear327 for modded go-proton-api & Proton-API-Bridge
The power of open-source, the force of teamwork and the magic of reverse engineering!
D@' 3z K!7 - The King Of Cracking
Long live the Motherland))
*/
type MoveRequest struct {
ParentLinkID string `json:"ParentLinkID"`
NodePassphrase string `json:"NodePassphrase"`
NodePassphraseSignature *string `json:"NodePassphraseSignature"`
Name string `json:"Name"`
NameSignatureEmail string `json:"NameSignatureEmail"`
Hash string `json:"Hash"`
OriginalHash string `json:"OriginalHash"`
ContentHash *string `json:"ContentHash"` // Maybe null
}
type RenameRequest struct {
Name string `json:"Name"` // PGP encrypted name
NameSignatureEmail string `json:"NameSignatureEmail"` // User's signature email
Hash string `json:"Hash"` // New name hash
OriginalHash string `json:"OriginalHash"` // Current name hash
}
type RenameResponse struct {
Code int `json:"Code"`
}


@@ -1,670 +0,0 @@
package protondrive
/*
Package protondrive
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-18
Thanks to @henrybear327 for modded go-proton-api & Proton-API-Bridge
The power of open-source, the force of teamwork and the magic of reverse engineering!
D@' 3z K!7 - The King Of Cracking
Long live the Motherland))
*/
import (
"bufio"
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/ProtonMail/gopenpgp/v2/crypto"
"github.com/henrybear327/go-proton-api"
)
func (d *ProtonDrive) uploadFile(ctx context.Context, parentLinkID string, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
_, err := d.getLink(ctx, parentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to get parent link: %w", err)
}
var reader io.Reader
// Use buffered reader with larger buffer for better performance
var bufferSize int
// File > 100MB (default)
if file.GetSize() > d.ChunkSize*1024*1024 {
// 256KB for large files
bufferSize = 256 * 1024
// File > 10MB
} else if file.GetSize() > 10*1024*1024 {
// 128KB for medium files
bufferSize = 128 * 1024
} else {
// 64KB for small files
bufferSize = 64 * 1024
}
// reader = bufio.NewReader(file)
reader = bufio.NewReaderSize(file, bufferSize)
reader = &driver.ReaderUpdatingProgress{
Reader: &stream.SimpleReaderWithSize{
Reader: reader,
Size: file.GetSize(),
},
UpdateProgress: up,
}
reader = driver.NewLimitedUploadStream(ctx, reader)
id, _, err := d.protonDrive.UploadFileByReader(ctx, parentLinkID, file.GetName(), file.ModTime(), reader, 0)
if err != nil {
return nil, fmt.Errorf("failed to upload file: %w", err)
}
return &model.Object{
ID: id,
Name: file.GetName(),
Size: file.GetSize(),
Modified: file.ModTime(),
IsFolder: false,
}, nil
}
func (d *ProtonDrive) encryptFileName(ctx context.Context, name string, parentLinkID string) (string, error) {
parentLink, err := d.getLink(ctx, parentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get parent link: %w", err)
}
// Get parent node keyring
parentNodeKR, err := d.getLinkKR(ctx, parentLink)
if err != nil {
return "", fmt.Errorf("failed to get parent keyring: %w", err)
}
// Temporary file (request)
tempReq := proton.CreateFileReq{
SignatureAddress: d.MainShare.Creator,
}
// Encrypt the filename
err = tempReq.SetName(name, d.DefaultAddrKR, parentNodeKR)
if err != nil {
return "", fmt.Errorf("failed to encrypt filename: %w", err)
}
return tempReq.Name, nil
}
func (d *ProtonDrive) generateFileNameHash(ctx context.Context, name string, parentLinkID string) (string, error) {
parentLink, err := d.getLink(ctx, parentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get parent link: %w", err)
}
// Get parent node keyring
parentNodeKR, err := d.getLinkKR(ctx, parentLink)
if err != nil {
return "", fmt.Errorf("failed to get parent keyring: %w", err)
}
signatureVerificationKR, err := d.getSignatureVerificationKeyring([]string{parentLink.SignatureEmail}, parentNodeKR)
if err != nil {
return "", fmt.Errorf("failed to get signature verification keyring: %w", err)
}
parentHashKey, err := parentLink.GetHashKey(parentNodeKR, signatureVerificationKR)
if err != nil {
return "", fmt.Errorf("failed to get parent hash key: %w", err)
}
nameHash, err := proton.GetNameHash(name, parentHashKey)
if err != nil {
return "", fmt.Errorf("failed to generate name hash: %w", err)
}
return nameHash, nil
}
func (d *ProtonDrive) getOriginalNameHash(link *proton.Link) (string, error) {
if link == nil {
return "", fmt.Errorf("link cannot be nil")
}
if link.Hash == "" {
return "", fmt.Errorf("link hash is empty")
}
return link.Hash, nil
}
func (d *ProtonDrive) getLink(ctx context.Context, linkID string) (*proton.Link, error) {
if linkID == "" {
return nil, fmt.Errorf("linkID cannot be empty")
}
link, err := d.c.GetLink(ctx, d.MainShare.ShareID, linkID)
if err != nil {
return nil, err
}
return &link, nil
}
func (d *ProtonDrive) getLinkKR(ctx context.Context, link *proton.Link) (*crypto.KeyRing, error) {
if link == nil {
return nil, fmt.Errorf("link cannot be nil")
}
// Root Link or Root Dir
if link.ParentLinkID == "" {
signatureVerificationKR, err := d.getSignatureVerificationKeyring([]string{link.SignatureEmail})
if err != nil {
return nil, err
}
return link.GetKeyRing(d.MainShareKR, signatureVerificationKR)
}
// Get parent keyring recursively
parentLink, err := d.getLink(ctx, link.ParentLinkID)
if err != nil {
return nil, err
}
parentNodeKR, err := d.getLinkKR(ctx, parentLink)
if err != nil {
return nil, err
}
signatureVerificationKR, err := d.getSignatureVerificationKeyring([]string{link.SignatureEmail})
if err != nil {
return nil, err
}
return link.GetKeyRing(parentNodeKR, signatureVerificationKR)
}
var (
ErrKeyPassOrSaltedKeyPassMustBeNotNil = errors.New("either keyPass or saltedKeyPass must be not nil")
ErrFailedToUnlockUserKeys = errors.New("failed to unlock user keys")
)
func getAccountKRs(ctx context.Context, c *proton.Client, keyPass, saltedKeyPass []byte) (*crypto.KeyRing, map[string]*crypto.KeyRing, map[string]proton.Address, []byte, error) {
user, err := c.GetUser(ctx)
if err != nil {
return nil, nil, nil, nil, err
}
// fmt.Printf("user %#v", user)
addrsArr, err := c.GetAddresses(ctx)
if err != nil {
return nil, nil, nil, nil, err
}
// fmt.Printf("addr %#v", addr)
if saltedKeyPass == nil {
if keyPass == nil {
return nil, nil, nil, nil, ErrKeyPassOrSaltedKeyPassMustBeNotNil
}
// Due to limitations, salts are stored using cacheCredentialToFile
salts, err := c.GetSalts(ctx)
if err != nil {
return nil, nil, nil, nil, err
}
// fmt.Printf("salts %#v", salts)
saltedKeyPass, err = salts.SaltForKey(keyPass, user.Keys.Primary().ID)
if err != nil {
return nil, nil, nil, nil, err
}
// fmt.Printf("saltedKeyPass ok")
}
userKR, addrKRs, err := proton.Unlock(user, addrsArr, saltedKeyPass, nil)
if err != nil {
return nil, nil, nil, nil, err
} else if userKR.CountDecryptionEntities() == 0 {
return nil, nil, nil, nil, ErrFailedToUnlockUserKeys
}
addrs := make(map[string]proton.Address)
for _, addr := range addrsArr {
addrs[addr.Email] = addr
}
return userKR, addrKRs, addrs, saltedKeyPass, nil
}
func (d *ProtonDrive) getSignatureVerificationKeyring(emailAddresses []string, verificationAddrKRs ...*crypto.KeyRing) (*crypto.KeyRing, error) {
ret, err := crypto.NewKeyRing(nil)
if err != nil {
return nil, err
}
for _, emailAddress := range emailAddresses {
if addr, ok := d.addrData[emailAddress]; ok {
if addrKR, exists := d.addrKRs[addr.ID]; exists {
err = d.addKeysFromKR(ret, addrKR)
if err != nil {
return nil, err
}
}
}
}
for _, kr := range verificationAddrKRs {
err = d.addKeysFromKR(ret, kr)
if err != nil {
return nil, err
}
}
if ret.CountEntities() == 0 {
return nil, fmt.Errorf("no keyring for signature verification")
}
return ret, nil
}
func (d *ProtonDrive) addKeysFromKR(kr *crypto.KeyRing, newKRs ...*crypto.KeyRing) error {
for i := range newKRs {
for _, key := range newKRs[i].GetKeys() {
err := kr.AddKey(key)
if err != nil {
return err
}
}
}
return nil
}
func (d *ProtonDrive) DirectRename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
// fmt.Printf("DEBUG DirectRename: path=%s, newName=%s", srcObj.GetPath(), newName)
if d.MainShare == nil || d.DefaultAddrKR == nil {
return nil, fmt.Errorf("missing required fields: MainShare=%v, DefaultAddrKR=%v",
d.MainShare != nil, d.DefaultAddrKR != nil)
}
if d.protonDrive == nil {
return nil, fmt.Errorf("protonDrive bridge is nil")
}
srcLink, err := d.getLink(ctx, srcObj.GetID())
if err != nil {
return nil, fmt.Errorf("failed to find source: %w", err)
}
parentLinkID := srcLink.ParentLinkID
if parentLinkID == "" {
return nil, fmt.Errorf("cannot rename root folder")
}
encryptedName, err := d.encryptFileName(ctx, newName, parentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to encrypt filename: %w", err)
}
newHash, err := d.generateFileNameHash(ctx, newName, parentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to generate new hash: %w", err)
}
originalHash, err := d.getOriginalNameHash(srcLink)
if err != nil {
return nil, fmt.Errorf("failed to get original hash: %w", err)
}
renameReq := RenameRequest{
Name: encryptedName,
NameSignatureEmail: d.MainShare.Creator,
Hash: newHash,
OriginalHash: originalHash,
}
err = d.executeRenameAPI(ctx, srcLink.LinkID, renameReq)
if err != nil {
return nil, fmt.Errorf("rename API call failed: %w", err)
}
return &model.Object{
ID: srcLink.LinkID,
Name: newName,
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
IsFolder: srcObj.IsDir(),
}, nil
}
func (d *ProtonDrive) executeRenameAPI(ctx context.Context, linkID string, req RenameRequest) error {
renameURL := fmt.Sprintf(d.apiBase+"/drive/v2/volumes/%s/links/%s/rename",
d.MainShare.VolumeID, linkID)
reqBody, err := json.Marshal(req)
if err != nil {
return fmt.Errorf("failed to marshal rename request: %w", err)
}
httpReq, err := http.NewRequestWithContext(ctx, "PUT", renameURL, bytes.NewReader(reqBody))
if err != nil {
return fmt.Errorf("failed to create HTTP request: %w", err)
}
httpReq.Header.Set("Content-Type", "application/json")
httpReq.Header.Set("Accept", d.protonJson)
httpReq.Header.Set("X-Pm-Appversion", d.webDriveAV)
httpReq.Header.Set("X-Pm-Drive-Sdk-Version", d.sdkVersion)
httpReq.Header.Set("X-Pm-Uid", d.ReusableCredential.UID)
httpReq.Header.Set("Authorization", "Bearer "+d.ReusableCredential.AccessToken)
client := &http.Client{}
resp, err := client.Do(httpReq)
if err != nil {
return fmt.Errorf("failed to execute rename request: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("rename failed with status %d", resp.StatusCode)
}
var renameResp RenameResponse
if err := json.NewDecoder(resp.Body).Decode(&renameResp); err != nil {
return fmt.Errorf("failed to decode rename response: %w", err)
}
if renameResp.Code != 1000 {
return fmt.Errorf("rename failed with code %d", renameResp.Code)
}
return nil
}
func (d *ProtonDrive) executeMoveAPI(ctx context.Context, linkID string, req MoveRequest) error {
// fmt.Printf("DEBUG Move Request - Name: %s\n", req.Name)
// fmt.Printf("DEBUG Move Request - Hash: %s\n", req.Hash)
// fmt.Printf("DEBUG Move Request - OriginalHash: %s\n", req.OriginalHash)
// fmt.Printf("DEBUG Move Request - ParentLinkID: %s\n", req.ParentLinkID)
// fmt.Printf("DEBUG Move Request - Name length: %d\n", len(req.Name))
// fmt.Printf("DEBUG Move Request - NameSignatureEmail: %s\n", req.NameSignatureEmail)
// fmt.Printf("DEBUG Move Request - ContentHash: %v\n", req.ContentHash)
// fmt.Printf("DEBUG Move Request - NodePassphrase length: %d\n", len(req.NodePassphrase))
// fmt.Printf("DEBUG Move Request - NodePassphraseSignature length: %d\n", len(req.NodePassphraseSignature))
// fmt.Printf("DEBUG Move Request - SrcLinkID: %s\n", linkID)
// fmt.Printf("DEBUG Move Request - DstParentLinkID: %s\n", req.ParentLinkID)
// fmt.Printf("DEBUG Move Request - ShareID: %s\n", d.MainShare.ShareID)
srcLink, _ := d.getLink(ctx, linkID)
if srcLink != nil && srcLink.ParentLinkID == req.ParentLinkID {
return fmt.Errorf("cannot move to same parent directory")
}
moveURL := fmt.Sprintf(d.apiBase+"/drive/v2/volumes/%s/links/%s/move",
d.MainShare.VolumeID, linkID)
reqBody, err := json.Marshal(req)
if err != nil {
return fmt.Errorf("failed to marshal move request: %w", err)
}
httpReq, err := http.NewRequestWithContext(ctx, "PUT", moveURL, bytes.NewReader(reqBody))
if err != nil {
return fmt.Errorf("failed to create HTTP request: %w", err)
}
httpReq.Header.Set("Authorization", "Bearer "+d.ReusableCredential.AccessToken)
httpReq.Header.Set("Accept", d.protonJson)
httpReq.Header.Set("X-Pm-Appversion", d.webDriveAV)
httpReq.Header.Set("X-Pm-Drive-Sdk-Version", d.sdkVersion)
httpReq.Header.Set("X-Pm-Uid", d.ReusableCredential.UID)
httpReq.Header.Set("Content-Type", "application/json")
client := &http.Client{}
resp, err := client.Do(httpReq)
if err != nil {
return fmt.Errorf("failed to execute move request: %w", err)
}
defer resp.Body.Close()
var moveResp RenameResponse
if err := json.NewDecoder(resp.Body).Decode(&moveResp); err != nil {
return fmt.Errorf("failed to decode move response: %w", err)
}
if moveResp.Code != 1000 {
return fmt.Errorf("move operation failed with code: %d", moveResp.Code)
}
return nil
}
func (d *ProtonDrive) DirectMove(ctx context.Context, srcObj model.Obj, dstDir model.Obj) (model.Obj, error) {
// fmt.Printf("DEBUG DirectMove: srcPath=%s, dstPath=%s", srcObj.GetPath(), dstDir.GetPath())
srcLink, err := d.getLink(ctx, srcObj.GetID())
if err != nil {
return nil, fmt.Errorf("failed to find source: %w", err)
}
dstParentLinkID := dstDir.GetID()
if srcObj.IsDir() {
// Check if destination is a descendant of source
if err := d.checkCircularMove(ctx, srcLink.LinkID, dstParentLinkID); err != nil {
return nil, err
}
}
// Encrypt the filename for the new location
encryptedName, err := d.encryptFileName(ctx, srcObj.GetName(), dstParentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to encrypt filename: %w", err)
}
newHash, err := d.generateNameHash(ctx, srcObj.GetName(), dstParentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to generate new hash: %w", err)
}
originalHash, err := d.getOriginalNameHash(srcLink)
if err != nil {
return nil, fmt.Errorf("failed to get original hash: %w", err)
}
// Re-encrypt node passphrase for new parent context
reencryptedPassphrase, err := d.reencryptNodePassphrase(ctx, srcLink, dstParentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to re-encrypt node passphrase: %w", err)
}
moveReq := MoveRequest{
ParentLinkID: dstParentLinkID,
NodePassphrase: reencryptedPassphrase,
Name: encryptedName,
NameSignatureEmail: d.MainShare.Creator,
Hash: newHash,
OriginalHash: originalHash,
ContentHash: nil,
// *** Causes rejection ***
/* NodePassphraseSignature: srcLink.NodePassphraseSignature, */
}
//fmt.Printf("DEBUG MoveRequest validation:\n")
//fmt.Printf(" Name length: %d\n", len(moveReq.Name))
//fmt.Printf(" Hash: %s\n", moveReq.Hash)
//fmt.Printf(" OriginalHash: %s\n", moveReq.OriginalHash)
//fmt.Printf(" NodePassphrase length: %d\n", len(moveReq.NodePassphrase))
/* fmt.Printf(" NodePassphraseSignature length: %d\n", len(moveReq.NodePassphraseSignature)) */
//fmt.Printf(" NameSignatureEmail: %s\n", moveReq.NameSignatureEmail)
err = d.executeMoveAPI(ctx, srcLink.LinkID, moveReq)
if err != nil {
return nil, fmt.Errorf("move API call failed: %w", err)
}
return &model.Object{
ID: srcLink.LinkID,
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
IsFolder: srcObj.IsDir(),
}, nil
}
func (d *ProtonDrive) reencryptNodePassphrase(ctx context.Context, srcLink *proton.Link, dstParentLinkID string) (string, error) {
// Get source parent link with metadata
srcParentLink, err := d.getLink(ctx, srcLink.ParentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get source parent link: %w", err)
}
// Get source parent keyring using link object
srcParentKR, err := d.getLinkKR(ctx, srcParentLink)
if err != nil {
return "", fmt.Errorf("failed to get source parent keyring: %w", err)
}
// Get destination parent link with metadata
dstParentLink, err := d.getLink(ctx, dstParentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get destination parent link: %w", err)
}
// Get destination parent keyring using link object
dstParentKR, err := d.getLinkKR(ctx, dstParentLink)
if err != nil {
return "", fmt.Errorf("failed to get destination parent keyring: %w", err)
}
// Re-encrypt the node passphrase from source parent context to destination parent context
reencryptedPassphrase, err := reencryptKeyPacket(srcParentKR, dstParentKR, d.DefaultAddrKR, srcLink.NodePassphrase)
if err != nil {
return "", fmt.Errorf("failed to re-encrypt key packet: %w", err)
}
return reencryptedPassphrase, nil
}
func (d *ProtonDrive) generateNameHash(ctx context.Context, name string, parentLinkID string) (string, error) {
parentLink, err := d.getLink(ctx, parentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get parent link: %w", err)
}
// Get parent node keyring
parentNodeKR, err := d.getLinkKR(ctx, parentLink)
if err != nil {
return "", fmt.Errorf("failed to get parent keyring: %w", err)
}
// Get signature verification keyring
signatureVerificationKR, err := d.getSignatureVerificationKeyring([]string{parentLink.SignatureEmail}, parentNodeKR)
if err != nil {
return "", fmt.Errorf("failed to get signature verification keyring: %w", err)
}
parentHashKey, err := parentLink.GetHashKey(parentNodeKR, signatureVerificationKR)
if err != nil {
return "", fmt.Errorf("failed to get parent hash key: %w", err)
}
nameHash, err := proton.GetNameHash(name, parentHashKey)
if err != nil {
return "", fmt.Errorf("failed to generate name hash: %w", err)
}
return nameHash, nil
}
func reencryptKeyPacket(srcKR, dstKR, _ *crypto.KeyRing, passphrase string) (string, error) { // addrKR (3)
oldSplitMessage, err := crypto.NewPGPSplitMessageFromArmored(passphrase)
if err != nil {
return "", err
}
sessionKey, err := srcKR.DecryptSessionKey(oldSplitMessage.KeyPacket)
if err != nil {
return "", err
}
newKeyPacket, err := dstKR.EncryptSessionKey(sessionKey)
if err != nil {
return "", err
}
newSplitMessage := crypto.NewPGPSplitMessage(newKeyPacket, oldSplitMessage.DataPacket)
return newSplitMessage.GetArmored()
}
func (d *ProtonDrive) checkCircularMove(ctx context.Context, srcLinkID, dstParentLinkID string) error {
currentLinkID := dstParentLinkID
for currentLinkID != "" && currentLinkID != d.RootFolderID {
if currentLinkID == srcLinkID {
return fmt.Errorf("cannot move folder into itself or its subfolder")
}
currentLink, err := d.getLink(ctx, currentLinkID)
if err != nil {
return err
}
currentLinkID = currentLink.ParentLinkID
}
return nil
}
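The ancestor walk in `checkCircularMove` can be exercised in isolation with an in-memory parent map standing in for `getLink` (names here are illustrative, not the driver's API):

```go
package main

import (
	"errors"
	"fmt"
)

// wouldCycle walks dst's ancestor chain and reports whether src is one
// of dst's ancestors, i.e. whether moving src under dst would nest a
// folder inside itself. parent maps a link ID to its parent link ID.
func wouldCycle(parent map[string]string, rootID, srcID, dstID string) error {
	for cur := dstID; cur != "" && cur != rootID; {
		if cur == srcID {
			return errors.New("cannot move folder into itself or its subfolder")
		}
		cur = parent[cur] // "" once the chain runs out
	}
	return nil
}

func main() {
	parent := map[string]string{"c": "b", "b": "a", "a": "root"}
	// Moving b under its own child c must fail; moving c under its
	// grandparent a is fine.
	fmt.Println(wouldCycle(parent, "root", "b", "c") != nil) // true
	fmt.Println(wouldCycle(parent, "root", "c", "a") == nil) // true
}
```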
func (d *ProtonDrive) authHandler(auth proton.Auth) {
if auth.AccessToken != d.ReusableCredential.AccessToken || auth.RefreshToken != d.ReusableCredential.RefreshToken {
d.ReusableCredential.UID = auth.UID
d.ReusableCredential.AccessToken = auth.AccessToken
d.ReusableCredential.RefreshToken = auth.RefreshToken
if err := d.initClient(context.Background()); err != nil {
fmt.Printf("ProtonDrive: failed to reinitialize client after auth refresh: %v\n", err)
}
op.MustSaveDriverStorage(d)
}
}
func (d *ProtonDrive) initClient(ctx context.Context) error {
clientOptions := []proton.Option{
proton.WithAppVersion(d.appVersion),
proton.WithUserAgent(d.userAgent),
}
manager := proton.New(clientOptions...)
d.c = manager.NewClient(d.ReusableCredential.UID, d.ReusableCredential.AccessToken, d.ReusableCredential.RefreshToken)
saltedKeyPassBytes, err := base64.StdEncoding.DecodeString(d.ReusableCredential.SaltedKeyPass)
if err != nil {
return fmt.Errorf("failed to decode salted key pass: %w", err)
}
_, addrKRs, addrs, _, err := getAccountKRs(ctx, d.c, nil, saltedKeyPassBytes)
if err != nil {
return fmt.Errorf("failed to get account keyrings: %w", err)
}
d.addrKRs = addrKRs
d.addrData = addrs
return nil
}

View File

@@ -1,137 +0,0 @@
package teldrive
import (
"fmt"
"net/http"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/go-resty/resty/v2"
"golang.org/x/net/context"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
func NewCopyManager(ctx context.Context, concurrent int, d *Teldrive) *CopyManager {
g, ctx := errgroup.WithContext(ctx)
return &CopyManager{
TaskChan: make(chan CopyTask, concurrent*2),
Sem: semaphore.NewWeighted(int64(concurrent)),
G: g,
Ctx: ctx,
d: d,
}
}
func (cm *CopyManager) startWorkers() {
workerCount := cap(cm.TaskChan) / 2
for i := 0; i < workerCount; i++ {
cm.G.Go(func() error {
return cm.worker()
})
}
}
func (cm *CopyManager) worker() error {
for {
select {
case task, ok := <-cm.TaskChan:
if !ok {
return nil
}
if err := cm.Sem.Acquire(cm.Ctx, 1); err != nil {
return err
}
err := cm.processFile(task)
cm.Sem.Release(1)
if err != nil {
return fmt.Errorf("task processing failed: %w", err)
}
case <-cm.Ctx.Done():
return cm.Ctx.Err()
}
}
}
func (cm *CopyManager) generateTasks(ctx context.Context, srcObj, dstDir model.Obj) error {
if srcObj.IsDir() {
return cm.generateFolderTasks(ctx, srcObj, dstDir)
} else {
// add single file task directly
select {
case cm.TaskChan <- CopyTask{SrcObj: srcObj, DstDir: dstDir}:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
}
func (cm *CopyManager) generateFolderTasks(ctx context.Context, srcDir, dstDir model.Obj) error {
objs, err := cm.d.List(ctx, srcDir, model.ListArgs{})
if err != nil {
return fmt.Errorf("failed to list directory %s: %w", srcDir.GetPath(), err)
}
err = cm.d.MakeDir(cm.Ctx, dstDir, srcDir.GetName())
if err != nil || len(objs) == 0 {
return err
}
newDstDir := &model.Object{
ID: dstDir.GetID(),
Path: dstDir.GetPath() + "/" + srcDir.GetName(),
Name: srcDir.GetName(),
IsFolder: true,
}
for _, file := range objs {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
srcFile := &model.Object{
ID: file.GetID(),
Path: srcDir.GetPath() + "/" + file.GetName(),
Name: file.GetName(),
IsFolder: file.IsDir(),
}
// Generate copy tasks recursively
if err := cm.generateTasks(ctx, srcFile, newDstDir); err != nil {
return err
}
}
return nil
}
func (cm *CopyManager) processFile(task CopyTask) error {
return cm.copySingleFile(cm.Ctx, task.SrcObj, task.DstDir)
}
func (cm *CopyManager) copySingleFile(ctx context.Context, srcObj, dstDir model.Obj) error {
// `override copy mode` should delete the existing file
if obj, err := cm.d.getFile(dstDir.GetPath(), srcObj.GetName(), srcObj.IsDir()); err == nil {
if err := cm.d.Remove(ctx, obj); err != nil {
return fmt.Errorf("failed to remove existing file: %w", err)
}
}
// Do copy
return cm.d.request(http.MethodPost, "/api/files/{id}/copy", func(req *resty.Request) {
req.SetPathParam("id", srcObj.GetID())
req.SetBody(base.Json{
"newName": srcObj.GetName(),
"destination": dstDir.GetPath(),
})
}, nil)
}

View File

@@ -1,217 +0,0 @@
package teldrive
import (
"context"
"fmt"
"math"
"net/http"
"net/url"
"strings"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Teldrive struct {
model.Storage
Addition
}
func (d *Teldrive) Config() driver.Config {
return config
}
func (d *Teldrive) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Teldrive) Init(ctx context.Context) error {
d.Address = strings.TrimSuffix(d.Address, "/")
if d.Cookie == "" || !strings.HasPrefix(d.Cookie, "access_token=") {
return fmt.Errorf("cookie must start with 'access_token='")
}
if d.UploadConcurrency == 0 {
d.UploadConcurrency = 4
}
if d.ChunkSize == 0 {
d.ChunkSize = 10
}
op.MustSaveDriverStorage(d)
return nil
}
func (d *Teldrive) Drop(ctx context.Context) error {
return nil
}
func (d *Teldrive) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var listResp ListResp
err := d.request(http.MethodGet, "/api/files", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"path": dir.GetPath(),
"limit": "1000", // overide default 500, TODO pagination
})
}, &listResp)
if err != nil {
return nil, err
}
return utils.SliceConvert(listResp.Items, func(src Object) (model.Obj, error) {
return &model.Object{
ID: src.ID,
Name: src.Name,
Size: func() int64 {
if src.Type == "folder" {
return 0
}
return src.Size
}(),
IsFolder: src.Type == "folder",
Modified: src.UpdatedAt,
}, nil
})
}
func (d *Teldrive) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if d.UseShareLink {
shareObj, err := d.getShareFileById(file.GetID())
if err != nil || shareObj == nil {
if err := d.createShareFile(file.GetID()); err != nil {
return nil, err
}
shareObj, err = d.getShareFileById(file.GetID())
if err != nil {
return nil, err
}
}
return &model.Link{
URL: d.Address + "/api/shares/" + url.PathEscape(shareObj.Id) + "/files/" + url.PathEscape(file.GetID()) + "/" + url.PathEscape(file.GetName()),
}, nil
}
return &model.Link{
URL: d.Address + "/api/files/" + url.PathEscape(file.GetID()) + "/" + url.PathEscape(file.GetName()),
Header: http.Header{
"Cookie": {d.Cookie},
},
}, nil
}
func (d *Teldrive) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
return d.request(http.MethodPost, "/api/files/mkdir", func(req *resty.Request) {
req.SetBody(map[string]interface{}{
"path": parentDir.GetPath() + "/" + dirName,
})
}, nil)
}
func (d *Teldrive) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
body := base.Json{
"ids": []string{srcObj.GetID()},
"destinationParent": dstDir.GetID(),
}
return d.request(http.MethodPost, "/api/files/move", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
func (d *Teldrive) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
body := base.Json{
"name": newName,
}
return d.request(http.MethodPatch, "/api/files/{id}", func(req *resty.Request) {
req.SetPathParam("id", srcObj.GetID())
req.SetBody(body)
}, nil)
}
func (d *Teldrive) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
copyConcurrentLimit := 4
copyManager := NewCopyManager(ctx, copyConcurrentLimit, d)
copyManager.startWorkers()
copyManager.G.Go(func() error {
defer close(copyManager.TaskChan)
return copyManager.generateTasks(ctx, srcObj, dstDir)
})
return copyManager.G.Wait()
}
func (d *Teldrive) Remove(ctx context.Context, obj model.Obj) error {
body := base.Json{
"ids": []string{obj.GetID()},
}
return d.request(http.MethodPost, "/api/files/delete", func(req *resty.Request) {
req.SetBody(body)
}, nil)
}
func (d *Teldrive) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
fileId := uuid.New().String()
chunkSizeInMB := d.ChunkSize
chunkSize := chunkSizeInMB * 1024 * 1024 // Convert MB to bytes
totalSize := file.GetSize()
totalParts := int(math.Ceil(float64(totalSize) / float64(chunkSize)))
maxRetried := 3
// delete the upload task when finished or failed
defer func() {
_ = d.request(http.MethodDelete, "/api/uploads/{id}", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, nil)
}()
if obj, err := d.getFile(dstDir.GetPath(), file.GetName(), file.IsDir()); err == nil {
if err = d.Remove(ctx, obj); err != nil {
return err
}
}
// start the upload process
if err := d.request(http.MethodGet, "/api/uploads/fileId", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, nil); err != nil {
return err
}
if totalSize == 0 {
return d.touch(file.GetName(), dstDir.GetPath())
}
if totalParts <= 1 {
return d.doSingleUpload(ctx, dstDir, file, up, totalParts, chunkSize, fileId)
}
return d.doMultiUpload(ctx, dstDir, file, up, maxRetried, totalParts, chunkSize, fileId)
}
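The chunk math in `Put` rounds the part count up so a trailing short chunk still counts as a part. A minimal sketch of that calculation (`partsFor` is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"math"
)

// partsFor mirrors Put: the chunk size is configured in MiB and the
// part count is the size divided by the chunk size, rounded up.
func partsFor(totalSize, chunkSizeMiB int64) int {
	chunkSize := chunkSizeMiB * 1024 * 1024
	return int(math.Ceil(float64(totalSize) / float64(chunkSize)))
}

func main() {
	fmt.Println(partsFor(25*1024*1024, 10)) // 3: two full chunks plus a 5 MiB tail
	fmt.Println(partsFor(10*1024*1024, 10)) // 1: exactly one chunk
	fmt.Println(partsFor(0, 10))            // 0: empty file, handled via touch()
}
```

This is why `Put` special-cases `totalSize == 0` (create an empty file) and `totalParts <= 1` (single upload) before falling through to the multi-part path.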
func (d *Teldrive) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Teldrive) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Teldrive) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Teldrive) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *Teldrive) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Teldrive)(nil)

View File

@@ -1,26 +0,0 @@
package teldrive
import (
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/op"
)
type Addition struct {
driver.RootPath
Address string `json:"url" required:"true"`
Cookie string `json:"cookie" type:"string" required:"true" help:"access_token=xxx"`
UseShareLink bool `json:"use_share_link" type:"bool" default:"false" help:"Create share link when getting link to support 302. If disabled, you need to enable web proxy."`
ChunkSize int64 `json:"chunk_size" type:"number" default:"10" help:"Chunk size in MiB"`
UploadConcurrency int64 `json:"upload_concurrency" type:"number" default:"4" help:"Concurrent upload requests"`
}
var config = driver.Config{
Name: "Teldrive",
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Teldrive{}
})
}

View File

@@ -1,78 +0,0 @@
package teldrive
import (
"context"
"io"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
type ErrResp struct {
Code int `json:"code"`
Message string `json:"message"`
}
type Object struct {
ID string `json:"id"`
Name string `json:"name"`
Type string `json:"type"`
MimeType string `json:"mimeType"`
Category string `json:"category,omitempty"`
ParentId string `json:"parentId"`
Size int64 `json:"size"`
Encrypted bool `json:"encrypted"`
UpdatedAt time.Time `json:"updatedAt"`
}
type ListResp struct {
Items []Object `json:"items"`
Meta struct {
Count int `json:"count"`
TotalPages int `json:"totalPages"`
CurrentPage int `json:"currentPage"`
} `json:"meta"`
}
type FilePart struct {
Name string `json:"name"`
PartId int `json:"partId"`
PartNo int `json:"partNo"`
ChannelId int `json:"channelId"`
Size int `json:"size"`
Encrypted bool `json:"encrypted"`
Salt string `json:"salt"`
}
type chunkTask struct {
chunkIdx int
fileName string
chunkSize int64
reader io.ReadSeeker
ss stream.StreamSectionReaderIF
}
type CopyManager struct {
TaskChan chan CopyTask
Sem *semaphore.Weighted
G *errgroup.Group
Ctx context.Context
d *Teldrive
}
type CopyTask struct {
SrcObj model.Obj
DstDir model.Obj
}
type ShareObj struct {
Id string `json:"id"`
Protected bool `json:"protected"`
UserId int `json:"userId"`
Type string `json:"type"`
Name string `json:"name"`
ExpiresAt time.Time `json:"expiresAt"`
}

View File

@@ -1,373 +0,0 @@
package teldrive
import (
"fmt"
"io"
"net/http"
"sort"
"strconv"
"sync"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/stream"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2"
"github.com/pkg/errors"
"golang.org/x/net/context"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
// create empty file
func (d *Teldrive) touch(name, path string) error {
uploadBody := base.Json{
"name": name,
"type": "file",
"path": path,
}
if err := d.request(http.MethodPost, "/api/files", func(req *resty.Request) {
req.SetBody(uploadBody)
}, nil); err != nil {
return err
}
return nil
}
func (d *Teldrive) createFileOnUploadSuccess(name, id, path string, uploadedFileParts []FilePart, totalSize int64) error {
remoteFileParts, err := d.getFilePart(id)
if err != nil {
return err
}
// check if the uploaded file parts match the remote file parts
if len(remoteFileParts) != len(uploadedFileParts) {
return fmt.Errorf("[Teldrive] file parts count mismatch: expected %d, got %d", len(uploadedFileParts), len(remoteFileParts))
}
formatParts := make([]base.Json, 0)
for _, p := range remoteFileParts {
formatParts = append(formatParts, base.Json{
"id": p.PartId,
"salt": p.Salt,
})
}
uploadBody := base.Json{
"name": name,
"type": "file",
"path": path,
"parts": formatParts,
"size": totalSize,
}
// create file here
if err := d.request(http.MethodPost, "/api/files", func(req *resty.Request) {
req.SetBody(uploadBody)
}, nil); err != nil {
return err
}
return nil
}
func (d *Teldrive) checkFilePartExist(fileId string, partId int) (FilePart, error) {
var uploadedParts []FilePart
var filePart FilePart
if err := d.request(http.MethodGet, "/api/uploads/{id}", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, &uploadedParts); err != nil {
return filePart, err
}
for _, part := range uploadedParts {
if part.PartId == partId {
return part, nil
}
}
return filePart, nil
}
func (d *Teldrive) getFilePart(fileId string) ([]FilePart, error) {
var uploadedParts []FilePart
if err := d.request(http.MethodGet, "/api/uploads/{id}", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, &uploadedParts); err != nil {
return nil, err
}
return uploadedParts, nil
}
func (d *Teldrive) singleUploadRequest(fileId string, callback base.ReqCallback, resp interface{}) error {
url := d.Address + "/api/uploads/" + fileId
client := resty.New().SetTimeout(0)
ctx := context.Background()
req := client.R().
SetContext(ctx)
req.SetHeader("Cookie", d.Cookie)
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.AddRetryCondition(func(r *resty.Response, err error) bool {
return false
})
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
var e ErrResp
req.SetError(&e)
_req, err := req.Execute(http.MethodPost, url)
if err != nil {
return err
}
if _req.IsError() {
return &e
}
return nil
}
func (d *Teldrive) doSingleUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up model.UpdateProgress,
totalParts int, chunkSize int64, fileId string) error {
totalSize := file.GetSize()
var fileParts []FilePart
var uploaded int64 = 0
ss, err := stream.NewStreamSectionReader(file, int(totalSize), &up)
if err != nil {
return err
}
for uploaded < totalSize {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
curChunkSize := min(totalSize-uploaded, chunkSize)
rd, err := ss.GetSectionReader(uploaded, curChunkSize)
if err != nil {
return err
}
filePart := &FilePart{}
if err := retry.Do(func() error {
if _, err := rd.Seek(0, io.SeekStart); err != nil {
return err
}
if err := d.singleUploadRequest(fileId, func(req *resty.Request) {
uploadParams := map[string]string{
"partName": func() string {
digits := len(strconv.Itoa(totalParts))
return file.GetName() + fmt.Sprintf(".%0*d", digits, 1)
}(),
"partNo": strconv.Itoa(1),
"fileName": file.GetName(),
}
req.SetQueryParams(uploadParams)
req.SetBody(driver.NewLimitedUploadStream(ctx, rd))
req.SetHeader("Content-Length", strconv.FormatInt(curChunkSize, 10))
}, filePart); err != nil {
return err
}
return nil
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),
retry.Delay(time.Second)); err != nil {
return err
}
if filePart.Name != "" {
fileParts = append(fileParts, *filePart)
uploaded += curChunkSize
up(float64(uploaded) / float64(totalSize))
ss.FreeSectionReader(rd)
}
}
return d.createFileOnUploadSuccess(file.GetName(), fileId, dstDir.GetPath(), fileParts, totalSize)
}
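The `partName` closure above zero-pads the part index to the width of `totalParts` so that part names sort lexicographically. A standalone sketch of that scheme (the helper name `partName` is ours, not part of the driver):

```go
package main

import (
	"fmt"
	"strconv"
)

// partName zero-pads the part index to the number of digits in
// totalParts, so "file.bin.03" sorts before "file.bin.10".
func partName(fileName string, totalParts, partNo int) string {
	digits := len(strconv.Itoa(totalParts))
	return fileName + fmt.Sprintf(".%0*d", digits, partNo)
}

func main() {
	fmt.Println(partName("file.bin", 12, 3))  // file.bin.03
	fmt.Println(partName("file.bin", 120, 3)) // file.bin.003
}
```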
func (d *Teldrive) doMultiUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up model.UpdateProgress,
maxRetried, totalParts int, chunkSize int64, fileId string) error {
concurrent := d.UploadConcurrency
g, ctx := errgroup.WithContext(ctx)
sem := semaphore.NewWeighted(int64(concurrent))
chunkChan := make(chan chunkTask, concurrent*2)
resultChan := make(chan FilePart, concurrent)
totalSize := file.GetSize()
ss, err := stream.NewStreamSectionReader(file, int(totalSize), &up)
if err != nil {
return err
}
ssLock := sync.Mutex{}
g.Go(func() error {
defer close(chunkChan)
chunkIdx := 0
for chunkIdx < totalParts {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
offset := int64(chunkIdx) * chunkSize
curChunkSize := min(totalSize-offset, chunkSize)
ssLock.Lock()
reader, err := ss.GetSectionReader(offset, curChunkSize)
ssLock.Unlock()
if err != nil {
return err
}
task := chunkTask{
chunkIdx: chunkIdx + 1,
chunkSize: curChunkSize,
fileName: file.GetName(),
reader: reader,
ss: ss,
}
// FreeSectionReader will be called in d.uploadSingleChunk
select {
case chunkChan <- task:
chunkIdx++
case <-ctx.Done():
return ctx.Err()
}
}
return nil
})
for i := 0; i < int(concurrent); i++ {
g.Go(func() error {
for task := range chunkChan {
if err := sem.Acquire(ctx, 1); err != nil {
return err
}
filePart, err := d.uploadSingleChunk(ctx, fileId, task, totalParts, maxRetried)
sem.Release(1)
if err != nil {
return fmt.Errorf("upload chunk %d failed: %w", task.chunkIdx, err)
}
select {
case resultChan <- *filePart:
case <-ctx.Done():
return ctx.Err()
}
}
return nil
})
}
var fileParts []FilePart
var collectErr error
collectDone := make(chan struct{})
go func() {
defer close(collectDone)
fileParts = make([]FilePart, 0, totalParts)
done := make(chan error, 1)
go func() {
done <- g.Wait()
close(resultChan)
}()
for {
select {
case filePart, ok := <-resultChan:
if !ok {
collectErr = <-done
return
}
fileParts = append(fileParts, filePart)
case err := <-done:
collectErr = err
return
}
}
}()
<-collectDone
if collectErr != nil {
return fmt.Errorf("multi-upload failed: %w", collectErr)
}
sort.Slice(fileParts, func(i, j int) bool {
return fileParts[i].PartNo < fileParts[j].PartNo
})
return d.createFileOnUploadSuccess(file.GetName(), fileId, dstDir.GetPath(), fileParts, totalSize)
}
func (d *Teldrive) uploadSingleChunk(ctx context.Context, fileId string, task chunkTask, totalParts, maxRetried int) (*FilePart, error) {
filePart := &FilePart{}
retryCount := 0
defer task.ss.FreeSectionReader(task.reader)
for {
select {
case <-ctx.Done():
return nil, ctx.Err()
default:
}
if existingPart, err := d.checkFilePartExist(fileId, task.chunkIdx); err == nil && existingPart.Name != "" {
return &existingPart, nil
}
err := d.singleUploadRequest(fileId, func(req *resty.Request) {
uploadParams := map[string]string{
"partName": func() string {
digits := len(strconv.Itoa(totalParts))
return task.fileName + fmt.Sprintf(".%0*d", digits, task.chunkIdx)
}(),
"partNo": strconv.Itoa(task.chunkIdx),
"fileName": task.fileName,
}
req.SetQueryParams(uploadParams)
req.SetBody(driver.NewLimitedUploadStream(ctx, task.reader))
req.SetHeader("Content-Length", strconv.Itoa(int(task.chunkSize)))
}, filePart)
if err == nil {
return filePart, nil
}
if retryCount >= maxRetried {
return nil, fmt.Errorf("upload failed after %d retries: %w", maxRetried, err)
}
if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
continue
}
retryCount++
utils.Log.Errorf("[Teldrive] upload error: %v, retrying (attempt %d)", err, retryCount)
backoffDuration := time.Duration(retryCount*retryCount) * time.Second
if backoffDuration > 30*time.Second {
backoffDuration = 30 * time.Second
}
select {
case <-time.After(backoffDuration):
case <-ctx.Done():
return nil, ctx.Err()
}
}
}


@@ -1,109 +0,0 @@
package teldrive
import (
"fmt"
"net/http"
"time"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/go-resty/resty/v2"
)
// do others that are not defined in the Driver interface
func (d *Teldrive) request(method string, pathname string, callback base.ReqCallback, resp interface{}) error {
url := d.Address + pathname
req := base.RestyClient.R()
req.SetHeader("Cookie", d.Cookie)
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
var e ErrResp
req.SetError(&e)
_req, err := req.Execute(method, url)
if err != nil {
return err
}
if _req.IsError() {
return &e
}
return nil
}
func (d *Teldrive) getFile(path, name string, isFolder bool) (model.Obj, error) {
resp := &ListResp{}
err := d.request(http.MethodGet, "/api/files", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"path": path,
"name": name,
"type": func() string {
if isFolder {
return "folder"
}
return "file"
}(),
"operation": "find",
})
}, resp)
if err != nil {
return nil, err
}
if len(resp.Items) == 0 {
return nil, fmt.Errorf("file not found: %s/%s", path, name)
}
obj := resp.Items[0]
return &model.Object{
ID: obj.ID,
Name: obj.Name,
Size: obj.Size,
IsFolder: obj.Type == "folder",
}, err
}
func (err *ErrResp) Error() string {
if err == nil {
return ""
}
return fmt.Sprintf("[Teldrive] message:%s Error code:%d", err.Message, err.Code)
}
func (d *Teldrive) createShareFile(fileId string) error {
var errResp ErrResp
if err := d.request(http.MethodPost, "/api/files/{id}/share", func(req *resty.Request) {
req.SetPathParam("id", fileId)
req.SetBody(base.Json{
"expiresAt": getDateTime(),
})
}, &errResp); err != nil {
return err
}
if errResp.Message != "" {
return &errResp
}
return nil
}
func (d *Teldrive) getShareFileById(fileId string) (*ShareObj, error) {
var shareObj ShareObj
if err := d.request(http.MethodGet, "/api/files/{id}/share", func(req *resty.Request) {
req.SetPathParam("id", fileId)
}, &shareObj); err != nil {
return nil, err
}
return &shareObj, nil
}
func getDateTime() string {
now := time.Now().UTC()
formattedWithMs := now.Add(time.Hour * 1).Format("2006-01-02T15:04:05.000Z")
return formattedWithMs
}


@@ -1,39 +0,0 @@
#!/bin/sh
umask ${UMASK}
if [ "$1" = "version" ]; then
./openlist version
else
# Check that the current user has write and execute permissions on the ./data directory
if [ -d ./data ]; then
if ! [ -w ./data ] || ! [ -x ./data ]; then
cat <<EOF
Error: Current user does not have write and/or execute permissions for the ./data directory: $(pwd)/data
Please visit https://doc.oplist.org/guide/installation/docker#for-version-after-v4-1-0 for more information.
错误:当前用户没有 ./data 目录($(pwd)/data的写和/或执行权限。
请访问 https://doc.oplist.org/guide/installation/docker#v4-1-0-%E4%BB%A5%E5%90%8E%E7%89%88%E6%9C%AC 获取更多信息。
Exiting...
EOF
exit 1
fi
fi
# Define the target directory path for aria2 service
ARIA2_DIR="/opt/service/start/aria2"
if [ "$RUN_ARIA2" = "true" ]; then
# If aria2 should run and target directory doesn't exist, copy it
if [ ! -d "$ARIA2_DIR" ]; then
mkdir -p "$ARIA2_DIR"
cp -r /opt/service/stop/aria2/* "$ARIA2_DIR" 2>/dev/null
fi
runsvdir /opt/service/start &
else
# If aria2 should NOT run and target directory exists, remove it
if [ -d "$ARIA2_DIR" ]; then
rm -rf "$ARIA2_DIR"
fi
fi
exec ./openlist server --no-prefix
fi

go.mod

@@ -1,304 +1,50 @@
module github.com/OpenListTeam/OpenList/v4
module github.com/OpenListTeam/OpenList/v5
go 1.23.4
go 1.24
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.2
github.com/OpenListTeam/go-cache v0.1.0
github.com/OpenListTeam/sftpd-openlist v1.0.1
github.com/OpenListTeam/tache v0.2.0
github.com/OpenListTeam/times v0.1.0
github.com/OpenListTeam/wopan-sdk-go v0.1.5
github.com/ProtonMail/go-crypto v1.3.0
github.com/ProtonMail/gopenpgp/v2 v2.9.0
github.com/SheltonZhu/115driver v1.1.1
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
github.com/avast/retry-go v3.0.0+incompatible
github.com/aws/aws-sdk-go v1.55.7
github.com/blevesearch/bleve/v2 v2.5.2
github.com/caarlos0/env/v9 v9.0.0
github.com/charmbracelet/bubbles v0.21.0
github.com/charmbracelet/bubbletea v1.3.6
github.com/charmbracelet/lipgloss v1.1.0
github.com/city404/v6-public-rpc-proto/go v0.0.0-20240817070657-90f8e24b653e
github.com/cloudsoda/go-smb2 v0.0.0-20250228001242-d4c70e6251cc
github.com/coreos/go-oidc v2.3.0+incompatible
github.com/deckarep/golang-set/v2 v2.8.0
github.com/dhowden/tag v0.0.0-20240417053706-3d75831295e8
github.com/disintegration/imaging v1.6.2
github.com/dlclark/regexp2 v1.11.5
github.com/dustinxie/ecc v0.0.0-20210511000915-959544187564
github.com/fclairamb/ftpserverlib v0.26.1-0.20250709223522-4a925d79caf6
github.com/foxxorcat/mopan-sdk-go v0.1.6
github.com/foxxorcat/weiyun-sdk-go v0.1.3
github.com/gin-contrib/cors v1.7.6
github.com/gin-gonic/gin v1.10.1
github.com/go-resty/resty/v2 v2.16.5
github.com/go-webauthn/webauthn v0.13.4
github.com/golang-jwt/jwt/v4 v4.5.2
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3
github.com/halalcloud/golang-sdk-lite v0.0.0-20251006164234-3c629727c499
github.com/hekmon/transmissionrpc/v3 v3.0.0
github.com/henrybear327/go-proton-api v1.0.0
github.com/ipfs/go-ipfs-api v0.7.0
github.com/itsHenry35/gofakes3 v0.0.8
github.com/jlaffaye/ftp v0.2.1-0.20250831012827-3f092e051c94
github.com/hashicorp/go-plugin v1.7.0
github.com/json-iterator/go v1.1.12
github.com/kdomanski/iso9660 v0.4.0
github.com/maruel/natural v1.1.1
github.com/meilisearch/meilisearch-go v0.32.0
github.com/mholt/archives v0.1.3
github.com/natefinch/lumberjack v2.0.0+incompatible
github.com/ncw/swift/v2 v2.0.4
github.com/pkg/errors v0.9.1
github.com/pkg/sftp v1.13.9
github.com/pquerna/otp v1.5.0
github.com/quic-go/quic-go v0.54.1
github.com/rclone/rclone v1.70.3
github.com/saintfish/chardet v0.0.0-20230101081208-5e3ef4b5456d
github.com/shirou/gopsutil/v4 v4.25.5
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.14.0
github.com/spf13/cobra v1.9.1
github.com/stretchr/testify v1.11.1
github.com/t3rm1n4l/go-mega v0.0.0-20241213151442-a19cff0ec7b5
github.com/tchap/go-patricia/v2 v2.3.3
github.com/u2takey/ffmpeg-go v0.5.0
github.com/upyun/go-sdk/v3 v3.0.4
github.com/winfsp/cgofuse v1.6.0
github.com/yeka/zip v0.0.0-20231116150916-03d6312748a9
github.com/zzzhr1990/go-common-entity v0.0.0-20250202070650-1a200048f0d3
golang.org/x/crypto v0.40.0
golang.org/x/image v0.29.0
golang.org/x/net v0.42.0
golang.org/x/oauth2 v0.30.0
golang.org/x/time v0.12.0
google.golang.org/appengine v1.6.8
gopkg.in/ldap.v3 v3.1.0
gorm.io/driver/mysql v1.5.7
gorm.io/driver/postgres v1.5.9
gorm.io/driver/sqlite v1.5.6
gorm.io/gorm v1.25.11
golang.org/x/net v0.43.0
google.golang.org/grpc v1.74.2
google.golang.org/protobuf v1.36.7
)
require (
cloud.google.com/go/compute/metadata v0.7.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
github.com/ProtonMail/bcrypt v0.0.0-20211005172633-e235017c1baf // indirect
github.com/ProtonMail/gluon v0.17.1-0.20230724134000-308be39be96e // indirect
github.com/ProtonMail/go-mime v0.0.0-20230322103455-7d82a3887f2f // indirect
github.com/ProtonMail/go-srp v0.0.7 // indirect
github.com/PuerkitoBio/goquery v1.10.3 // indirect
github.com/RoaringBitmap/roaring/v2 v2.4.5 // indirect
github.com/andybalholm/cascadia v1.3.3 // indirect
github.com/bradenaw/juniper v0.15.3 // indirect
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
github.com/cloudsoda/sddl v0.0.0-20250224235906-926454e91efc // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cronokirby/saferith v0.33.0 // indirect
github.com/ebitengine/purego v0.8.4 // indirect
github.com/emersion/go-message v0.18.2 // indirect
github.com/emersion/go-vcard v0.0.0-20241024213814-c9703dde27ff // indirect
github.com/geoffgarside/ber v1.2.0 // indirect
github.com/hashicorp/go-uuid v1.0.3 // indirect
github.com/jcmturner/aescts/v2 v2.0.0 // indirect
github.com/jcmturner/dnsutils/v2 v2.0.0 // indirect
github.com/jcmturner/gofork v1.7.6 // indirect
github.com/jcmturner/goidentity/v6 v6.0.1 // indirect
github.com/jcmturner/gokrb5/v8 v8.4.4 // indirect
github.com/jcmturner/rpc/v2 v2.0.3 // indirect
github.com/lanrat/extsort v1.0.2 // indirect
github.com/mikelolasagasti/xz v1.0.1 // indirect
github.com/minio/minlz v1.0.0 // indirect
github.com/minio/xxml v0.0.3 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/otiai10/mint v1.6.3 // indirect
github.com/quic-go/qpack v0.5.1 // indirect
github.com/relvacode/iso8601 v1.6.0 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
go.uber.org/mock v0.5.0 // indirect
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 // indirect
golang.org/x/mod v0.27.0 // indirect
gopkg.in/go-jose/go-jose.v2 v2.6.3 // indirect
)
require (
github.com/OpenListTeam/115-sdk-go v0.2.2
github.com/STARRY-S/zip v0.2.1 // indirect
github.com/aymerick/douceur v0.2.0 // indirect
github.com/blevesearch/go-faiss v1.0.25 // indirect
github.com/blevesearch/zapx/v16 v16.2.4 // indirect
github.com/bodgit/plumbing v1.3.0 // indirect
github.com/bodgit/sevenzip v1.6.1
github.com/bodgit/windows v1.0.1 // indirect
github.com/bytedance/sonic/loader v0.2.4 // indirect
github.com/charmbracelet/x/ansi v0.9.3 // indirect
github.com/charmbracelet/x/term v0.2.1 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
github.com/cloudwego/base64x v0.1.5 // indirect
github.com/dsnet/compress v0.0.2-0.20230904184137-39efe44ab707 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
github.com/fclairamb/go-log v0.6.0 // indirect
github.com/gorilla/css v1.0.1 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hekmon/cunits/v2 v2.1.0 // indirect
github.com/ipfs/boxo v0.12.0 // indirect
github.com/jackc/puddle/v2 v2.2.1 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/matoous/go-nanoid/v2 v2.1.0 // indirect
github.com/microcosm-cc/bluemonday v1.0.27
github.com/nwaples/rardecode/v2 v2.1.1
github.com/sorairolake/lzip-go v0.3.5 // indirect
github.com/taruti/bytepool v0.0.0-20160310082835-5e3a9ea56543 // indirect
github.com/ulikunitz/xz v0.5.12 // indirect
github.com/yuin/goldmark v1.7.13
go4.org v0.0.0-20230225012048-214862532bf5
resty.dev/v3 v3.0.0-beta.2 // indirect
)
require (
github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd // indirect
github.com/OpenListTeam/gsync v0.1.0 // indirect
github.com/abbot/go-http-auth v0.4.0 // indirect
github.com/aead/ecdh v0.2.0 // indirect
github.com/andreburgaud/crypt2go v1.8.0 // indirect
github.com/andybalholm/brotli v1.1.2-0.20250424173009-453214e765f3 // indirect
github.com/axgle/mahonia v0.0.0-20180208002826-3358181d7394
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
github.com/benbjohnson/clock v1.3.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bits-and-blooms/bitset v1.22.0 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/blevesearch/bleve_index_api v1.2.8 // indirect
github.com/blevesearch/geo v0.2.3 // indirect
github.com/blevesearch/go-porterstemmer v1.0.3 // indirect
github.com/blevesearch/gtreap v0.1.1 // indirect
github.com/blevesearch/mmap-go v1.0.4 // indirect
github.com/blevesearch/scorch_segment_api/v2 v2.3.10 // indirect
github.com/blevesearch/segment v0.9.1 // indirect
github.com/blevesearch/snowballstem v0.9.0 // indirect
github.com/blevesearch/upsidedown_store_api v1.0.2 // indirect
github.com/blevesearch/vellum v1.1.0 // indirect
github.com/blevesearch/zapx/v11 v11.4.2 // indirect
github.com/blevesearch/zapx/v12 v12.4.2 // indirect
github.com/blevesearch/zapx/v13 v13.4.2 // indirect
github.com/blevesearch/zapx/v14 v14.4.2 // indirect
github.com/blevesearch/zapx/v15 v15.4.2 // indirect
github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc // indirect
github.com/bytedance/sonic v1.13.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.1.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/bytedance/sonic v1.14.0 // indirect
github.com/bytedance/sonic/loader v0.3.0 // indirect
github.com/cloudwego/base64x v0.1.6 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
github.com/go-chi/chi/v5 v5.2.2 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.26.0 // indirect
github.com/go-sql-driver/mysql v1.7.0 // indirect
github.com/go-webauthn/x v0.1.23 // indirect
github.com/go-playground/validator/v10 v10.27.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/golang-jwt/jwt/v5 v5.2.3 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/go-tpm v0.9.5 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-version v1.6.0 // indirect
github.com/henrybear327/Proton-API-Bridge v1.0.0
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/yamux v0.1.2 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/ipfs/go-cid v0.5.0
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a // indirect
github.com/jackc/pgx/v5 v5.5.5 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/jzelinskie/whirlpool v0.0.0-20201016144138-0675e54bb004 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-flow-metrics v0.1.0 // indirect
github.com/libp2p/go-libp2p v0.27.8 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 // indirect
github.com/mailru/easyjson v0.9.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-localereader v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/mattn/go-sqlite3 v1.14.22 // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/mschoch/smat v0.2.0 // indirect
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/muesli/termenv v0.16.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr v0.9.0 // indirect
github.com/multiformats/go-multibase v0.2.0 // indirect
github.com/multiformats/go-multicodec v0.9.0 // indirect
github.com/multiformats/go-multihash v0.2.3 // indirect
github.com/multiformats/go-multistream v0.4.1 // indirect
github.com/multiformats/go-varint v0.0.7 // indirect
github.com/otiai10/copy v1.14.1
github.com/oklog/run v1.2.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/pquerna/cachecontrol v0.1.0 // indirect
github.com/prometheus/client_golang v1.22.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.64.0 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/rfjakob/eme v1.1.2 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46 // indirect
github.com/shabbyrobe/gocovmerge v0.0.0-20230507112040-c3350d9342df // indirect
github.com/skip2/go-qrcode v0.0.0-20200617195104-da1b6568686e
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/tklauser/go-sysconf v0.3.15 // indirect
github.com/tklauser/numcpus v0.10.0 // indirect
github.com/spf13/pflag v1.0.7 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/u2takey/go-utils v0.3.1 // indirect
github.com/ugorji/go/codec v1.3.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.etcd.io/bbolt v1.4.0 // indirect
golang.org/x/arch v0.18.0 // indirect
golang.org/x/sync v0.16.0
golang.org/x/sys v0.34.0
golang.org/x/term v0.33.0 // indirect
golang.org/x/text v0.27.0
golang.org/x/tools v0.35.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/grpc v1.73.0
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
golang.org/x/arch v0.20.0 // indirect
golang.org/x/crypto v0.41.0 // indirect
golang.org/x/sys v0.35.0 // indirect
golang.org/x/text v0.28.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250811230008-5f3141c8851a // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
lukechampine.com/blake3 v1.1.7 // indirect
)
replace github.com/ProtonMail/go-proton-api => github.com/henrybear327/go-proton-api v1.0.0
replace github.com/cronokirby/saferith => github.com/Da3zKi7/saferith v0.33.0-fixed
// replace github.com/OpenListTeam/115-sdk-go => ../../OpenListTeam/115-sdk-go

go.sum

File diff suppressed because it is too large


@@ -6,162 +6,68 @@ import (
"path/filepath"
"strings"
"github.com/OpenListTeam/OpenList/v4/cmd/flags"
"github.com/OpenListTeam/OpenList/v4/drivers/base"
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/net"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/caarlos0/env/v9"
"github.com/shirou/gopsutil/v4/mem"
"github.com/OpenListTeam/OpenList/v5/cmd/flags"
"github.com/OpenListTeam/OpenList/v5/internal/conf"
"github.com/OpenListTeam/OpenList/v5/pkg/utils"
log "github.com/sirupsen/logrus"
)
// Program working directory
func PWD() string {
if flags.ForceBinDir {
ex, err := os.Executable()
if err != nil {
log.Fatal(err)
}
pwd := filepath.Dir(ex)
return pwd
}
d, err := os.Getwd()
if err != nil {
d = "."
}
return d
}
func InitConfig() {
pwd := PWD()
dataDir := flags.DataDir
if !filepath.IsAbs(dataDir) {
flags.DataDir = filepath.Join(pwd, flags.DataDir)
if !filepath.IsAbs(flags.ConfigFile) {
flags.ConfigFile = filepath.Join(flags.PWD(), flags.ConfigFile)
}
configPath := filepath.Join(flags.DataDir, "config.json")
log.Infof("reading config file: %s", configPath)
if !utils.Exists(configPath) {
log.Infof("config file not exists, creating default config file")
_, err := utils.CreateNestedFile(configPath)
log.Infoln("reading config file", "@", flags.ConfigFile)
if !utils.Exists(flags.ConfigFile) {
log.Infoln("config file not exists, creating default config file")
_, err := utils.CreateNestedFile(flags.ConfigFile)
if err != nil {
log.Fatalf("failed to create config file: %+v", err)
log.Fatalln("create config file", ":", err)
}
conf.Conf = conf.DefaultConfig(dataDir)
LastLaunchedVersion = conf.Version
conf.Conf.LastLaunchedVersion = conf.Version
if !utils.WriteJsonToFile(configPath, conf.Conf) {
log.Fatalf("failed to create default config file")
conf.Conf = conf.DefaultConfig()
err = utils.WriteJsonToFile(flags.ConfigFile, conf.Conf)
if err != nil {
log.Fatalln("save default config file", ":", err)
}
} else {
configBytes, err := os.ReadFile(configPath)
configBytes, err := os.ReadFile(flags.ConfigFile)
if err != nil {
log.Fatalf("reading config file error: %+v", err)
log.Fatalln("reading config file", ":", err)
}
conf.Conf = conf.DefaultConfig(dataDir)
conf.Conf = conf.DefaultConfig()
err = utils.Json.Unmarshal(configBytes, conf.Conf)
if err != nil {
log.Fatalf("load config error: %+v", err)
log.Fatalln("unmarshal config", ":", err)
}
LastLaunchedVersion = conf.Conf.LastLaunchedVersion
if strings.HasPrefix(conf.Version, "v") || LastLaunchedVersion == "" {
conf.Conf.LastLaunchedVersion = conf.Version
}
// update config.json struct
confBody, err := utils.Json.MarshalIndent(conf.Conf, "", " ")
err = utils.WriteJsonToFile(flags.ConfigFile, conf.Conf)
if err != nil {
log.Fatalf("marshal config error: %+v", err)
log.Fatalln("update config file", ":", err)
}
err = os.WriteFile(configPath, confBody, 0o777)
if err != nil {
log.Fatalf("update config struct error: %+v", err)
}
}
if !conf.Conf.Force {
confFromEnv()
}
if conf.Conf.MaxConcurrency > 0 {
net.DefaultConcurrencyLimit = &net.ConcurrencyLimit{Limit: conf.Conf.MaxConcurrency}
}
if conf.Conf.MaxBufferLimit < 0 {
m, _ := mem.VirtualMemory()
if m != nil {
conf.MaxBufferLimit = max(int(float64(m.Total)*0.05), 4*utils.MB)
conf.MaxBufferLimit -= conf.MaxBufferLimit % utils.MB
} else {
conf.MaxBufferLimit = 16 * utils.MB
}
} else {
conf.MaxBufferLimit = conf.Conf.MaxBufferLimit * utils.MB
}
log.Infof("max buffer limit: %dMB", conf.MaxBufferLimit/utils.MB)
if conf.Conf.MmapThreshold > 0 {
conf.MmapThreshold = conf.Conf.MmapThreshold * utils.MB
} else {
conf.MmapThreshold = 0
}
log.Infof("mmap threshold: %dMB", conf.Conf.MmapThreshold)
if len(conf.Conf.Log.Filter.Filters) == 0 {
conf.Conf.Log.Filter.Enable = false
}
// convert abs path
configDir := filepath.Dir(flags.ConfigFile)
convertAbsPath := func(path *string) {
if *path != "" && !filepath.IsAbs(*path) {
*path = filepath.Join(pwd, *path)
*path = filepath.Join(configDir, *path)
}
}
convertAbsPath(&conf.Conf.Database.DBFile)
convertAbsPath(&conf.Conf.TempDir)
convertAbsPath(&conf.Conf.Scheme.CertFile)
convertAbsPath(&conf.Conf.Scheme.KeyFile)
convertAbsPath(&conf.Conf.Scheme.UnixFile)
convertAbsPath(&conf.Conf.Log.Name)
convertAbsPath(&conf.Conf.TempDir)
convertAbsPath(&conf.Conf.BleveDir)
convertAbsPath(&conf.Conf.DistDir)
err := os.MkdirAll(conf.Conf.TempDir, 0o777)
if err != nil {
log.Fatalf("create temp dir error: %+v", err)
}
log.Debugf("config: %+v", conf.Conf)
base.InitClient()
initURL()
initSitePath()
}
func confFromEnv() {
prefix := "OPENLIST_"
if flags.NoPrefix {
prefix = ""
}
log.Infof("load config from env with prefix: %s", prefix)
if err := env.ParseWithOptions(conf.Conf, env.Options{
Prefix: prefix,
}); err != nil {
log.Fatalf("load config from env error: %+v", err)
}
}
func initURL() {
func initSitePath() {
if !strings.Contains(conf.Conf.SiteURL, "://") {
conf.Conf.SiteURL = utils.FixAndCleanPath(conf.Conf.SiteURL)
}
u, err := url.Parse(conf.Conf.SiteURL)
if err != nil {
utils.Log.Fatalf("can't parse site_url: %+v", err)
}
conf.URL = u
}
func CleanTempDir() {
files, err := os.ReadDir(conf.Conf.TempDir)
if err != nil {
log.Errorln("failed list temp file: ", err)
}
for _, file := range files {
if err := os.RemoveAll(filepath.Join(conf.Conf.TempDir, file.Name())); err != nil {
log.Errorln("failed delete temp file: ", err)
}
log.Fatalln("parse site_url", ":", err)
}
conf.SitePath = u.Path
}
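`initSitePath` treats a `site_url` without a scheme as a bare path and otherwise keeps only the URL's path component. A rough sketch of that split (the helper `sitePath` is ours, and the bare-path branch only approximates `FixAndCleanPath`):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// sitePath extracts the base path from a configured site_url:
// a value without "://" is treated as a bare path; otherwise only
// the parsed URL's path is kept.
func sitePath(siteURL string) (string, error) {
	if !strings.Contains(siteURL, "://") {
		// approximation of FixAndCleanPath: ensure a single leading slash
		return "/" + strings.Trim(siteURL, "/"), nil
	}
	u, err := url.Parse(siteURL)
	if err != nil {
		return "", err
	}
	return u.Path, nil
}

func main() {
	p, _ := sitePath("https://example.com/openlist")
	fmt.Println(p) // /openlist
	p, _ = sitePath("openlist")
	fmt.Println(p) // /openlist
}
```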


@@ -0,0 +1,13 @@
package bootstrap
import (
"github.com/OpenListTeam/OpenList/v5/internal/driver"
driverS "github.com/OpenListTeam/OpenList/v5/shared/driver"
"github.com/hashicorp/go-plugin"
)
func InitDriverPlugins() {
driver.PluginMap = map[string]plugin.Plugin{
"grpc": &driverS.Plugin{},
}
}


@@ -1,101 +0,0 @@
package cache
import (
"sync"
"time"
)
type KeyedCache[T any] struct {
entries map[string]*CacheEntry[T]
mu sync.RWMutex
ttl time.Duration
}
func NewKeyedCache[T any](ttl time.Duration) *KeyedCache[T] {
c := &KeyedCache[T]{
entries: make(map[string]*CacheEntry[T]),
ttl: ttl,
}
gcFuncs = append(gcFuncs, c.GC)
return c
}
func (c *KeyedCache[T]) Set(key string, value T) {
c.SetWithExpirable(key, value, ExpirationTime(time.Now().Add(c.ttl)))
}
func (c *KeyedCache[T]) SetWithTTL(key string, value T, ttl time.Duration) {
c.SetWithExpirable(key, value, ExpirationTime(time.Now().Add(ttl)))
}
func (c *KeyedCache[T]) SetWithExpirable(key string, value T, exp Expirable) {
c.mu.Lock()
defer c.mu.Unlock()
c.entries[key] = &CacheEntry[T]{
data: value,
Expirable: exp,
}
}
func (c *KeyedCache[T]) Get(key string) (T, bool) {
c.mu.RLock()
entry, exists := c.entries[key]
if !exists {
c.mu.RUnlock()
return *new(T), false
}
expired := entry.Expired()
c.mu.RUnlock()
if !expired {
return entry.data, true
}
c.mu.Lock()
if c.entries[key] == entry {
delete(c.entries, key)
c.mu.Unlock()
return *new(T), false
}
c.mu.Unlock()
return *new(T), false
}
func (c *KeyedCache[T]) Delete(key string) {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.entries, key)
}
func (c *KeyedCache[T]) Take(key string) (T, bool) {
c.mu.Lock()
defer c.mu.Unlock()
if entry, exists := c.entries[key]; exists {
delete(c.entries, key)
return entry.data, true
}
return *new(T), false
}
func (c *KeyedCache[T]) Clear() {
c.mu.Lock()
defer c.mu.Unlock()
c.entries = make(map[string]*CacheEntry[T])
}
func (c *KeyedCache[T]) GC() {
c.mu.Lock()
defer c.mu.Unlock()
expiredKeys := make([]string, 0, len(c.entries))
for key, entry := range c.entries {
if entry.Expired() {
expiredKeys = append(expiredKeys, key)
}
}
for _, key := range expiredKeys {
delete(c.entries, key)
}
}


@@ -1,18 +0,0 @@
package cache
import "time"
type Expirable interface {
Expired() bool
}
type ExpirationTime time.Time
func (e ExpirationTime) Expired() bool {
return time.Now().After(time.Time(e))
}
type CacheEntry[T any] struct {
Expirable
data T
}


@@ -1,122 +0,0 @@
package cache
import (
"sync"
"time"
)
type TypedCache[T any] struct {
entries map[string]map[string]*CacheEntry[T]
mu sync.RWMutex
ttl time.Duration
}
func NewTypedCache[T any](ttl time.Duration) *TypedCache[T] {
c := &TypedCache[T]{
entries: make(map[string]map[string]*CacheEntry[T]),
ttl: ttl,
}
gcFuncs = append(gcFuncs, c.GC)
return c
}
func (c *TypedCache[T]) SetType(key, typeKey string, value T) {
c.SetTypeWithExpirable(key, typeKey, value, ExpirationTime(time.Now().Add(c.ttl)))
}
func (c *TypedCache[T]) SetTypeWithTTL(key, typeKey string, value T, ttl time.Duration) {
c.SetTypeWithExpirable(key, typeKey, value, ExpirationTime(time.Now().Add(ttl)))
}
func (c *TypedCache[T]) SetTypeWithExpirable(key, typeKey string, value T, exp Expirable) {
c.mu.Lock()
defer c.mu.Unlock()
cache, exists := c.entries[key]
if !exists {
cache = make(map[string]*CacheEntry[T])
c.entries[key] = cache
}
cache[typeKey] = &CacheEntry[T]{
data: value,
Expirable: exp,
}
}
// Prefer typeKeys for the lookup; if none of them match, fall back to fallbackTypeKey
func (c *TypedCache[T]) GetType(key, fallbackTypeKey string, typeKeys ...string) (T, bool) {
c.mu.RLock()
cache, exists := c.entries[key]
if !exists {
c.mu.RUnlock()
return *new(T), false
}
var entry *CacheEntry[T]
exists := false
for _, tk := range typeKeys {
if entry, exists = cache[tk]; exists {
fallbackTypeKey = tk
break
}
}
if !exists {
entry, exists = cache[fallbackTypeKey]
}
if !exists {
c.mu.RUnlock()
return *new(T), false
}
expired := entry.Expired()
c.mu.RUnlock()
if !expired {
return entry.data, true
}
c.mu.Lock()
if cache[fallbackTypeKey] == entry {
delete(cache, fallbackTypeKey)
if len(cache) == 0 {
delete(c.entries, key)
}
}
c.mu.Unlock()
return *new(T), false
}
func (c *TypedCache[T]) DeleteKey(key string) {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.entries, key)
}
func (c *TypedCache[T]) Clear() {
c.mu.Lock()
defer c.mu.Unlock()
c.entries = make(map[string]map[string]*CacheEntry[T])
}
func (c *TypedCache[T]) GC() {
c.mu.Lock()
defer c.mu.Unlock()
expiredKeys := make(map[string][]string)
for tk, entries := range c.entries {
for key, entry := range entries {
if !entry.Expired() {
continue
}
if _, ok := expiredKeys[tk]; !ok {
expiredKeys[tk] = make([]string, 0, len(entries))
}
expiredKeys[tk] = append(expiredKeys[tk], key)
}
}
for tk, keys := range expiredKeys {
for _, key := range keys {
delete(c.entries[tk], key)
}
if len(c.entries[tk]) == 0 {
delete(c.entries, tk)
}
}
}

View File

@@ -1,24 +0,0 @@
package cache
import (
"time"
"github.com/OpenListTeam/OpenList/v4/pkg/cron"
log "github.com/sirupsen/logrus"
)
var (
cacheGcCron *cron.Cron
gcFuncs []func()
)
func init() {
// TODO Move to bootstrap
cacheGcCron = cron.NewCron(time.Hour)
cacheGcCron.Do(func() {
log.Infof("Start cache GC")
for _, f := range gcFuncs {
f()
}
})
}

View File

@@ -1,248 +1,40 @@
package conf
import (
"path/filepath"
"github.com/OpenListTeam/OpenList/v4/pkg/utils/random"
)
type Database struct {
Type string `json:"type" env:"TYPE"`
Host string `json:"host" env:"HOST"`
Port int `json:"port" env:"PORT"`
User string `json:"user" env:"USER"`
Password string `json:"password" env:"PASS"`
Name string `json:"name" env:"NAME"`
DBFile string `json:"db_file" env:"FILE"`
TablePrefix string `json:"table_prefix" env:"TABLE_PREFIX"`
SSLMode string `json:"ssl_mode" env:"SSL_MODE"`
DSN string `json:"dsn" env:"DSN"`
}
type Meilisearch struct {
Host string `json:"host" env:"HOST"`
APIKey string `json:"api_key" env:"API_KEY"`
Index string `json:"index" env:"INDEX"`
}
type Scheme struct {
Address string `json:"address" env:"ADDR"`
HttpPort int `json:"http_port" env:"HTTP_PORT"`
HttpsPort int `json:"https_port" env:"HTTPS_PORT"`
HttpPort uint16 `json:"http_port" env:"HTTP_PORT"`
HttpsPort uint16 `json:"https_port" env:"HTTPS_PORT"`
ForceHttps bool `json:"force_https" env:"FORCE_HTTPS"`
CertFile string `json:"cert_file" env:"CERT_FILE"`
KeyFile string `json:"key_file" env:"KEY_FILE"`
UnixFile string `json:"unix_file" env:"UNIX_FILE"`
UnixFilePerm string `json:"unix_file_perm" env:"UNIX_FILE_PERM"`
EnableH2c bool `json:"enable_h2c" env:"ENABLE_H2C"`
EnableH3 bool `json:"enable_h3" env:"ENABLE_H3"`
}
type LogConfig struct {
Enable bool `json:"enable" env:"ENABLE"`
Name string `json:"name" env:"NAME"`
MaxSize int `json:"max_size" env:"MAX_SIZE"`
MaxBackups int `json:"max_backups" env:"MAX_BACKUPS"`
MaxAge int `json:"max_age" env:"MAX_AGE"`
Compress bool `json:"compress" env:"COMPRESS"`
Filter LogFilterConfig `json:"filter" envPrefix:"FILTER_"`
}
type LogFilterConfig struct {
Enable bool `json:"enable" env:"ENABLE"`
Filters []Filter `json:"filters"`
}
type Filter struct {
CIDR string `json:"cidr"`
Path string `json:"path"`
Method string `json:"method"`
}
type TaskConfig struct {
Workers int `json:"workers" env:"WORKERS"`
MaxRetry int `json:"max_retry" env:"MAX_RETRY"`
TaskPersistant bool `json:"task_persistant" env:"TASK_PERSISTANT"`
}
type TasksConfig struct {
Download TaskConfig `json:"download" envPrefix:"DOWNLOAD_"`
Transfer TaskConfig `json:"transfer" envPrefix:"TRANSFER_"`
Upload TaskConfig `json:"upload" envPrefix:"UPLOAD_"`
Copy TaskConfig `json:"copy" envPrefix:"COPY_"`
Move TaskConfig `json:"move" envPrefix:"MOVE_"`
Decompress TaskConfig `json:"decompress" envPrefix:"DECOMPRESS_"`
DecompressUpload TaskConfig `json:"decompress_upload" envPrefix:"DECOMPRESS_UPLOAD_"`
AllowRetryCanceled bool `json:"allow_retry_canceled" env:"ALLOW_RETRY_CANCELED"`
}
type Cors struct {
AllowOrigins []string `json:"allow_origins" env:"ALLOW_ORIGINS"`
AllowMethods []string `json:"allow_methods" env:"ALLOW_METHODS"`
AllowHeaders []string `json:"allow_headers" env:"ALLOW_HEADERS"`
}
type S3 struct {
Enable bool `json:"enable" env:"ENABLE"`
Port int `json:"port" env:"PORT"`
SSL bool `json:"ssl" env:"SSL"`
}
type FTP struct {
Enable bool `json:"enable" env:"ENABLE"`
Listen string `json:"listen" env:"LISTEN"`
FindPasvPortAttempts int `json:"find_pasv_port_attempts" env:"FIND_PASV_PORT_ATTEMPTS"`
ActiveTransferPortNon20 bool `json:"active_transfer_port_non_20" env:"ACTIVE_TRANSFER_PORT_NON_20"`
IdleTimeout int `json:"idle_timeout" env:"IDLE_TIMEOUT"`
ConnectionTimeout int `json:"connection_timeout" env:"CONNECTION_TIMEOUT"`
DisableActiveMode bool `json:"disable_active_mode" env:"DISABLE_ACTIVE_MODE"`
DefaultTransferBinary bool `json:"default_transfer_binary" env:"DEFAULT_TRANSFER_BINARY"`
EnableActiveConnIPCheck bool `json:"enable_active_conn_ip_check" env:"ENABLE_ACTIVE_CONN_IP_CHECK"`
EnablePasvConnIPCheck bool `json:"enable_pasv_conn_ip_check" env:"ENABLE_PASV_CONN_IP_CHECK"`
}
type SFTP struct {
Enable bool `json:"enable" env:"ENABLE"`
Listen string `json:"listen" env:"LISTEN"`
}
type Config struct {
Force bool `json:"force" env:"FORCE"`
SiteURL string `json:"site_url" env:"SITE_URL"`
Cdn string `json:"cdn" env:"CDN"`
JwtSecret string `json:"jwt_secret" env:"JWT_SECRET"`
TokenExpiresIn int `json:"token_expires_in" env:"TOKEN_EXPIRES_IN"`
Database Database `json:"database" envPrefix:"DB_"`
Meilisearch Meilisearch `json:"meilisearch" envPrefix:"MEILISEARCH_"`
Scheme Scheme `json:"scheme"`
TempDir string `json:"temp_dir" env:"TEMP_DIR"`
BleveDir string `json:"bleve_dir" env:"BLEVE_DIR"`
DistDir string `json:"dist_dir"`
Log LogConfig `json:"log" envPrefix:"LOG_"`
DelayedStart int `json:"delayed_start" env:"DELAYED_START"`
MaxBufferLimit int `json:"max_buffer_limitMB" env:"MAX_BUFFER_LIMIT_MB"`
MmapThreshold int `json:"mmap_thresholdMB" env:"MMAP_THRESHOLD_MB"`
MaxConnections int `json:"max_connections" env:"MAX_CONNECTIONS"`
MaxConcurrency int `json:"max_concurrency" env:"MAX_CONCURRENCY"`
TlsInsecureSkipVerify bool `json:"tls_insecure_skip_verify" env:"TLS_INSECURE_SKIP_VERIFY"`
Tasks TasksConfig `json:"tasks" envPrefix:"TASKS_"`
SiteURL string `json:"site_url" env:"SITE_URL"`
Scheme Scheme `json:"scheme"`
Cors Cors `json:"cors" envPrefix:"CORS_"`
S3 S3 `json:"s3" envPrefix:"S3_"`
FTP FTP `json:"ftp" envPrefix:"FTP_"`
SFTP SFTP `json:"sftp" envPrefix:"SFTP_"`
LastLaunchedVersion string `json:"last_launched_version"`
}
func DefaultConfig(dataDir string) *Config {
tempDir := filepath.Join(dataDir, "temp")
indexDir := filepath.Join(dataDir, "bleve")
logPath := filepath.Join(dataDir, "log/log.log")
dbPath := filepath.Join(dataDir, "data.db")
func DefaultConfig() *Config {
return &Config{
TempDir: "temp",
Scheme: Scheme{
Address: "0.0.0.0",
UnixFile: "",
HttpPort: 5244,
HttpsPort: -1,
ForceHttps: false,
CertFile: "",
KeyFile: "",
},
JwtSecret: random.String(16),
TokenExpiresIn: 48,
TempDir: tempDir,
Database: Database{
Type: "sqlite3",
Port: 0,
TablePrefix: "x_",
DBFile: dbPath,
},
Meilisearch: Meilisearch{
Host: "http://localhost:7700",
Index: "openlist",
},
BleveDir: indexDir,
Log: LogConfig{
Enable: true,
Name: logPath,
MaxSize: 50,
MaxBackups: 30,
MaxAge: 28,
Filter: LogFilterConfig{
Enable: false,
Filters: []Filter{
{Path: "/ping"},
{Method: "HEAD"},
{Path: "/dav/", Method: "PROPFIND"},
},
},
},
MaxBufferLimit: -1,
MmapThreshold: 4,
MaxConnections: 0,
MaxConcurrency: 64,
TlsInsecureSkipVerify: true,
Tasks: TasksConfig{
Download: TaskConfig{
Workers: 5,
MaxRetry: 1,
// TaskPersistant: true,
},
Transfer: TaskConfig{
Workers: 5,
MaxRetry: 2,
// TaskPersistant: true,
},
Upload: TaskConfig{
Workers: 5,
},
Copy: TaskConfig{
Workers: 5,
MaxRetry: 2,
// TaskPersistant: true,
},
Move: TaskConfig{
Workers: 5,
MaxRetry: 2,
// TaskPersistant: true,
},
Decompress: TaskConfig{
Workers: 5,
MaxRetry: 2,
// TaskPersistant: true,
},
DecompressUpload: TaskConfig{
Workers: 5,
MaxRetry: 2,
},
AllowRetryCanceled: false,
},
Cors: Cors{
AllowOrigins: []string{"*"},
AllowMethods: []string{"*"},
AllowHeaders: []string{"*"},
},
S3: S3{
Enable: false,
Port: 5246,
SSL: false,
},
FTP: FTP{
Enable: false,
Listen: ":5221",
FindPasvPortAttempts: 50,
ActiveTransferPortNon20: false,
IdleTimeout: 900,
ConnectionTimeout: 30,
DisableActiveMode: false,
DefaultTransferBinary: false,
EnableActiveConnIPCheck: true,
EnablePasvConnIPCheck: true,
},
SFTP: SFTP{
Enable: false,
Listen: ":5222",
},
LastLaunchedVersion: "",
}
}

View File

@@ -1,72 +1,10 @@
package conf
import (
"net/url"
"regexp"
"sync"
)
var (
BuiltAt string = "unknown"
GitAuthor string = "unknown"
GitCommit string = "unknown"
Version string = "dev"
WebVersion string = "rolling"
)
import "regexp"
var (
Conf *Config
URL *url.URL
SitePath string
)
var SlicesMap = make(map[string][]string)
var FilenameCharMap = make(map[string]string)
var PrivacyReg []*regexp.Regexp
var (
// Maximum size of a single buffer
MaxBufferLimit = 16 * 1024 * 1024
// Buffers larger than this threshold are allocated via mmap, so their memory can be released back to the OS proactively
MmapThreshold = 4 * 1024 * 1024
)
var (
RawIndexHtml string
ManageHtml string
IndexHtml string
)
var (
// StoragesLoaded loaded success if empty
StoragesLoaded = false
storagesLoadMu sync.RWMutex
storagesLoadSignal chan struct{} = make(chan struct{})
)
func StoragesLoadSignal() <-chan struct{} {
storagesLoadMu.RLock()
ch := storagesLoadSignal
storagesLoadMu.RUnlock()
return ch
}
func SendStoragesLoadedSignal() {
storagesLoadMu.Lock()
select {
case <-storagesLoadSignal:
// already closed
default:
StoragesLoaded = true
close(storagesLoadSignal)
}
storagesLoadMu.Unlock()
}
func ResetStoragesLoadSignal() {
storagesLoadMu.Lock()
select {
case <-storagesLoadSignal:
StoragesLoaded = false
storagesLoadSignal = make(chan struct{})
default:
// not closed -> nothing to do
}
storagesLoadMu.Unlock()
}

View File

@@ -1,62 +0,0 @@
package db
import (
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/utils/random"
"github.com/pkg/errors"
)
func GetSharingById(id string) (*model.SharingDB, error) {
s := model.SharingDB{ID: id}
if err := db.Where(s).First(&s).Error; err != nil {
return nil, errors.Wrapf(err, "failed get sharing")
}
return &s, nil
}
func GetSharings(pageIndex, pageSize int) (sharings []model.SharingDB, count int64, err error) {
sharingDB := db.Model(&model.SharingDB{})
if err := sharingDB.Count(&count).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get sharings count")
}
if err := sharingDB.Order(columnName("id")).Offset((pageIndex - 1) * pageSize).Limit(pageSize).Find(&sharings).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed to find sharings")
}
return sharings, count, nil
}
func GetSharingsByCreatorId(creator uint, pageIndex, pageSize int) (sharings []model.SharingDB, count int64, err error) {
sharingDB := db.Model(&model.SharingDB{})
cond := model.SharingDB{CreatorId: creator}
if err := sharingDB.Where(cond).Count(&count).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed get sharings count")
}
if err := sharingDB.Where(cond).Order(columnName("id")).Offset((pageIndex - 1) * pageSize).Limit(pageSize).Find(&sharings).Error; err != nil {
return nil, 0, errors.Wrapf(err, "failed to find sharings")
}
return sharings, count, nil
}
func CreateSharing(s *model.SharingDB) (string, error) {
id := random.String(8)
for len(id) < 12 {
old := model.SharingDB{
ID: id,
}
if err := db.Where(old).First(&old).Error; err != nil {
s.ID = id
return id, errors.WithStack(db.Create(s).Error)
}
id += random.String(1)
}
return "", errors.New("failed to find a valid id")
}
func UpdateSharing(s *model.SharingDB) error {
return errors.WithStack(db.Save(s).Error)
}
func DeleteSharingById(id string) error {
s := model.SharingDB{ID: id}
return errors.WithStack(db.Where(s).Delete(&s).Error)
}

internal/driver/var.go Normal file (+9)
View File

@@ -0,0 +1,9 @@
package driver
import (
"github.com/hashicorp/go-plugin"
)
var (
// PluginMap maps driver plugin names to their go-plugin implementations.
// Initialized here so registrations elsewhere can write into it without a nil-map panic.
PluginMap = map[string]plugin.Plugin{}
)

View File

@@ -1,11 +0,0 @@
package errs
func UnwrapOrSelf(err error) error {
u, ok := err.(interface {
Unwrap() error
})
if !ok {
return err
}
return u.Unwrap()
}

View File

@@ -1,47 +0,0 @@
package model
import "time"
type SharingDB struct {
ID string `json:"id" gorm:"type:char(12);primaryKey"`
FilesRaw string `json:"-" gorm:"type:text"`
Expires *time.Time `json:"expires"`
Pwd string `json:"pwd"`
Accessed int `json:"accessed"`
MaxAccessed int `json:"max_accessed"`
CreatorId uint `json:"-"`
Disabled bool `json:"disabled"`
Remark string `json:"remark"`
Readme string `json:"readme" gorm:"type:text"`
Header string `json:"header" gorm:"type:text"`
Sort
}
type Sharing struct {
*SharingDB
Files []string `json:"files"`
Creator *User `json:"-"`
}
func (s *Sharing) Valid() bool {
if s.Disabled {
return false
}
if s.MaxAccessed > 0 && s.Accessed >= s.MaxAccessed {
return false
}
if len(s.Files) == 0 {
return false
}
if !s.Creator.CanShare() {
return false
}
if s.Expires != nil && !s.Expires.IsZero() && s.Expires.Before(time.Now()) {
return false
}
return true
}
func (s *Sharing) Verify(pwd string) bool {
return s.Pwd == "" || s.Pwd == pwd
}

View File

@@ -1,119 +0,0 @@
package _123_open
import (
"context"
"fmt"
"strconv"
_123_open "github.com/OpenListTeam/OpenList/v4/drivers/123_open"
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/offline_download/tool"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/internal/setting"
)
type Open123 struct{}
func (*Open123) Name() string {
return "123 Open"
}
func (*Open123) Items() []model.SettingItem {
return nil
}
func (*Open123) Run(_ *tool.DownloadTask) error {
return errs.NotSupport
}
func (*Open123) Init() (string, error) {
return "ok", nil
}
func (*Open123) IsReady() bool {
tempDir := setting.GetStr(conf.Pan123OpenTempDir)
if tempDir == "" {
return false
}
storage, _, err := op.GetStorageAndActualPath(tempDir)
if err != nil {
return false
}
if _, ok := storage.(*_123_open.Open123); !ok {
return false
}
return true
}
func (*Open123) AddURL(args *tool.AddUrlArgs) (string, error) {
storage, actualPath, err := op.GetStorageAndActualPath(args.TempDir)
if err != nil {
return "", err
}
driver123Open, ok := storage.(*_123_open.Open123)
if !ok {
return "", fmt.Errorf("unsupported storage driver for offline download, only 123 Open is supported")
}
ctx := context.Background()
if err := op.MakeDir(ctx, storage, actualPath); err != nil {
return "", err
}
parentDir, err := op.GetUnwrap(ctx, storage, actualPath)
if err != nil {
return "", err
}
cb := setting.GetStr(conf.Pan123OpenOfflineDownloadCallbackUrl)
taskID, err := driver123Open.OfflineDownload(ctx, args.Url, parentDir, cb)
if err != nil {
return "", fmt.Errorf("failed to add offline download task: %w", err)
}
return strconv.Itoa(taskID), nil
}
func (*Open123) Remove(_ *tool.DownloadTask) error {
return errs.NotSupport
}
func (*Open123) Status(task *tool.DownloadTask) (*tool.Status, error) {
taskID, err := strconv.Atoi(task.GID)
if err != nil {
return nil, fmt.Errorf("failed to parse task ID: %s", task.GID)
}
storage, _, err := op.GetStorageAndActualPath(task.TempDir)
if err != nil {
return nil, err
}
driver123Open, ok := storage.(*_123_open.Open123)
if !ok {
return nil, fmt.Errorf("unsupported storage driver for offline download, only 123 Open is supported")
}
process, status, err := driver123Open.OfflineDownloadProcess(context.Background(), taskID)
if err != nil {
return nil, err
}
var statusStr string
switch status {
case 0:
statusStr = "downloading"
case 1:
statusStr = "failed"
err = fmt.Errorf("offline download failed")
case 2:
statusStr = "succeed"
case 3:
statusStr = "retrying"
}
return &tool.Status{
Progress: process,
Completed: status == 2,
Status: statusStr,
Err: err,
}, nil
}
var _ tool.Tool = (*Open123)(nil)
func init() {
tool.Tools.Add(&Open123{})
}

View File

@@ -1,257 +0,0 @@
package op
import (
stdpath "path"
"sync"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/cache"
"github.com/OpenListTeam/OpenList/v4/internal/driver"
"github.com/OpenListTeam/OpenList/v4/internal/model"
)
type CacheManager struct {
dirCache *cache.KeyedCache[*directoryCache] // Cache for directory listings
linkCache *cache.TypedCache[*objWithLink] // Cache for file links
userCache *cache.KeyedCache[*model.User] // Cache for user data
settingCache *cache.KeyedCache[any] // Cache for settings
detailCache *cache.KeyedCache[*model.StorageDetails] // Cache for storage details
}
func NewCacheManager() *CacheManager {
return &CacheManager{
dirCache: cache.NewKeyedCache[*directoryCache](time.Minute * 5),
linkCache: cache.NewTypedCache[*objWithLink](time.Minute * 30),
userCache: cache.NewKeyedCache[*model.User](time.Hour),
settingCache: cache.NewKeyedCache[any](time.Hour),
detailCache: cache.NewKeyedCache[*model.StorageDetails](time.Minute * 30),
}
}
// global instance
var Cache = NewCacheManager()
func Key(storage driver.Driver, path string) string {
return stdpath.Join(storage.GetStorage().MountPath, path)
}
// update object in dirCache.
// if it's a directory, remove all its children from dirCache too.
// if it's a file, remove its link from linkCache.
func (cm *CacheManager) updateDirectoryObject(storage driver.Driver, dirPath string, oldObj model.Obj, newObj model.Obj) {
key := Key(storage, dirPath)
if !oldObj.IsDir() {
cm.linkCache.DeleteKey(stdpath.Join(key, oldObj.GetName()))
cm.linkCache.DeleteKey(stdpath.Join(key, newObj.GetName()))
}
if storage.Config().NoCache {
return
}
if cache, exist := cm.dirCache.Get(key); exist {
if oldObj.IsDir() {
cm.deleteDirectoryTree(stdpath.Join(key, oldObj.GetName()))
}
cache.UpdateObject(oldObj.GetName(), newObj)
}
}
// add new object to dirCache
func (cm *CacheManager) addDirectoryObject(storage driver.Driver, dirPath string, newObj model.Obj) {
if storage.Config().NoCache {
return
}
cache, exist := cm.dirCache.Get(Key(storage, dirPath))
if exist {
cache.UpdateObject(newObj.GetName(), newObj)
}
}
// recursively delete directory and its children from dirCache
func (cm *CacheManager) DeleteDirectoryTree(storage driver.Driver, dirPath string) {
if storage.Config().NoCache {
return
}
cm.deleteDirectoryTree(Key(storage, dirPath))
}
func (cm *CacheManager) deleteDirectoryTree(key string) {
if dirCache, exists := cm.dirCache.Take(key); exists {
for _, obj := range dirCache.objs {
if obj.IsDir() {
cm.deleteDirectoryTree(stdpath.Join(key, obj.GetName()))
}
}
}
}
// remove directory from dirCache
func (cm *CacheManager) DeleteDirectory(storage driver.Driver, dirPath string) {
if storage.Config().NoCache {
return
}
cm.dirCache.Delete(Key(storage, dirPath))
}
// remove object from dirCache.
// if it's a directory, remove all its children from dirCache too.
// if it's a file, remove its link from linkCache.
func (cm *CacheManager) removeDirectoryObject(storage driver.Driver, dirPath string, obj model.Obj) {
key := Key(storage, dirPath)
if !obj.IsDir() {
cm.linkCache.DeleteKey(stdpath.Join(key, obj.GetName()))
}
if storage.Config().NoCache {
return
}
if cache, exist := cm.dirCache.Get(key); exist {
if obj.IsDir() {
cm.deleteDirectoryTree(stdpath.Join(key, obj.GetName()))
}
cache.RemoveObject(obj.GetName())
}
}
// cache user data
func (cm *CacheManager) SetUser(username string, user *model.User) {
cm.userCache.Set(username, user)
}
// cached user data
func (cm *CacheManager) GetUser(username string) (*model.User, bool) {
return cm.userCache.Get(username)
}
// remove user data from cache
func (cm *CacheManager) DeleteUser(username string) {
cm.userCache.Delete(username)
}
// caches setting
func (cm *CacheManager) SetSetting(key string, setting *model.SettingItem) {
cm.settingCache.Set(key, setting)
}
// cached setting
func (cm *CacheManager) GetSetting(key string) (*model.SettingItem, bool) {
if data, exists := cm.settingCache.Get(key); exists {
if setting, ok := data.(*model.SettingItem); ok {
return setting, true
}
}
return nil, false
}
// cache setting groups
func (cm *CacheManager) SetSettingGroup(key string, settings []model.SettingItem) {
cm.settingCache.Set(key, settings)
}
// cached setting group
func (cm *CacheManager) GetSettingGroup(key string) ([]model.SettingItem, bool) {
if data, exists := cm.settingCache.Get(key); exists {
if settings, ok := data.([]model.SettingItem); ok {
return settings, true
}
}
return nil, false
}
func (cm *CacheManager) SetStorageDetails(storage driver.Driver, details *model.StorageDetails) {
if storage.Config().NoCache {
return
}
expiration := time.Minute * time.Duration(storage.GetStorage().CacheExpiration)
cm.detailCache.SetWithTTL(storage.GetStorage().MountPath, details, expiration)
}
func (cm *CacheManager) GetStorageDetails(storage driver.Driver) (*model.StorageDetails, bool) {
return cm.detailCache.Get(storage.GetStorage().MountPath)
}
func (cm *CacheManager) InvalidateStorageDetails(storage driver.Driver) {
cm.detailCache.Delete(storage.GetStorage().MountPath)
}
// clears all caches
func (cm *CacheManager) ClearAll() {
cm.dirCache.Clear()
cm.linkCache.Clear()
cm.userCache.Clear()
cm.settingCache.Clear()
cm.detailCache.Clear()
}
type directoryCache struct {
objs []model.Obj
sorted []model.Obj
mu sync.RWMutex
dirtyFlags uint8
}
const (
dirtyRemove uint8 = 1 << iota // object removed: refresh the sorted copy, no full sort/extract needed
dirtyUpdate // object updated: a full sort + extract is required
)
func newDirectoryCache(objs []model.Obj) *directoryCache {
sorted := make([]model.Obj, len(objs))
copy(sorted, objs)
return &directoryCache{
objs: objs,
sorted: sorted,
}
}
func (dc *directoryCache) RemoveObject(name string) {
dc.mu.Lock()
defer dc.mu.Unlock()
for i, obj := range dc.objs {
if obj.GetName() == name {
dc.objs = append(dc.objs[:i], dc.objs[i+1:]...)
dc.dirtyFlags |= dirtyRemove
break
}
}
}
func (dc *directoryCache) UpdateObject(oldName string, newObj model.Obj) {
dc.mu.Lock()
defer dc.mu.Unlock()
if oldName != "" {
for i, obj := range dc.objs {
if obj.GetName() == oldName {
dc.objs[i] = newObj
dc.dirtyFlags |= dirtyUpdate
return
}
}
}
dc.objs = append(dc.objs, newObj)
dc.dirtyFlags |= dirtyUpdate
}
func (dc *directoryCache) GetSortedObjects(meta driver.Meta) []model.Obj {
dc.mu.RLock()
if dc.dirtyFlags == 0 {
dc.mu.RUnlock()
return dc.sorted
}
dc.mu.RUnlock()
dc.mu.Lock()
defer dc.mu.Unlock()
sorted := make([]model.Obj, len(dc.objs))
copy(sorted, dc.objs)
dc.sorted = sorted
if dc.dirtyFlags&dirtyUpdate != 0 {
storage := meta.GetStorage()
if meta.Config().LocalSort {
model.SortFiles(sorted, storage.OrderBy, storage.OrderDirection)
}
model.ExtractFolder(sorted, storage.ExtractFolder)
}
dc.dirtyFlags = 0
return sorted
}

View File

@@ -1,139 +0,0 @@
package op
import (
"fmt"
stdpath "path"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/db"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/pkg/singleflight"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/go-cache"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
func makeJoined(sdb []model.SharingDB) []model.Sharing {
creator := make(map[uint]*model.User)
return utils.MustSliceConvert(sdb, func(s model.SharingDB) model.Sharing {
var c *model.User
var ok bool
if c, ok = creator[s.CreatorId]; !ok {
var err error
if c, err = GetUserById(s.CreatorId); err != nil {
c = nil
} else {
creator[s.CreatorId] = c
}
}
var files []string
if err := utils.Json.UnmarshalFromString(s.FilesRaw, &files); err != nil {
files = make([]string, 0)
}
return model.Sharing{
SharingDB: &s,
Files: files,
Creator: c,
}
})
}
var sharingCache = cache.NewMemCache(cache.WithShards[*model.Sharing](8))
var sharingG singleflight.Group[*model.Sharing]
func GetSharingById(id string, refresh ...bool) (*model.Sharing, error) {
if !utils.IsBool(refresh...) {
if sharing, ok := sharingCache.Get(id); ok {
log.Debugf("use cache when get sharing %s", id)
return sharing, nil
}
}
sharing, err, _ := sharingG.Do(id, func() (*model.Sharing, error) {
s, err := db.GetSharingById(id)
if err != nil {
return nil, errors.WithMessagef(err, "failed get sharing [%s]", id)
}
creator, err := GetUserById(s.CreatorId)
if err != nil {
return nil, errors.WithMessagef(err, "failed get sharing creator [%s]", id)
}
var files []string
if err = utils.Json.UnmarshalFromString(s.FilesRaw, &files); err != nil {
files = make([]string, 0)
}
return &model.Sharing{
SharingDB: s,
Files: files,
Creator: creator,
}, nil
})
return sharing, err
}
func GetSharings(pageIndex, pageSize int) ([]model.Sharing, int64, error) {
s, cnt, err := db.GetSharings(pageIndex, pageSize)
if err != nil {
return nil, 0, errors.WithStack(err)
}
return makeJoined(s), cnt, nil
}
func GetSharingsByCreatorId(userId uint, pageIndex, pageSize int) ([]model.Sharing, int64, error) {
s, cnt, err := db.GetSharingsByCreatorId(userId, pageIndex, pageSize)
if err != nil {
return nil, 0, errors.WithStack(err)
}
return makeJoined(s), cnt, nil
}
func GetSharingUnwrapPath(sharing *model.Sharing, path string) (unwrapPath string, err error) {
if len(sharing.Files) == 0 {
return "", errors.New("cannot get actual path of an invalid sharing")
}
if len(sharing.Files) == 1 {
return stdpath.Join(sharing.Files[0], path), nil
}
path = utils.FixAndCleanPath(path)[1:]
if len(path) == 0 {
return "", errors.New("cannot get actual path of a sharing root path")
}
mapPath := ""
child, rest, _ := strings.Cut(path, "/")
for _, c := range sharing.Files {
if child == stdpath.Base(c) {
mapPath = c
break
}
}
if mapPath == "" {
return "", fmt.Errorf("failed to find child [%s] of sharing [%s]", child, sharing.ID)
}
return stdpath.Join(mapPath, rest), nil
}
func CreateSharing(sharing *model.Sharing) (id string, err error) {
sharing.CreatorId = sharing.Creator.ID
sharing.FilesRaw, err = utils.Json.MarshalToString(utils.MustSliceConvert(sharing.Files, utils.FixAndCleanPath))
if err != nil {
return "", errors.WithStack(err)
}
return db.CreateSharing(sharing.SharingDB)
}
func UpdateSharing(sharing *model.Sharing, skipMarshal ...bool) (err error) {
if !utils.IsBool(skipMarshal...) {
sharing.CreatorId = sharing.Creator.ID
sharing.FilesRaw, err = utils.Json.MarshalToString(utils.MustSliceConvert(sharing.Files, utils.FixAndCleanPath))
if err != nil {
return errors.WithStack(err)
}
}
sharingCache.Del(sharing.ID)
return db.UpdateSharing(sharing.SharingDB)
}
func DeleteSharing(sid string) error {
sharingCache.Del(sid)
return db.DeleteSharingById(sid)
}

View File

@@ -1,65 +0,0 @@
package sharing
import (
"context"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/pkg/errors"
)
func archiveMeta(ctx context.Context, sid, path string, args model.SharingArchiveMetaArgs) (*model.Sharing, *model.ArchiveMetaProvider, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing file")
}
obj, err := op.GetArchiveMeta(ctx, storage, actualPath, args.ArchiveMetaArgs)
return sharing, obj, err
}
return nil, nil, errors.New("cannot get sharing root archive meta")
}
func archiveList(ctx context.Context, sid, path string, args model.SharingArchiveListArgs) (*model.Sharing, []model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing file")
}
obj, err := op.ListArchive(ctx, storage, actualPath, args.ArchiveListArgs)
return sharing, obj, err
}
return nil, nil, errors.New("cannot get sharing root archive list")
}

View File

@@ -1,60 +0,0 @@
package sharing
import (
"context"
stdpath "path"
"time"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/pkg/errors"
)
func get(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
if unwrapPath != "/" {
virtualFiles := op.GetStorageVirtualFilesByPath(stdpath.Dir(unwrapPath))
for _, f := range virtualFiles {
if f.GetName() == stdpath.Base(unwrapPath) {
return sharing, f, nil
}
}
} else {
return sharing, &model.Object{
Name: sid,
Size: 0,
Modified: time.Time{},
IsFolder: true,
}, nil
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing file")
}
obj, err := op.Get(ctx, storage, actualPath)
return sharing, obj, err
}
return sharing, &model.Object{
Name: sid,
Size: 0,
Modified: time.Time{},
IsFolder: true,
}, nil
}

View File

@@ -1,46 +0,0 @@
package sharing
import (
"context"
"strings"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/OpenListTeam/OpenList/v4/server/common"
"github.com/pkg/errors"
)
func link(ctx context.Context, sid, path string, args *LinkArgs) (*model.Sharing, *model.Link, model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.SharingListArgs.Refresh)
if err != nil {
return nil, nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil {
return nil, nil, nil, errors.WithMessage(err, "failed get sharing link")
}
l, obj, err := op.Link(ctx, storage, actualPath, args.LinkArgs)
if err != nil {
return nil, nil, nil, errors.WithMessage(err, "failed get sharing link")
}
if l.URL != "" && !strings.HasPrefix(l.URL, "http://") && !strings.HasPrefix(l.URL, "https://") {
l.URL = common.GetApiUrl(ctx) + l.URL
}
return sharing, l, obj, nil
}
return nil, nil, nil, errors.New("cannot get sharing root link")
}


@@ -1,83 +0,0 @@
package sharing
import (
"context"
stdpath "path"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/op"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/pkg/errors"
)
func list(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, []model.Obj, error) {
sharing, err := op.GetSharingById(sid, args.Refresh)
if err != nil {
return nil, nil, errors.WithStack(errs.SharingNotFound)
}
if !sharing.Valid() {
return sharing, nil, errors.WithStack(errs.InvalidSharing)
}
if !sharing.Verify(args.Pwd) {
return sharing, nil, errors.WithStack(errs.WrongShareCode)
}
path = utils.FixAndCleanPath(path)
if len(sharing.Files) == 1 || path != "/" {
unwrapPath, err := op.GetSharingUnwrapPath(sharing, path)
if err != nil {
return nil, nil, errors.WithMessage(err, "failed get sharing unwrap path")
}
virtualFiles := op.GetStorageVirtualFilesByPath(unwrapPath)
storage, actualPath, err := op.GetStorageAndActualPath(unwrapPath)
if err != nil && len(virtualFiles) == 0 {
return nil, nil, errors.WithMessage(err, "failed list sharing")
}
var objs []model.Obj
if storage != nil {
objs, err = op.List(ctx, storage, actualPath, model.ListArgs{
Refresh: args.Refresh,
ReqPath: stdpath.Join(sid, path),
})
if err != nil && len(virtualFiles) == 0 {
return nil, nil, errors.WithMessage(err, "failed list sharing")
}
}
om := model.NewObjMerge()
objs = om.Merge(objs, virtualFiles...)
model.SortFiles(objs, sharing.OrderBy, sharing.OrderDirection)
model.ExtractFolder(objs, sharing.ExtractFolder)
return sharing, objs, nil
}
objs := make([]model.Obj, 0, len(sharing.Files))
for _, f := range sharing.Files {
if f != "/" {
isVf := false
virtualFiles := op.GetStorageVirtualFilesByPath(stdpath.Dir(f))
for _, vf := range virtualFiles {
if vf.GetName() == stdpath.Base(f) {
objs = append(objs, vf)
isVf = true
break
}
}
if isVf {
continue
}
} else {
continue
}
storage, actualPath, err := op.GetStorageAndActualPath(f)
if err != nil {
continue
}
obj, err := op.Get(ctx, storage, actualPath)
if err != nil {
continue
}
objs = append(objs, obj)
}
model.SortFiles(objs, sharing.OrderBy, sharing.OrderDirection)
model.ExtractFolder(objs, sharing.ExtractFolder)
return sharing, objs, nil
}


@@ -1,58 +0,0 @@
package sharing
import (
"context"
"github.com/OpenListTeam/OpenList/v4/internal/model"
log "github.com/sirupsen/logrus"
)
func List(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, []model.Obj, error) {
sharing, res, err := list(ctx, sid, path, args)
if err != nil {
log.Errorf("failed list sharing %s/%s: %+v", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
func Get(ctx context.Context, sid, path string, args model.SharingListArgs) (*model.Sharing, model.Obj, error) {
sharing, res, err := get(ctx, sid, path, args)
if err != nil {
log.Warnf("failed get sharing %s/%s: %s", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
func ArchiveMeta(ctx context.Context, sid, path string, args model.SharingArchiveMetaArgs) (*model.Sharing, *model.ArchiveMetaProvider, error) {
sharing, res, err := archiveMeta(ctx, sid, path, args)
if err != nil {
log.Warnf("failed get sharing archive meta %s/%s: %s", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
func ArchiveList(ctx context.Context, sid, path string, args model.SharingArchiveListArgs) (*model.Sharing, []model.Obj, error) {
sharing, res, err := archiveList(ctx, sid, path, args)
if err != nil {
log.Warnf("failed list sharing archive %s/%s: %s", sid, path, err)
return nil, nil, err
}
return sharing, res, nil
}
type LinkArgs struct {
model.SharingListArgs
model.LinkArgs
}
func Link(ctx context.Context, sid, path string, args *LinkArgs) (*model.Sharing, *model.Link, model.Obj, error) {
sharing, res, file, err := link(ctx, sid, path, args)
if err != nil {
log.Errorf("failed get sharing link %s/%s: %+v", sid, path, err)
return nil, nil, nil, err
}
return sharing, res, file, nil
}


@@ -1,337 +0,0 @@
package stream
import (
"bytes"
"context"
"encoding/hex"
"errors"
"fmt"
"io"
"net/http"
"os"
"github.com/OpenListTeam/OpenList/v4/internal/conf"
"github.com/OpenListTeam/OpenList/v4/internal/errs"
"github.com/OpenListTeam/OpenList/v4/internal/model"
"github.com/OpenListTeam/OpenList/v4/internal/net"
"github.com/OpenListTeam/OpenList/v4/pkg/http_range"
"github.com/OpenListTeam/OpenList/v4/pkg/pool"
"github.com/OpenListTeam/OpenList/v4/pkg/utils"
"github.com/rclone/rclone/lib/mmap"
log "github.com/sirupsen/logrus"
)
type RangeReaderFunc func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error)
func (f RangeReaderFunc) RangeRead(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
return f(ctx, httpRange)
}
func GetRangeReaderFromLink(size int64, link *model.Link) (model.RangeReaderIF, error) {
if link.Concurrency > 0 || link.PartSize > 0 {
down := net.NewDownloader(func(d *net.Downloader) {
d.Concurrency = link.Concurrency
d.PartSize = link.PartSize
})
var rangeReader RangeReaderFunc = func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
var req *net.HttpRequestParams
if link.RangeReader != nil {
req = &net.HttpRequestParams{
Range: httpRange,
Size: size,
}
} else {
requestHeader, _ := ctx.Value(conf.RequestHeaderKey).(http.Header)
header := net.ProcessHeader(requestHeader, link.Header)
req = &net.HttpRequestParams{
Range: httpRange,
Size: size,
URL: link.URL,
HeaderRef: header,
}
}
return down.Download(ctx, req)
}
if link.RangeReader != nil {
down.HttpClient = net.GetRangeReaderHttpRequestFunc(link.RangeReader)
return rangeReader, nil
}
return RateLimitRangeReaderFunc(rangeReader), nil
}
if link.RangeReader != nil {
return link.RangeReader, nil
}
if len(link.URL) == 0 {
return nil, errors.New("invalid link: must have at least one of URL or RangeReader")
}
rangeReader := func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
if httpRange.Length < 0 || httpRange.Start+httpRange.Length > size {
httpRange.Length = size - httpRange.Start
}
requestHeader, _ := ctx.Value(conf.RequestHeaderKey).(http.Header)
header := net.ProcessHeader(requestHeader, link.Header)
header = http_range.ApplyRangeToHttpHeader(httpRange, header)
response, err := net.RequestHttp(ctx, "GET", header, link.URL)
if err != nil {
if _, ok := errs.UnwrapOrSelf(err).(net.HttpStatusCodeError); ok {
return nil, err
}
return nil, fmt.Errorf("http request failure, err:%w", err)
}
if httpRange.Start == 0 && (httpRange.Length == -1 || httpRange.Length == size) || response.StatusCode == http.StatusPartialContent ||
checkContentRange(&response.Header, httpRange.Start) {
return response.Body, nil
} else if response.StatusCode == http.StatusOK {
log.Warnf("remote http server does not support range requests, expect low performance!")
readCloser, err := net.GetRangedHttpReader(response.Body, httpRange.Start, httpRange.Length)
if err != nil {
return nil, err
}
return readCloser, nil
}
return response.Body, nil
}
return RateLimitRangeReaderFunc(rangeReader), nil
}
// The io.ReadCloser returned by RangeReaderIF.RangeRead keeps file's signature.
func GetRangeReaderFromMFile(size int64, file model.File) model.RangeReaderIF {
return &model.FileRangeReader{
RangeReaderIF: RangeReaderFunc(func(ctx context.Context, httpRange http_range.Range) (io.ReadCloser, error) {
length := httpRange.Length
if length < 0 || httpRange.Start+length > size {
length = size - httpRange.Start
}
return &model.FileCloser{File: io.NewSectionReader(file, httpRange.Start, length)}, nil
}),
}
}
// 139 cloud does not properly return 206 http status code, add a hack here
func checkContentRange(header *http.Header, offset int64) bool {
start, _, err := http_range.ParseContentRange(header.Get("Content-Range"))
if err != nil {
log.Warnf("failed to parse Content-Range, ignoring, err=%s", err)
}
if start == offset {
return true
}
return false
}
type ReaderWithCtx struct {
io.Reader
Ctx context.Context
}
func (r *ReaderWithCtx) Read(p []byte) (n int, err error) {
if utils.IsCanceled(r.Ctx) {
return 0, r.Ctx.Err()
}
return r.Reader.Read(p)
}
func (r *ReaderWithCtx) Close() error {
if c, ok := r.Reader.(io.Closer); ok {
return c.Close()
}
return nil
}
func CacheFullAndHash(stream model.FileStreamer, up *model.UpdateProgress, hashType *utils.HashType, hashParams ...any) (model.File, string, error) {
h := hashType.NewFunc(hashParams...)
tmpF, err := stream.CacheFullAndWriter(up, h)
if err != nil {
return nil, "", err
}
return tmpF, hex.EncodeToString(h.Sum(nil)), nil
}
type StreamSectionReaderIF interface {
// Not thread-safe
GetSectionReader(off, length int64) (io.ReadSeeker, error)
FreeSectionReader(sr io.ReadSeeker)
// Not thread-safe
DiscardSection(off int64, length int64) error
}
func NewStreamSectionReader(file model.FileStreamer, maxBufferSize int, up *model.UpdateProgress) (StreamSectionReaderIF, error) {
if file.GetFile() != nil {
return &cachedSectionReader{file.GetFile()}, nil
}
maxBufferSize = min(maxBufferSize, int(file.GetSize()))
if maxBufferSize > conf.MaxBufferLimit {
f, err := os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
if f.Truncate((file.GetSize()+int64(maxBufferSize-1))/int64(maxBufferSize)*int64(maxBufferSize)) != nil {
// fallback to full cache
_, _ = f.Close(), os.Remove(f.Name())
cache, err := file.CacheFullAndWriter(up, nil)
if err != nil {
return nil, err
}
return &cachedSectionReader{cache}, nil
}
ss := &fileSectionReader{Reader: file, temp: f}
ss.bufPool = &pool.Pool[*offsetWriterWithBase]{
New: func() *offsetWriterWithBase {
base := ss.fileOff
ss.fileOff += int64(maxBufferSize)
return &offsetWriterWithBase{io.NewOffsetWriter(ss.temp, base), base}
},
}
file.Add(utils.CloseFunc(func() error {
ss.bufPool.Reset()
return errors.Join(ss.temp.Close(), os.Remove(ss.temp.Name()))
}))
return ss, nil
}
ss := &directSectionReader{file: file}
if conf.MmapThreshold > 0 && maxBufferSize >= conf.MmapThreshold {
ss.bufPool = &pool.Pool[[]byte]{
New: func() []byte {
buf, err := mmap.Alloc(maxBufferSize)
if err == nil {
ss.file.Add(utils.CloseFunc(func() error {
return mmap.Free(buf)
}))
} else {
buf = make([]byte, maxBufferSize)
}
return buf
},
}
} else {
ss.bufPool = &pool.Pool[[]byte]{
New: func() []byte {
return make([]byte, maxBufferSize)
},
}
}
file.Add(utils.CloseFunc(func() error {
ss.bufPool.Reset()
return nil
}))
return ss, nil
}
type cachedSectionReader struct {
cache io.ReaderAt
}
func (*cachedSectionReader) DiscardSection(off int64, length int64) error {
return nil
}
func (s *cachedSectionReader) GetSectionReader(off, length int64) (io.ReadSeeker, error) {
return io.NewSectionReader(s.cache, off, length), nil
}
func (*cachedSectionReader) FreeSectionReader(sr io.ReadSeeker) {}
type fileSectionReader struct {
io.Reader
off int64
temp *os.File
fileOff int64
bufPool *pool.Pool[*offsetWriterWithBase]
}
type offsetWriterWithBase struct {
*io.OffsetWriter
base int64
}
// Not thread-safe
func (ss *fileSectionReader) DiscardSection(off int64, length int64) error {
if off != ss.off {
return fmt.Errorf("stream not cached: request offset %d != current offset %d", off, ss.off)
}
_, err := utils.CopyWithBufferN(io.Discard, ss.Reader, length)
if err != nil {
return fmt.Errorf("failed to skip data: (expect =%d) %w", length, err)
}
ss.off += length
return nil
}
type fileBufferSectionReader struct {
io.ReadSeeker
fileBuf *offsetWriterWithBase
}
func (ss *fileSectionReader) GetSectionReader(off, length int64) (io.ReadSeeker, error) {
if off != ss.off {
return nil, fmt.Errorf("stream not cached: request offset %d != current offset %d", off, ss.off)
}
fileBuf := ss.bufPool.Get()
_, _ = fileBuf.Seek(0, io.SeekStart)
n, err := utils.CopyWithBufferN(fileBuf, ss.Reader, length)
if err != nil {
return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", length, n, err)
}
ss.off += length
return &fileBufferSectionReader{io.NewSectionReader(ss.temp, fileBuf.base, length), fileBuf}, nil
}
func (ss *fileSectionReader) FreeSectionReader(rs io.ReadSeeker) {
if sr, ok := rs.(*fileBufferSectionReader); ok {
ss.bufPool.Put(sr.fileBuf)
sr.fileBuf = nil
sr.ReadSeeker = nil
}
}
type directSectionReader struct {
file model.FileStreamer
off int64
bufPool *pool.Pool[[]byte]
}
// Not thread-safe
func (ss *directSectionReader) DiscardSection(off int64, length int64) error {
if off != ss.off {
return fmt.Errorf("stream not cached: request offset %d != current offset %d", off, ss.off)
}
_, err := utils.CopyWithBufferN(io.Discard, ss.file, length)
if err != nil {
return fmt.Errorf("failed to skip data: (expect =%d) %w", length, err)
}
ss.off += length
return nil
}
type bufferSectionReader struct {
io.ReadSeeker
buf []byte
}
// Not thread-safe
func (ss *directSectionReader) GetSectionReader(off, length int64) (io.ReadSeeker, error) {
if off != ss.off {
return nil, fmt.Errorf("stream not cached: request offset %d != current offset %d", off, ss.off)
}
tempBuf := ss.bufPool.Get()
buf := tempBuf[:length]
n, err := io.ReadFull(ss.file, buf)
if int64(n) != length {
return nil, fmt.Errorf("failed to read all data: (expect =%d, actual =%d) %w", length, n, err)
}
ss.off += int64(n)
return &bufferSectionReader{bytes.NewReader(buf), buf}, nil
}
func (ss *directSectionReader) FreeSectionReader(rs io.ReadSeeker) {
if sr, ok := rs.(*bufferSectionReader); ok {
ss.bufPool.Put(sr.buf[0:cap(sr.buf)])
sr.buf = nil
sr.ReadSeeker = nil
}
}

layers/file/driver.go Normal file

@@ -0,0 +1,27 @@
package file
import "context"
// HostFileServer driver-side file interface #################################################
type HostFileServer interface {
// CopyFile copies files =================================================================
CopyFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// MoveFile moves files ==================================================================
MoveFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// NameFile renames files ================================================================
NameFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// ListFile lists files ==================================================================
ListFile(ctx context.Context, path []string, opt *ListFileOption) ([]*HostFileObject, error)
// FindFile searches for files ===========================================================
FindFile(ctx context.Context, path []string, opt *FindFileOption) ([]*HostFileObject, error)
// Download fetches download links =======================================================
Download(ctx context.Context, path []string, opt *DownloadOption) ([]*LinkFileObject, error)
// Uploader uploads files ================================================================
Uploader(ctx context.Context, path []string, opt *UploaderOption) ([]*BackFileAction, error)
// KillFile deletes files ================================================================
KillFile(ctx context.Context, path []string, opt *KillFileOption) ([]*BackFileAction, error)
// MakeFile creates files ================================================================
MakeFile(ctx context.Context, path []string, opt *MakeFileOption) ([]*BackFileAction, error)
// MakePath creates directories ==========================================================
MakePath(ctx context.Context, path []string, opt *MakeFileOption) ([]*BackFileAction, error)
}

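Every mutating method on HostFileServer returns one *BackFileAction per input, so a caller can report partial failure instead of aborting the whole batch. A minimal sketch of that contract, using local stand-in types (the real BackFileAction lives in layers/file with unexported fields):

```go
package main

import "fmt"

// Local stand-in for the layers/file BackFileAction (whose fields are unexported).
type BackFileAction struct {
	Success bool
	Message string
}

// copyFiles sketches the batch contract: one result per source, pairing
// sources[i] with targets[i] and flagging a length mismatch per entry
// rather than failing the whole call.
func copyFiles(sources, targets []string) []*BackFileAction {
	res := make([]*BackFileAction, 0, len(sources))
	for i := range sources {
		if i >= len(targets) {
			res = append(res, &BackFileAction{Message: "missing target"})
			continue
		}
		// ... perform the actual copy of sources[i] -> targets[i] here
		res = append(res, &BackFileAction{Success: true})
	}
	return res
}

func main() {
	for i, r := range copyFiles([]string{"/a", "/b"}, []string{"/x"}) {
		fmt.Println(i, r.Success, r.Message)
	}
}
```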
layers/file/manage.go Normal file

@@ -0,0 +1,71 @@
package file
import (
"context"
)
// UserFileServer user-facing file service interface #########################################
type UserFileServer interface {
// CopyFile copies files =================================================================
CopyFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// MoveFile moves files ==================================================================
MoveFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// NameFile renames files ================================================================
NameFile(ctx context.Context, sources []string, targets []string) ([]*BackFileAction, error)
// ListFile lists files ==================================================================
ListFile(ctx context.Context, path []string, opt *ListFileOption) ([]*UserFileObject, error)
// FindFile searches for files ===========================================================
FindFile(ctx context.Context, path []string, opt *FindFileOption) ([]*UserFileObject, error)
// Download fetches download links =======================================================
Download(ctx context.Context, path []string, opt *DownloadOption) ([]*LinkFileObject, error)
// Uploader uploads files ================================================================
Uploader(ctx context.Context, path []string, opt *UploaderOption) ([]*BackFileAction, error)
// KillFile deletes files ================================================================
KillFile(ctx context.Context, path []string, opt *KillFileOption) ([]*BackFileAction, error)
// MakeFile creates files ================================================================
MakeFile(ctx context.Context, path []string, opt *MakeFileOption) ([]*BackFileAction, error)
// MakePath creates directories ==========================================================
MakePath(ctx context.Context, path []string, opt *MakeFileOption) ([]*BackFileAction, error)
// PermFile sets permissions =============================================================
PermFile(ctx context.Context, path []string, opt *PermissionFile) ([]*BackFileAction, error)
//// NewShare creates a share ============================================================
//NewShare(ctx context.Context, path []string, opt *NewShareAction) ([]*BackFileAction, error)
//// GetShare gets a share ===============================================================
//GetShare(ctx context.Context, path []string, opt *NewShareAction) ([]*UserFileObject, error)
//// DelShare deletes a share ============================================================
//DelShare(ctx context.Context, path []string, opt *NewShareAction) ([]*BackFileAction, error)
}
type UserFileUpload interface {
fullPost(ctx context.Context, path []string)
pfCreate(ctx context.Context, path []string)
pfUpload(ctx context.Context, path []string)
pfUpdate(ctx context.Context, path []string)
}
func ListFile(ctx context.Context, path []string, opt *ListFileOption) ([]*UserFileObject, error) {
// TODO: fetch objects from the driver layer; stub returns an empty list
return ListDeal([]*HostFileObject{})
}
func FindFile(ctx context.Context, path []string, opt *FindFileOption) ([]*UserFileObject, error) {
// TODO: fetch objects from the driver layer; stub returns an empty list
return ListDeal([]*HostFileObject{})
}
func ListDeal(originList []*HostFileObject) ([]*UserFileObject, error) {
serverList := make([]*UserFileObject, 0)
for _, fileItem := range originList {
serverList = append(serverList, &UserFileObject{
HostFileObject: *fileItem,
// ... user-layer logic
})
}
return serverList, nil
}
func Download(ctx context.Context, path []string, opt *DownloadOption) ([]*LinkFileObject, error) {
// TODO: not implemented yet; stub so the package compiles
panic("file.Download: not implemented")
}
func Uploader(ctx context.Context, path []string, opt *UploaderOption) ([]*BackFileAction, error) {
// TODO: not implemented yet; stub so the package compiles
panic("file.Uploader: not implemented")
}

layers/file/object.go Normal file

@@ -0,0 +1,79 @@
package file
import "time"
// HostFileObject file info as returned by the driver layer
type HostFileObject struct {
realName []string // real names
previews []string // file previews
fileSize int64 // file size
lastTime time.Time // modified time
makeTime time.Time // created time
fileType bool // file type
fileHash string // file hash
hashType int16 // hash type
}
// UserFileObject file info after user-layer conversion
type UserFileObject struct {
HostFileObject
showPath []string // display path
showName []string // display name
realPath []string // real path
checksum int32 // password checksum
fileMask int16 // file permissions
encrypts int16 // file state
// The following fields are used for frontend display
enc_type string // encryption type
enc_from string // file password source
enc_pass string // encryption password
com_type string // compression type
sub_nums int16 // number of child files
// The notes below are for internal backend processing
// fileMask =================
// bits:    000000 0 000 000 000
// meaning: ABCDEF 1 421 421 421
// A-encrypted B-frontend decrypt C-self-decrypting
// D-is split volume E-is compressed F-is hidden
// encrypts =================
// bits:    0000000000 00 0000
// meaning: volume count | compression | encryption
}
type PermissionFile struct {
}
type LinkFileObject struct {
download []string // download links
usrAgent []string // user agents
}
type ListFileOption struct {
}
type FindFileOption struct {
}
type KillFileOption struct {
}
type MakeFileOption struct {
}
type DownloadOption struct {
downType int8 // download type
}
type UploaderOption struct {
}
type BackFileAction struct {
success bool // whether it succeeded
message string // error message
}
type NewShareAction struct {
BackFileAction
shareID string // share ID
pubUrls string // public link
passkey string // share password
expired time.Time // expiry time
}

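The fileMask and encrypts bit layouts sketched in the struct comments above can be decoded with plain masks and shifts. A hypothetical decoder under those assumptions — the constant and function names are illustrative, not the project's API, and uint16 is used instead of the struct's int16 to keep the bit arithmetic straightforward:

```go
package main

import "fmt"

// Assumed fileMask layout per the comments: bits 15..10 = flags A..F,
// bit 9 = the standalone "1", bits 8..0 = rwx/rwx/rwx (the 421 421 421 groups).
const (
	MaskEncrypted   uint16 = 1 << 15 // A - encrypted
	MaskFrontendDec uint16 = 1 << 14 // B - frontend decrypt
	MaskSelfDec     uint16 = 1 << 13 // C - self-decrypting
	MaskSplit       uint16 = 1 << 12 // D - is split volume
	MaskCompressed  uint16 = 1 << 11 // E - is compressed
	MaskHidden      uint16 = 1 << 10 // F - is hidden
)

// permBits extracts the low nine permission bits.
func permBits(mask uint16) int { return int(mask) & 0o777 }

// Assumed encrypts layout: 10 bits volume count, 2 bits compression, 4 bits encryption.
func volumes(enc uint16) int  { return int(enc >> 6) }
func compType(enc uint16) int { return int(enc>>4) & 0b11 }
func encAlgo(enc uint16) int  { return int(enc) & 0b1111 }

func main() {
	m := MaskEncrypted | MaskCompressed | 0o644
	fmt.Println(m&MaskEncrypted != 0, m&MaskHidden != 0) // true false
	fmt.Printf("%o\n", permBits(m))                      // 644
	fmt.Println(volumes(0b0000000011_01_0010))           // 3
}
```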
layers/perm/fsmask.go Normal file

@@ -0,0 +1,16 @@
package perm
type FileMask struct {
uuid string // key UUID
user string // owning user
path string // match path
name string // friendly name
idKeyset string // keyset ID
encrypts string // encryption group ID
password string // standalone password
fileUser string // file owner user
filePart int64 // split volume size
fileMask int16 // file permissions
compress int16 // whether compressed
isEnable bool // whether enabled
}

layers/perm/keyset.go Normal file

@@ -0,0 +1,22 @@
package perm
type UserKeys struct {
uuid string // key UUID
user string // owning user
main string // core key (SHA-2 of the user key)
name string // friendly name
algo int8 // key algorithm
enabled bool // whether enabled
encFile bool // encrypt file contents
encName bool // encrypt file names
keyAuto bool // auto-rotate key
keyRand bool // random key
keyAuth UserAuth // key authentication
}
type UserAuth struct {
uuid string // key UUID
user string // owning user
plugin string // auth plugin
config string // auth configuration
}

layers/perm/shared.go Normal file

@@ -0,0 +1,10 @@
package perm
type ShareUrl struct {
uuid string // key UUID
user string // owning user
path string // shared path
pass string // share password
date string // expiry time
flag bool // whether valid
}

layers/user/object.go Normal file

@@ -0,0 +1,14 @@
package user
type UserInfo struct {
uuid string // user UUID
name string // user name
flag bool // whether valid
perm PermInfo // permission info
}
type PermInfo struct {
isAdmin bool // whether admin
davRead bool // whether reading is allowed
// ...
}


@@ -1,6 +1,6 @@
package main
-import "github.com/OpenListTeam/OpenList/v4/cmd"
+import "github.com/OpenListTeam/OpenList/v5/cmd"
func main() {
cmd.Execute()


@@ -87,7 +87,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
-go-version: "1.25.0"
+go-version: "1.24.5"
- name: Setup web
run: bash build.sh dev web


@@ -33,7 +33,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
-go-version: "1.25.0"
+go-version: "1.24.5"
- name: Setup web
run: bash build.sh dev web


@@ -46,7 +46,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
-go-version: '1.25.0'
+go-version: '1.24'
- name: Checkout
uses: actions/checkout@v4
@@ -73,5 +73,4 @@ jobs:
with:
files: build/compress/*
prerelease: false
-tag_name: ${{ github.event.release.tag_name }}


@@ -47,7 +47,7 @@ jobs:
- uses: actions/setup-go@v5
with:
-go-version: '1.25.0'
+go-version: 'stable'
- name: Cache Musl
id: cache-musl
@@ -87,7 +87,7 @@ jobs:
- uses: actions/setup-go@v5
with:
-go-version: '1.25.0'
+go-version: 'stable'
- name: Cache Musl
id: cache-musl

Some files were not shown because too many files have changed in this diff.