Compare commits

...

52 Commits

Author SHA1 Message Date
千石
3cddb6b7ed fix(driver): Handle Lanzou anti-crawler challenge by recalculating cookies (#9364)
- Detect and solve `acw_sc__v2` challenge to bypass anti-crawler validation
- Refactor request-header initialization logic for clarity
2025-11-11 20:27:20 +08:00
千石
ce41587095 feat(cloud189): Added sanitization for file and folder names (#9366)
- Introduced `sanitizeName` function to remove four-byte characters (e.g., emojis) from names before upload or creation.
- Added `StripEmoji` option in driver configurations for cloud189 and cloud189pc.
- Updated file and folder operations (upload, rename, and creation) to use sanitized names.
- Ensured compatibility with both cloud189 and cloud189pc implementations.
2025-11-11 20:26:51 +08:00
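The four-byte filter this commit describes can be sketched in isolation. This standalone `sanitizeName` is an assumption about the helper's shape (the real one lives in the cloud189 driver); it simply drops any rune above U+FFFF, which is exactly the set of code points that need four bytes in UTF-8:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeName drops runes that encode to four bytes in UTF-8
// (code points above U+FFFF, which covers most emoji) before a
// name is sent to the cloud189 API. Illustrative sketch only; the
// driver's actual helper may differ in edge-case handling.
func sanitizeName(name string) string {
	var b strings.Builder
	for _, r := range name {
		if r > 0xFFFF { // four-byte rune in UTF-8: skip it
			continue
		}
		b.WriteRune(r)
	}
	return b.String()
}

func main() {
	fmt.Println(sanitizeName("report😀.txt")) // emoji stripped: report.txt
}
```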
千石
0cbc7ebc92 feat(driver): Added support for Gitee driver (#9368)
* feat(driver): Added support for Gitee driver

- Implemented core driver functions including initialization, file listing, and file linking
- Added Gitee-specific API interaction and object mapping
- Registered Gitee driver in the driver registry

* feat(driver): Added cookie-based authentication support for Gitee driver

- Extended request handling to include `Cookie` header if provided
- Updated metadata to include `cookie` field with appropriate documentation
- Adjusted file link generation to propagate `Cookie` headers in requests
2025-11-11 20:25:26 +08:00
千石
b4d9beb49c fix(Mediatrack): Add support for X-Device-Fingerprint header (#9354)
Introduce a `DeviceFingerprint` field to the request metadata.
This field is used to conditionally set the `X-Device-Fingerprint`
HTTP header in outgoing requests if its value is not empty.
2025-10-24 00:31:15 +08:00
千石
4c8401855c feat: Add new driver bitqiu support (#9355)
* feat(bitqiu): Add Bitqiu cloud drive support

- Implement the new Bitqiu cloud drive.
- Add core driver logic, metadata handling, and utility functions.
- Register the Bitqiu driver for use.

* feat(driver): Implement GetLink, CreateDir, and Move operations

- Implement `GetLink` method to retrieve download links for files.
- Implement `CreateDir` method to create new directories.
- Implement `Move` method to relocate files and directories.
- Add new API endpoints and data structures for download and directory creation responses.
- Integrate retry logic with re-authentication for API calls in implemented methods.
- Update HTTP request headers to include `x-requested-with`.

* feat(bitqiu): Add rename, copy, and delete operations

- Implement `Rename` operation with retry logic and API calls.
- Implement `Copy` operation, including asynchronous handling, polling for completion, and status checks.
- Implement `Remove` operation with retry logic and API calls.
- Add new API endpoint URLs for rename, copy, and delete, and a new copy success code.
- Introduce `AsyncManagerData`, `AsyncTask`, and `AsyncTaskInfo` types to support async copy status monitoring.
- Add utility functions `updateObjectName` and `parentPathOf` for object manipulation.
- Integrate login retry mechanism for all file operations.

* feat(bitqiu-upload): Implement chunked file upload support

- Implement multi-part chunked upload logic for the BitQiu service.
- Introduce `UploadInitData` and `ChunkUploadResponse` structs for structured API communication.
- Refactor the `Save` method to orchestrate initial upload, chunked data transfer, and finalization.
- Add `uploadFileInChunks` function to handle sequential uploading of file parts.
- Add `completeChunkUpload` function to finalize the chunked upload process on the server.
- Ensure proper temporary file cleanup using `defer tmpFile.Close()`.

* feat(driver): Implement automatic root folder ID retrieval

- Add `userInfoURL` constant for fetching user information.
- Implement `ensureRootFolderID` function to retrieve and set the driver's root folder ID if not already present.
- Integrate `ensureRootFolderID` into the driver's `Init` process.
- Define `UserInfoData` struct to parse the `rootDirId` from user information responses.

* feat(client): Implement configurable user agent

- Introduce a configurable `UserAgent` field in the client's settings.
- Add a `userAgent()` method that returns the custom setting when present, falling back to a predefined default.
- Apply the resolved user agent to all outbound HTTP requests made by the `BitQiu` client.
2025-10-24 00:29:33 +08:00
千石
e2016dd031 refactor(webdav): Use ResolvePath instead of JoinPath (#9344)
- Changed how `reqPath` is joined with `src` and `dst`, replacing `JoinPath` with `ResolvePath`
- Updated the implementation of path handling in multiple functions
- Improved the consistency and reliability of path resolution
2025-10-16 17:23:11 +08:00
千石
a6bd90a9b2 feat(driver/s3): Add OSS Archive Support (#9350)
* feat(s3): Add support for S3 object storage classes

Introduces a new 'storage_class' configuration option for S3 providers. Users can now specify the desired storage class (e.g., Standard, GLACIER, DEEP_ARCHIVE) for objects uploaded to S3-compatible services like AWS S3 and Tencent COS.

The input storage class string is normalized to match AWS SDK constants, supporting various common aliases. If an unknown storage class is provided, it will be used as a raw value with a warning. This enhancement provides greater control over storage costs and data access patterns.

* feat(storage): Support for displaying file storage classes

Adds storage class information to file metadata and API responses.

This change introduces the ability to store file storage classes in file metadata and display them in API responses. This allows users to view a file's storage tier (e.g., S3 Standard, Glacier), enhancing data management capabilities.

Implementation details include:
- Introducing the StorageClassProvider interface and the ObjWrapStorageClass structure to uniformly handle and communicate object storage class information.
- Updated file metadata structures (e.g., ArchiveObj, FileInfo, RespFile) to include a StorageClass field.
- Modified relevant API response functions (e.g., GetFileInfo, GetFileList) to populate and return storage classes.
- Integrated functionality for retrieving object storage classes from underlying storage systems (e.g., S3) and wrapping them in lists.

* feat(driver/s3): Added the "Other" interface and implemented it by the S3 driver.

A new `driver.Other` interface has been added and defined in the `other.go` file.
The S3 driver has been updated to implement this new interface, extending its functionality.

* feat(s3): Add S3 object archive and thaw task management

This commit introduces comprehensive support for S3 object archive and thaw operations, managed asynchronously through a new task system.

- **S3 Transition Task System**:
  - Adds a new `S3Transition` task configuration, including workers, max retries, and persistence options.
  - Initializes `S3TransitionTaskManager` to handle asynchronous S3 archive/thaw requests.
  - Registers dedicated API routes for monitoring S3 transition tasks.

- **Integrate S3 Archive/Thaw with Other API**:
  - Modifies the `Other` API handler to intercept `archive` and `thaw` methods for S3 storage drivers.
  - Dispatches these operations as `S3TransitionTask` instances to the task manager for background processing.
  - Returns a task ID to the client for tracking the status of the dispatched operation.

- **Refactor `other` package for improved API consistency**:
  - Exports previously internal structs such as `archiveRequest`, `thawRequest`, `objectDescriptor`, `archiveResponse`, `thawResponse`, and `restoreStatus` by making their names public.
  - Makes helper functions like `decodeOtherArgs`, `normalizeStorageClass`, and `normalizeRestoreTier` public.
  - Introduces new constants for various S3 `Other` API methods.
2025-10-16 17:22:54 +08:00
千石
35d322443b feat(driver): Add URL signing support (#9347)
Introduces the ability to sign generated URLs for enhanced security and access control.

This feature is activated by configuring a `PrivateKey`, `UID`, and `ValidDuration` in the driver settings. If a private key is provided, the driver will sign the output URLs, making them time-limited based on the `ValidDuration`. The `ValidDuration` defaults to 30 minutes if not specified.

The core signing logic is encapsulated in the new `sign.go` file. The `driver.go` file integrates this signing process before returning the final URL.
2025-10-11 19:14:13 +08:00
D@' 3z K!7
81a7f28ba2 feat(drivers): add ProtonDrive driver (#9331)
- Implement complete ProtonDrive storage driver with end-to-end encryption support
- Add authentication via username/password with credential caching and reusable login
- Support all core operations: List, Link, Put, Copy, Move, Remove, Rename, MakeDir
- Include encrypted file operations with PGP key management and node passphrase handling
- Add temporary HTTP server for secure file downloads with range request support
- Support media streaming using temp server range requests
- Implement progress tracking for uploads and downloads
- Support directory operations with circular move detection
- Add proper error handling and panic recovery for external library integration

Closes #9312
2025-09-30 14:18:58 +08:00
textrix
fe564c42da feat: add pCloud driver support (#9339)
- Implement OAuth2 authentication with US/EU region support
- Add file operations (list, upload, download, delete, rename, move, copy)
- Add folder operations (create, rename, move, delete)
- Enhance error handling with pCloud-specific retry logic
- Use correct API methods: GET for reads, POST for writes
- Implement direct upload approach for better performance
- Add exponential backoff for failed requests with 4xxx/5xxx classification
2025-09-30 14:17:54 +08:00
Chesyre
d17889bf8e feat(gofile): add configurable link expiration handling (#9329)
* feat(driver): add Gofile storage driver

Add support for Gofile.io cloud storage service with full CRUD operations.
Features:
- File and folder listing
- Upload and download functionality
- Create, move, rename, copy, and delete operations
- Direct link generation for file access
- API token authentication
The driver implements all required driver interfaces and follows
the existing driver patterns in the codebase.

* feat(gofile): add configurable link expiration handling

- Adjusts driver addition metadata to accept LinkExpiry and DirectLinkExpiry options for caching and API expiry control (drivers/gofile/meta.go:10).

- Applies the new options when building file links, setting optional local cache expiration (drivers/gofile/driver.go:101) and sending an expireTime to the direct-link API (drivers/gofile/util.go:202).

- Logs Gofile API error payloads and validates the structured error response before returning it (drivers/gofile/util.go:141).

- Adds the required imports and returns the configured model.Link instance (drivers/gofile/driver.go:6).
2025-09-30 14:16:28 +08:00
千石
4f8bc478d5 refactor(driver): Refactored directory link check logic (#9324)
- Use `filePath` variable to simplify path handling
- Replace `isSymlinkDir` with `isLinkedDir` in `isFolder` check
- Use simplified path variables in `times.Stat` function calls

refactor(util): Optimized directory link check functions

- Renamed `isSymlinkDir` to `isLinkedDir` to expand Windows platform support
- Corrected path resolution logic to ensure link paths are absolute
- Added error handling to prevent path resolution failures
2025-09-14 21:03:58 +08:00
千石
e1800f18e4 feat: Check usage before deleting storage (#9322)
* feat(storage): Added role and user path checking functionality

- Added `GetAllRoles` function to retrieve all roles
- Added `GetAllUsers` function to retrieve all users
- Added `firstPathSegment` function to extract the first segment of a path
- Checks whether a storage object is used by a role or user, and returns the information needed to release that usage

* fix(storage): Fixed a potential null value issue with not checking firstMount.

- Added a check to see if `firstMount` is null to prevent logic errors.
- Adjusted the loading logic of `GetAllRoles` and `GetAllUsers` to only execute when `firstMount` is non-null.
- Fixed the `usedBy` check logic to ensure that an error message is returned under the correct conditions.
- Optimized code structure to reduce unnecessary execution paths.
2025-09-12 17:56:23 +08:00
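The `firstPathSegment` helper mentioned above is straightforward to sketch: trim the leading/trailing slashes and take everything up to the next separator, so a role or user base path can be compared against a storage's mount point. The exact behavior in the codebase may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// firstPathSegment returns the first segment of a slash-separated
// path, or "" for the root. Illustrative version of the helper the
// commit describes.
func firstPathSegment(p string) string {
	p = strings.Trim(p, "/")
	if p == "" {
		return "" // root path has no segment
	}
	if i := strings.IndexByte(p, '/'); i >= 0 {
		return p[:i]
	}
	return p
}

func main() {
	fmt.Println(firstPathSegment("/data/movies/2024")) // data
}
```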
D@' 3z K!7
16cce37947 fix(drivers): add session renewal cron for MediaFire driver (#9321)
- Implement automatic session token renewal every 6-9 minutes
- Add validation for required SessionToken and Cookie fields in Init
- Handle session expiration by calling renewToken on validation failure
- Prevent storage failures due to MediaFire session timeouts

Fixes session closure issues that occur after server restarts or extended periods.

Co-authored-by: Da3zKi7 <da3zki7@duck.com>
2025-09-12 17:53:47 +08:00
千石
6e7c7d1dd0 refactor(auth): Optimize permission path processing logic (#9320)
- Changed permission path collection from map to slice to improve code readability
- Removed redundant path checks to improve path addition efficiency
- Restructured the loop logic for path processing to simplify the path permission assignment process
2025-09-11 21:16:33 +08:00
Chesyre
28a8428559 feat(driver): add Gofile storage driver (#9318)
Add support for Gofile.io cloud storage service with full CRUD operations.
Features:
- File and folder listing
- Upload and download functionality
- Create, move, rename, copy, and delete operations
- Direct link generation for file access
- API token authentication
The driver implements all required driver interfaces and follows
the existing driver patterns in the codebase.
2025-09-11 11:46:31 +08:00
D@' 3z K!7
d0026030cb feat(drivers): add MediaFire driver support (#9319)
- Implement complete MediaFire storage driver
- Add authentication via session_token and cookie
- Support all core operations: List, Get, Link, Put, Copy, Move, Remove, Rename, MakeDir
- Include thumbnail generation for media files
- Handle MediaFire's resumable upload API with multi-unit transfers
- Add proper error handling and progress reporting

Closes "Request support for MediaFire" #7869

Co-authored-by: Da3zKi7 <da3zki7@duck.com>
2025-09-11 11:46:09 +08:00
千石
fcbc79cb24 feat: Support 123pan safebox (#9311)
* feat(meta): Added a SafePassword field

- Added the SafePassword field to meta.go
- Revised the field format to align with the code style
- The SafePassword field is used to supplement the extended functionality

* feat(driver): Added support for safe unlocking logic

- Added safe file unlocking logic in `driver.go`, returning an error if unlocking fails.
- Introduced the `safeBoxUnlocked` variable of type `sync.Map` to record the IDs of unlocked files.
- Enhanced error handling logic to automatically attempt to unlock safe files and re-retrieve the file list.
- Added the `IsLock` field to file types in `types.go` to identify whether they are safe files.
- Added a constant definition for the `SafeBoxUnlock` interface address in `util.go`.
- Added the `unlockSafeBox` method to unlock a safe with a specified file ID via the API.
- Optimized the file retrieval logic to automatically call the unlock method when the safe is locked.

* refactor(driver): Optimize lock field type

- Changed the `IsLock` field type from `int` to `bool` for better semantics.
- Updated the check logic to use direct Boolean comparisons to improve code readability and accuracy.
2025-09-05 19:58:27 +08:00
Sakkyoi Cheng
930f9f6096 fix(ssologin): missing role in SSO auto-registration and minor callback issue (#9305)
* fix(ssologin): return after error response

* fix(ssologin): set default role for SSO user creation
2025-09-04 22:15:39 +08:00
千石
23107483a1 refactor(storage): Comment out the path validation logic (#9308)
- Comment out the error return logic for paths with "/"
- Remove storage path restrictions to allow for flexible handling of root paths
2025-09-04 22:14:33 +08:00
千石
4b288a08ef fix: session invalid issue (#9301)
* feat(auth): Enhanced device login session management

- Upon login, obtain and verify `Client-Id` to ensure unique device sessions.
- If there are too many device sessions, clean up old ones according to the configured policy or return an error.
- If a device session is invalid, deregister the old token and return a 401 error.
- Added `EnsureActiveOnLogin` function to handle the creation and refresh of device sessions during login.

* feat(session): Modified session deletion logic to mark sessions as inactive.

- Changed session deletion logic to mark sessions as inactive using the `MarkInactive` method.
- Adjusted error handling to ensure an error is returned if marking fails.

* feat(session): Added device limits and eviction policies

- Added a device limit, controlling the maximum number of devices using the `MaxDevices` configuration option.
- If the number of devices exceeds the limit, the configured eviction policy is used.
- If the policy is `evict_oldest`, the oldest device is evicted.
- Otherwise, an error message indicating too many devices is returned.

* refactor(session): Filter for the user's oldest active session

- Renamed `GetOldestSession` to `GetOldestActiveSession` to more accurately reflect its functionality
- Updated the SQL query to add the `status = SessionActive` condition to retrieve only active sessions
- Replaced all callpoints and unified the new function name to ensure logical consistency
2025-08-29 21:20:29 +08:00
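The device-limit policy threaded through these commits can be summarized schematically: when a user's active sessions are already at `MaxDevices`, either evict the oldest active session (`evict_oldest`) or refuse the login. The `Session` struct, field names, and error value below are illustrative stand-ins for the real model and op packages:

```go
package main

import (
	"errors"
	"fmt"
	"sort"
)

// Session is a simplified stand-in for the stored device session.
type Session struct {
	DeviceKey  string
	LastActive int64
	Active     bool
}

var errTooManyDevices = errors.New("too many devices")

// ensureCapacity decides whether a new device may log in: under the
// limit it is admitted outright; at the limit, the evict_oldest policy
// names the oldest ACTIVE session to mark inactive, any other policy
// rejects the login.
func ensureCapacity(sessions []Session, maxDevices int, policy string) (evict string, err error) {
	var active []Session
	for _, s := range sessions {
		if s.Active {
			active = append(active, s)
		}
	}
	if len(active) < maxDevices {
		return "", nil // room left, nothing to evict
	}
	if policy != "evict_oldest" {
		return "", errTooManyDevices
	}
	sort.Slice(active, func(i, j int) bool { return active[i].LastActive < active[j].LastActive })
	return active[0].DeviceKey, nil // caller marks this session inactive, then admits the device
}

func main() {
	sessions := []Session{{"a", 100, true}, {"b", 50, true}, {"c", 10, false}}
	evict, err := ensureCapacity(sessions, 2, "evict_oldest")
	fmt.Println(evict, err) // "b": oldest active session; inactive "c" is ignored
}
```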
Sky_slience
63391a2091 fix(readme): remove outdated sponsor links from README files (#9300)
Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-08-29 14:56:54 +08:00
JoaHuang
a11e4cfb31 Merge pull request #9299 from okatu-loli/session-manage-2
fix: session login error
2025-08-29 13:45:10 +08:00
okatu-loli
9a7c82a71e feat(auth): Optimized device session handling logic
- Introduced middleware to handle device sessions
- Changed `handleSession` to `HandleSession` in multiple places in `auth.go` to maintain consistent naming
- Updated response structure to return `device_key` and `token`
2025-08-29 13:31:44 +08:00
okatu-loli
8623da5361 feat(session): Added user session limit and device eviction logic
- Renamed `CountSessionsByUser` to `CountActiveSessionsByUser` and added session status filtering
- Added user and device session limit, with policy handling when exceeding the limit
- Introduced device eviction policy: If the maximum number of devices is exceeded, the oldest session will be evicted using the "evict_oldest" policy
- Modified `LastActive` update logic to ensure accurate session activity time
2025-08-29 11:53:55 +08:00
千石
84adba3acc feat(user): Enhanced role assignment logic (#9297)
- Imported the `utils` package
- Modified the role assignment logic to prevent assigning administrator or guest roles to users
2025-08-28 09:57:34 +08:00
千石
3bf0af1e68 fix(session): Fixed the session status update logic. (#9296)
- Removed the error returned when the session status is `SessionInactive`.
- Updated the `LastActive` field of the session to always record the current time.
2025-08-28 09:57:13 +08:00
千石
de09ba08b6 chore(deps): Update 115driver dependency to v1.1.2 (#9294)
- Upgrade `github.com/SheltonZhu/115driver` to v1.1.2 in `go.mod`
- Modify `replace` to point to `github.com/okatu-loli/115driver v1.1.2`
- Remove old version checksum from `go.sum` and add new version checksum
2025-08-27 17:46:34 +08:00
千石
c64f899a63 feat: implement session management (#9286)
* feat(auth): Added device session management

- Added the `handleSession` function to manage user device sessions and verify client identity
- Updated `auth.go` to call `handleSession` for device handling when a user logs in
- Added the `Session` model to database migrations
- Added `device.go` and `session.go` files to handle device session logic
- Updated `settings.go` to add device-related configuration items, such as the maximum number of devices, device eviction policy, and session TTL

* feat(session): Adds session management features

- Added `SessionInactive` error type in `device.go`
- Added session-related APIs in `router.go` to support listing and evicting sessions
- Added `ListSessionsByUser`, `ListSessions`, and `MarkInactive` methods in `session.go`
- Returns an appropriate error when the session state is `SessionInactive`

* feat(auth): Marks the device session as invalid.

- Import the `session` package into the `auth` module to handle device session status.
- Add a check in the login logic. If `device_key` is obtained, call `session.MarkInactive` to mark the device session as invalid.
- Store the invalid status in the context variable `session_inactive` for subsequent middleware checks.
- Add a check in the session refresh logic to abort the process if the current session has been marked invalid.

* feat(auth, session): Added device information processing and session management changes

- Updated device handling logic in `auth.go` to pass user agent and IP information
- Adjusted database queries in `session.go` to optimize session query fields and add `user_agent` and `ip` fields
- Modified the `Handle` method to add `ua` and `ip` parameters to store the user agent and IP address
- Added the `SessionResp` structure to return a session response containing `user_agent` and `ip`
- Updated the `/admin/user/create` and `/webdav` endpoints to pass the user agent and IP address to the device handler
2025-08-25 19:46:38 +08:00
千石
3319f6ea6a feat(search): Optimized search result filtering and paging logic (#9287)
- Introduced the `filteredNodes` list to optimize the node filtering process
- Filtered results based on the page limit during paging
- Modified search logic to ensure nodes are within the user's base path
- Added access permission checks for node metadata
- Adjusted paging logic to avoid redundant node retrieval
2025-08-25 19:46:24 +08:00
千石
d7723c378f chore(deps): Upgrade 115driver to v1.1.1 (#9283)
- Upgraded `github.com/SheltonZhu/115driver` from v1.0.34 to v1.1.1
- Updated the corresponding version verification information in `go.sum`
2025-08-25 19:46:10 +08:00
千石
a9fcd51bc4 fix: ensure DefaultRole stores role ID while exposing role name in APIs (#9279)
* fix(setting): ensure DefaultRole stores role ID while exposing role name in APIs

- Simplified initial settings to use `model.GUEST` as the default role ID instead of querying roles at startup.
- Updated `GetSetting`, `ListSettings` handlers to:
  - Convert stored role ID into the corresponding role name when returning data.
  - Preserve dynamic role options for selection.
- Removed unused `strings` import and role preloading logic from `InitialSettings`.
- This change avoids DB dependency during initialization while keeping consistent role display for frontend clients.
2025-08-19 15:01:32 +08:00
千石
74e384175b fix(lanzou): correct comment parsing logic in lanzou driver (#9278)
- Adjusted logic to skip incrementing index when exiting comments.
- Added checks to continue loop if inside a single-line or block comment.
- Prevents erroneous parsing and retains intended comment exclusion.
2025-08-19 00:53:52 +08:00
千石
eca500861a feat: add user registration endpoint and role-based default settings (#9277)
* feat(setting): add role-based default and registration settings (closed #feat/register-and-statistics)

- Added `AllowRegister` and `DefaultRole` settings to site configuration.
- Integrated dynamic role options for `DefaultRole` using `op.GetRoles`.
- Updated `setting.go` handlers to manage `DefaultRole` options dynamically.
- Modified `const.go` to include new site settings constants.
- Updated dependencies in `go.mod` and `go.sum` to support new functionality.

* feat(register-and-statistics): add user registration endpoint

- Added `POST /auth/register` endpoint to support user registration.
- Implemented registration logic in `auth.go` with dynamic role assignment.
- Integrated settings `AllowRegister` and `DefaultRole` for registration flow.
- Updated imports to include new modules: `conf`, `setting`.
- Adjusted user creation logic to use `DefaultRole` setting dynamically.

* feat(register-and-statistics): enhance role management logic (#register-and-statistics)

- Refactored CreateRole and UpdateRole functions to handle default role.
- Added dynamic role assignment logic in 'role.go' using conf settings.
- Improved request handling in 'handles/role.go' with structured data.
- Implemented default role logic in 'db/role.go' to update non-default roles.
- Modified 'model/role.go' to include a 'Default' field for role management.

* feat(register-and-statistics): improve role handling logic

- Switch from role names to role IDs for better consistency.
- Update logic to prioritize "guest" for default role ID.
- Adjust `DefaultRole` setting to use role IDs.
- Refactor `getRoleOptions` to return role IDs as a comma-separated string.
2025-08-18 16:38:21 +08:00
千石
97d4f79b96 fix: resolve webdav decode issue (#9268)
* fix: resolve webdav handshake error in permission checks

- Updated role permission logic to handle bidirectional subpaths,
  fixing handshake termination by remote host due to path mismatch.
- Refactored function naming for consistency and clarity.
- Enhanced filtering of objects based on user permissions.
- Modified `makePropstatResponse` to preserve encoded href paths.
- Added test for `makePropstatResponse` to ensure href encoding.

* Delete server/webdav/makepropstatresponse_test.go

* ci(workflow): set GOPROXY for Go builds on GitHub Actions

- Use `GOPROXY=https://proxy.golang.org,direct` to speed up module downloads
- Mitigates network flakiness (e.g., checksum DB timeouts/rate limits)
- `,direct` provides fallback for private/unproxyable modules
- No build logic changes; only affects dependency resolution across all matrix targets

---------

Co-authored-by: AlistGo <opsgit88@gmail.com>
2025-08-16 20:55:17 +08:00
千石
fcfb3369d1 fix: webdav error location (#9266)
* feat: improve WebDAV permission handling and user role fetching

- Added logic to handle root permissions in WebDAV requests.
- Improved the user role fetching mechanism.
- Enhanced path checks and permission scopes in role_perm.go.
- Set FetchRole function to avoid import cycles between modules.

* fix(webdav): resolve connection reset issue by encoding paths

- Adjust path encoding in webdav.go to prevent connection reset.
- Utilize utils.EncodePath for correct path formatting.
- Ensure proper handling of directory paths with trailing slash.

* fix(webdav): resolve connection reset issue by encoding paths

- Adjust path encoding in webdav.go to prevent connection reset.
- Utilize utils.FixAndCleanPath for correct path formatting.
- Ensure proper handling of directory paths with trailing slash.

* fix: resolve webdav handshake error in permission checks

- Updated role permission logic to handle bidirectional subpaths.
- This adjustment fixes the issue where remote host terminates the
  handshake due to improper path matching.

* fix: resolve webdav handshake error in permission checks (fix/fix-webdav-error)

- Updated role permission logic to handle bidirectional subpaths,
  fixing handshake termination by remote host due to path mismatch.
- Refactored function naming for consistency and clarity.
- Enhanced filtering of objects based on user permissions.
2025-08-15 23:10:55 +08:00
千石
aea3ba1499 feat: add tag backup and fix bugs (#9265)
* feat(label): enhance label file binding and router setup (feat/add-tag-backup)

- Add `GetLabelsByFileNamesPublic` to retrieve labels using file names.
- Refactor router setup for label and file binding routes.
- Improve `toObjsResp` for efficient label retrieval by file names.
- Comment out unnecessary user ID parameter in `toObjsResp`.

* refactor(db): comment out debug print in GetLabelIds (#feat/add-tag-backup)

- Comment out debug print statement in GetLabelIds to clean up logs.
- Enhance code readability by removing unnecessary debug output.

* feat(label-file-binding): add batch creation and improve label ID handling

- Introduced `CreateLabelFileBinDingBatch` API for batch label binding.
- Added `collectLabelIDs` helper function to handle label ID parsing.
- Enhanced label ID handling to support varied delimiters and input formats.
- Refactored `CreateLabelFileBinDing` logic for improved code readability.
- Updated router to include `POST /label_file_binding/create_batch`.
2025-08-15 23:09:00 +08:00
千石
6b2d81eede feat(user): enhance path management and role handling (#9249)
- Add `GetUsersByRole` function for fetching users by role.
- Introduce `GetAllBasePathsFromRoles` to aggregate paths from roles.
- Refine path handling in `pkg/utils/path.go` for normalization.
- Comment out base path prefix updates to simplify role operations.
2025-08-06 16:31:36 +08:00
千石
85fe4e5bb3 feat(alist_v3): add IntSlice type for JSON unmarshalling (#9247)
- Add `IntSlice` type to handle both single int and array in JSON.
- Modify `MeResp` struct to use `IntSlice` for `Role` field.
- Import `encoding/json` for JSON operations.
2025-08-04 12:02:45 +08:00
千石
52da07e8a7 feat(123_open): add new driver support for 123 Open (#9246)
- Implement new driver for 123 Open service, enabling file operations
  such as listing, uploading, moving, and removing files.
- Introduce token management for authentication and authorization.
- Add API integration for various file operations and actions.
- Include utility functions for handling API requests and responses.
- Register the new driver in the existing drivers' list.
2025-08-04 11:56:57 +08:00
Sky_slience
46de9e9ebb fix(driver): 123 download and modify request headers on the frontend (#9236)
Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-08-03 20:00:09 +08:00
千石
ae90fb579b feat(log): enhance log formatter to respect NO_COLOR env variable (#9239)
- Adjust log formatter to disable colors when NO_COLOR or ALIST_NO_COLOR
  environment variables are set.
- Reorganize formatter settings for better readability.
2025-08-03 09:26:23 +08:00
Sky_slience
394a18cbd9 Fix 123 download (#9235)
* fix(driver): handle additional HTTP status code 210 for URL redirection

* fix(driver): 123 download url error

---------

Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-07-30 16:55:32 +08:00
千石
280960ce3e feat(user-db): enhance user management with role-based queries (allow-edit-role-guest) (#9234)
- Add `GetUsersByRole` function to fetch users based on their roles.
- Extend `UpdateUserBasePathPrefix` to accept optional user lists.
- Ensure path cleaning in `UpdateUserBasePathPrefix` for consistency.
- Integrate guest role fetching in `auth.go` middleware.
- Utilize `GetUsersByRole` in `role.go` for base path modifications.
- Remove redundant line in `role.go` role modification logic.
2025-07-30 13:15:35 +08:00
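The `GetUsersByRole` diff (in `internal/db/user.go`) is not shown in this view. An in-memory sketch of the filtering contract it implies — the real function presumably queries the database, so this only illustrates the behavior:

```go
package main

import "fmt"

// User is a minimal stand-in for the model's user type.
type User struct {
	Name string
	Role int
}

// getUsersByRole returns all users whose Role matches the requested role.
// Illustrative stand-in for the DB-backed GetUsersByRole.
func getUsersByRole(users []User, role int) []User {
	var out []User
	for _, u := range users {
		if u.Role == role {
			out = append(out, u)
		}
	}
	return out
}

func main() {
	users := []User{{"admin", 2}, {"guest", 1}, {"alice", 0}}
	fmt.Println(getUsersByRole(users, 1)) // [{guest 1}]
}
```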
Sky_slience
74332e91fb feat(ui): add new UI configuration option to settings (#9233)
* feat(ui): add new UI configuration option to settings

* fix(ui): disable new UI feature by default

---------

Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-07-30 12:22:02 +08:00
Sky_slience
540d6c7064 fix(meta): update OAuth token URL and improve default client credentials (#9231) 2025-07-30 10:48:33 +08:00
千石
55b2bb6b80 feat(user-management): Enhance admin management and role handling 2025-07-29 19:45:28 +08:00
qianshi
d5df6fa4cf Merge branch 'main' into feat/allow-edit-role-guest 2025-07-29 19:13:01 +08:00
千石
3353055482 Update Dockerfile.ci (#9230)
chore(docker): Update base image from alpine:edge to alpine:3.20.7 in Dockerfile.ci
2025-07-29 18:35:47 +08:00
千石
4d7c2a09ce docs(README): Add API documentation links across multiple languages (#9225)
- Add API documentation section to `README.md` with link to Apifox
- Add API documentation section to `README_ja.md` with Japanese translation and link to Apifox
- Add API documentation section to `README_cn.md` with Chinese translation and link to Apifox
2025-07-29 09:42:34 +08:00
qianshi
5b8c26510b feat(user-management): Enhance admin management and role handling
- Add `CountEnabledAdminsExcluding` function to count enabled admins excluding a specific user.
- Implement `CountUsersByRoleAndEnabledExclude` in `internal/db/user.go` to support exclusion logic.
- Refactor role handling with switch-case for better readability in `server/handles/role.go`.
- Ensure at least one enabled admin remains when disabling an admin in `server/handles/user.go`.
- Maintain guest role name consistency when updating roles in `internal/op/role.go`.
2025-07-28 23:07:07 +08:00
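The "at least one enabled admin remains" guard described above is not among the diffs shown. A sketch of the check, with an in-memory stand-in for the `CountEnabledAdminsExcluding` query (the upstream version counts via the database):

```go
package main

import "fmt"

type user struct {
	ID       int
	IsAdmin  bool
	Disabled bool
}

// countEnabledAdminsExcluding counts enabled admins other than excludeID.
// Stand-in for the DB-backed CountEnabledAdminsExcluding.
func countEnabledAdminsExcluding(users []user, excludeID int) int {
	n := 0
	for _, u := range users {
		if u.IsAdmin && !u.Disabled && u.ID != excludeID {
			n++
		}
	}
	return n
}

// canDisableAdmin permits disabling an admin only if at least one other
// enabled admin would remain.
func canDisableAdmin(users []user, id int) bool {
	return countEnabledAdminsExcluding(users, id) >= 1
}

func main() {
	users := []user{{1, true, false}, {2, true, false}, {3, false, false}}
	fmt.Println(canDisableAdmin(users, 1)) // true: admin 2 remains enabled
}
```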
千石
91cc7529a0 feat(user/role/storage): enhance user and storage operations with additional validations (#9223)
- Update `CreateUser` to adjust `BasePath` based on user roles and clean paths.
- Modify `UpdateUser` to incorporate role-based path changes.
- Add validation in `CreateStorage` and `UpdateStorage` to prevent root mount path.
- Prevent changes to admin user's role and username in user handler.
- Update `UpdateRole` to modify user base paths when role paths change, and clear user cache accordingly.
- Import `errors` package to handle error messages.
2025-07-27 22:25:45 +08:00
112 changed files with 9270 additions and 255 deletions


@@ -25,6 +25,8 @@ jobs:
- android-arm64
name: Build
runs-on: ${{ matrix.platform }}
env:
GOPROXY: https://proxy.golang.org,direct
steps:
- name: Checkout


@@ -1,4 +1,4 @@
FROM alpine:edge
FROM alpine:3.20.7
ARG TARGETPLATFORM
ARG INSTALL_FFMPEG=false
@@ -31,4 +31,4 @@ RUN /entrypoint.sh version
ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/alist/data/
EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ]
CMD [ "/entrypoint.sh" ]


@@ -57,7 +57,9 @@ English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing
- [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage)
- [x] WebDav(Support OneDrive/SharePoint without API)
- [x] Teambition([China](https://www.teambition.com/ ),[International](https://us.teambition.com/ ))
- [x] [MediaFire](https://www.mediafire.com)
- [x] [Mediatrack](https://www.mediatrack.cn/)
- [x] [ProtonDrive](https://proton.me/drive)
- [x] [139yun](https://yun.139.com/) (Personal, Family, Group)
- [x] [YandexDisk](https://disk.yandex.com/)
- [x] [BaiduNetdisk](http://pan.baidu.com/)
@@ -101,6 +103,10 @@ English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing
<https://alistgo.com/>
## API Documentation (via Apifox):
<https://alist-public.apifox.cn/>
## Demo
<https://al.nn.ci>
@@ -117,8 +123,6 @@ https://alistgo.com/guide/sponsor.html
### Special sponsors
- [VidHub](https://apps.apple.com/app/apple-store/id1659622164?pt=118612019&ct=alist&mt=8) - An elegant cloud video player within the Apple ecosystem. Support for iPhone, iPad, Mac, and Apple TV.
- [亚洲云](https://www.asiayun.com/aff/QQCOOQKZ) - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商 (sponsored Chinese API server)
- [找资源](http://zhaoziyuan2.cc/) - 阿里云盘资源搜索引擎
## Contributors


@@ -57,7 +57,9 @@
- [x] [又拍云对象存储](https://www.upyun.com/products/file-storage)
- [x] WebDav(支持无API的OneDrive/SharePoint)
- [x] Teambition[中国](https://www.teambition.com/ )[国际](https://us.teambition.com/ )
- [x] [MediaFire](https://www.mediafire.com)
- [x] [分秒帧](https://www.mediatrack.cn/)
- [x] [ProtonDrive](https://proton.me/drive)
- [x] [和彩云](https://yun.139.com/) (个人云, 家庭云,共享群组)
- [x] [Yandex.Disk](https://disk.yandex.com/)
- [x] [百度网盘](http://pan.baidu.com/)
@@ -99,6 +101,10 @@
<https://alistgo.com/zh/>
## API 文档(通过 Apifox 提供)
<https://alist-public.apifox.cn/>
## Demo
<https://al.nn.ci>
@@ -114,8 +120,6 @@ AList 是一个开源软件,如果你碰巧喜欢这个项目,并希望我
### 特别赞助
- [VidHub](https://apps.apple.com/app/apple-store/id1659622164?pt=118612019&ct=alist&mt=8) - 苹果生态下优雅的网盘视频播放器，iPhone、iPad、Mac、Apple TV 全平台支持。
- [亚洲云](https://www.asiayun.com/aff/QQCOOQKZ) - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商 (国内API服务器赞助)
- [找资源](http://zhaoziyuan2.cc/) - 阿里云盘资源搜索引擎
## 贡献者


@@ -57,7 +57,9 @@
- [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage)
- [x] WebDav(Support OneDrive/SharePoint without API)
- [x] Teambition([China](https://www.teambition.com/ ),[International](https://us.teambition.com/ ))
- [x] [MediaFire](https://www.mediafire.com)
- [x] [Mediatrack](https://www.mediatrack.cn/)
- [x] [ProtonDrive](https://proton.me/drive)
- [x] [139yun](https://yun.139.com/) (Personal, Family, Group)
- [x] [YandexDisk](https://disk.yandex.com/)
- [x] [BaiduNetdisk](http://pan.baidu.com/)
@@ -100,6 +102,10 @@
<https://alistgo.com/>
## APIドキュメント（Apifox 提供）
<https://alist-public.apifox.cn/>
## デモ
<https://al.nn.ci>
@@ -116,8 +122,6 @@ https://alistgo.com/guide/sponsor.html
### スペシャルスポンサー
- [VidHub](https://apps.apple.com/app/apple-store/id1659622164?pt=118612019&ct=alist&mt=8) - An elegant cloud video player within the Apple ecosystem. Support for iPhone, iPad, Mac, and Apple TV.
- [亚洲云](https://www.asiayun.com/aff/QQCOOQKZ) - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商 (sponsored Chinese API server)
- [找资源](http://zhaoziyuan2.cc/) - 阿里云盘资源搜索引擎
## コントリビューター


@@ -6,6 +6,8 @@ import (
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"time"
@@ -28,7 +30,8 @@ import (
type Pan123 struct {
model.Storage
Addition
apiRateLimit sync.Map
apiRateLimit sync.Map
safeBoxUnlocked sync.Map
}
func (d *Pan123) Config() driver.Config {
@@ -52,9 +55,26 @@ func (d *Pan123) Drop(ctx context.Context) error {
}
func (d *Pan123) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if f, ok := dir.(File); ok && f.IsLock {
if err := d.unlockSafeBox(f.FileId); err != nil {
return nil, err
}
}
files, err := d.getFiles(ctx, dir.GetID(), dir.GetName())
if err != nil {
return nil, err
msg := strings.ToLower(err.Error())
if strings.Contains(msg, "safe box") || strings.Contains(err.Error(), "保险箱") {
if id, e := strconv.ParseInt(dir.GetID(), 10, 64); e == nil {
if e = d.unlockSafeBox(id); e == nil {
files, err = d.getFiles(ctx, dir.GetID(), dir.GetName())
} else {
return nil, e
}
}
}
if err != nil {
return nil, err
}
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return src, nil


@@ -6,8 +6,9 @@ import (
)
type Addition struct {
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
SafePassword string `json:"safe_password"`
driver.RootID
//OrderBy string `json:"order_by" type:"select" options:"file_id,file_name,size,update_at" default:"file_name"`
//OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`


@@ -20,6 +20,7 @@ type File struct {
Etag string `json:"Etag"`
S3KeyFlag string `json:"S3KeyFlag"`
DownloadUrl string `json:"DownloadUrl"`
IsLock bool `json:"IsLock"`
}
func (f File) CreateTime() time.Time {


@@ -43,6 +43,7 @@ const (
S3Auth = MainApi + "/file/s3_upload_object/auth"
UploadCompleteV2 = MainApi + "/file/upload_complete/v2"
S3Complete = MainApi + "/file/s3_complete_multipart_upload"
SafeBoxUnlock = MainApi + "/restful/goapi/v1/file/safe_box/auth/unlockbox"
//AuthKeySalt = "8-8D$sL8gPjom7bk#cY"
)
@@ -238,6 +239,22 @@ do:
return body, nil
}
func (d *Pan123) unlockSafeBox(fileId int64) error {
if _, ok := d.safeBoxUnlocked.Load(fileId); ok {
return nil
}
data := base.Json{"password": d.SafePassword}
url := fmt.Sprintf("%s?fileId=%d", SafeBoxUnlock, fileId)
_, err := d.Request(url, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
if err != nil {
return err
}
d.safeBoxUnlocked.Store(fileId, true)
return nil
}
func (d *Pan123) getFiles(ctx context.Context, parentId string, name string) ([]File, error) {
page := 1
total := 0
@@ -267,6 +284,15 @@ func (d *Pan123) getFiles(ctx context.Context, parentId string, name string) ([]
req.SetQueryParams(query)
}, &resp)
if err != nil {
msg := strings.ToLower(err.Error())
if strings.Contains(msg, "safe box") || strings.Contains(err.Error(), "保险箱") {
if fid, e := strconv.ParseInt(parentId, 10, 64); e == nil {
if e = d.unlockSafeBox(fid); e == nil {
return d.getFiles(ctx, parentId, name)
}
return nil, e
}
}
return nil, err
}
log.Debug(string(_res))

drivers/123_open/api.go (new file, 191 lines)

@@ -0,0 +1,191 @@
package _123Open
import (
"fmt"
"github.com/go-resty/resty/v2"
"net/http"
)
const (
// baseurl
ApiBaseURL = "https://open-api.123pan.com"
// auth
ApiToken = "/api/v1/access_token"
// file list
ApiFileList = "/api/v2/file/list"
// direct link
ApiGetDirectLink = "/api/v1/direct-link/url"
// mkdir
ApiMakeDir = "/upload/v1/file/mkdir"
// remove
ApiRemove = "/api/v1/file/trash"
// upload
ApiUploadDomainURL = "/upload/v2/file/domain"
ApiSingleUploadURL = "/upload/v2/file/single/create"
ApiCreateUploadURL = "/upload/v2/file/create"
ApiUploadSliceURL = "/upload/v2/file/slice"
ApiUploadCompleteURL = "/upload/v2/file/upload_complete"
// move
ApiMove = "/api/v1/file/move"
// rename
ApiRename = "/api/v1/file/name"
)
type Response[T any] struct {
Code int `json:"code"`
Message string `json:"message"`
Data T `json:"data"`
}
type TokenResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data TokenData `json:"data"`
}
type TokenData struct {
AccessToken string `json:"accessToken"`
ExpiredAt string `json:"expiredAt"`
}
type FileListResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data FileListData `json:"data"`
}
type FileListData struct {
LastFileId int64 `json:"lastFileId"`
FileList []File `json:"fileList"`
}
type DirectLinkResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data DirectLinkData `json:"data"`
}
type DirectLinkData struct {
URL string `json:"url"`
}
type MakeDirRequest struct {
Name string `json:"name"`
ParentID int64 `json:"parentID"`
}
type MakeDirResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data MakeDirData `json:"data"`
}
type MakeDirData struct {
DirID int64 `json:"dirID"`
}
type RemoveRequest struct {
FileIDs []int64 `json:"fileIDs"`
}
type UploadCreateResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data UploadCreateData `json:"data"`
}
type UploadCreateData struct {
FileID int64 `json:"fileId"`
Reuse bool `json:"reuse"`
PreuploadID string `json:"preuploadId"`
SliceSize int64 `json:"sliceSize"`
Servers []string `json:"servers"`
}
type UploadUrlResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data UploadUrlData `json:"data"`
}
type UploadUrlData struct {
PresignedURL string `json:"presignedUrl"`
}
type UploadCompleteResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data UploadCompleteData `json:"data"`
}
type UploadCompleteData struct {
FileID int `json:"fileID"`
Completed bool `json:"completed"`
}
func (d *Open123) Request(endpoint string, method string, setup func(*resty.Request), result any) (*resty.Response, error) {
client := resty.New()
token, err := d.tm.getToken()
if err != nil {
return nil, err
}
req := client.R().
SetHeader("Authorization", "Bearer "+token).
SetHeader("Platform", "open_platform").
SetHeader("Content-Type", "application/json").
SetResult(result)
if setup != nil {
setup(req)
}
switch method {
case http.MethodGet:
return req.Get(ApiBaseURL + endpoint)
case http.MethodPost:
return req.Post(ApiBaseURL + endpoint)
case http.MethodPut:
return req.Put(ApiBaseURL + endpoint)
default:
return nil, fmt.Errorf("unsupported method: %s", method)
}
}
func (d *Open123) RequestTo(fullURL string, method string, setup func(*resty.Request), result any) (*resty.Response, error) {
client := resty.New()
token, err := d.tm.getToken()
if err != nil {
return nil, err
}
req := client.R().
SetHeader("Authorization", "Bearer "+token).
SetHeader("Platform", "open_platform").
SetHeader("Content-Type", "application/json").
SetResult(result)
if setup != nil {
setup(req)
}
switch method {
case http.MethodGet:
return req.Get(fullURL)
case http.MethodPost:
return req.Post(fullURL)
case http.MethodPut:
return req.Put(fullURL)
default:
return nil, fmt.Errorf("unsupported method: %s", method)
}
}

drivers/123_open/driver.go (new file, 294 lines)

@@ -0,0 +1,294 @@
package _123Open
import (
"context"
"fmt"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"net/http"
"strconv"
"time"
)
type Open123 struct {
model.Storage
Addition
UploadThread int
tm *tokenManager
}
func (d *Open123) Config() driver.Config {
return config
}
func (d *Open123) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Open123) Init(ctx context.Context) error {
d.tm = newTokenManager(d.ClientID, d.ClientSecret)
if _, err := d.tm.getToken(); err != nil {
return fmt.Errorf("token initialization failed: %w", err)
}
return nil
}
func (d *Open123) Drop(ctx context.Context) error {
return nil
}
func (d *Open123) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
parentFileId, err := strconv.ParseInt(dir.GetID(), 10, 64)
if err != nil {
return nil, err
}
fileLastId := int64(0)
var results []File
for fileLastId != -1 {
files, err := d.getFiles(parentFileId, 100, fileLastId)
if err != nil {
return nil, err
}
for _, f := range files.Data.FileList {
if f.Trashed == 0 {
results = append(results, f)
}
}
fileLastId = files.Data.LastFileId
}
objs := make([]model.Obj, 0, len(results))
for _, f := range results {
objs = append(objs, f)
}
return objs, nil
}
func (d *Open123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if file.IsDir() {
return nil, errs.LinkIsDir
}
fileID := file.GetID()
var result DirectLinkResp
url := fmt.Sprintf("%s?fileID=%s", ApiGetDirectLink, fileID)
_, err := d.Request(url, http.MethodGet, nil, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("get link failed: %s", result.Message)
}
linkURL := result.Data.URL
if d.PrivateKey != "" {
if d.UID == 0 {
return nil, fmt.Errorf("uid is required when private key is set")
}
duration := time.Duration(d.ValidDuration)
if duration <= 0 {
duration = 30
}
signedURL, err := SignURL(linkURL, d.PrivateKey, d.UID, duration*time.Minute)
if err != nil {
return nil, err
}
linkURL = signedURL
}
return &model.Link{
URL: linkURL,
}, nil
}
func (d *Open123) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
parentID, err := strconv.ParseInt(parentDir.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid parent ID: %w", err)
}
var result MakeDirResp
reqBody := MakeDirRequest{
Name: dirName,
ParentID: parentID,
}
_, err = d.Request(ApiMakeDir, http.MethodPost, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("mkdir failed: %s", result.Message)
}
newDir := File{
FileId: result.Data.DirID,
FileName: dirName,
Type: 1,
ParentFileId: int(parentID),
Size: 0,
Trashed: 0,
}
return newDir, nil
}
func (d *Open123) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
srcID, err := strconv.ParseInt(srcObj.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid src file ID: %w", err)
}
dstID, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid dest dir ID: %w", err)
}
var result Response[any]
reqBody := map[string]interface{}{
"fileIDs": []int64{srcID},
"toParentFileID": dstID,
}
_, err = d.Request(ApiMove, http.MethodPost, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("move failed: %s", result.Message)
}
files, err := d.getFiles(dstID, 100, 0)
if err != nil {
return nil, fmt.Errorf("move succeeded but failed to get target dir: %w", err)
}
for _, f := range files.Data.FileList {
if f.FileId == srcID {
return f, nil
}
}
return nil, fmt.Errorf("move succeeded but file not found in target dir")
}
func (d *Open123) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
srcID, err := strconv.ParseInt(srcObj.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid file ID: %w", err)
}
var result Response[any]
reqBody := map[string]interface{}{
"fileId": srcID,
"fileName": newName,
}
_, err = d.Request(ApiRename, http.MethodPut, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("rename failed: %s", result.Message)
}
parentID := 0
if file, ok := srcObj.(File); ok {
parentID = file.ParentFileId
}
files, err := d.getFiles(int64(parentID), 100, 0)
if err != nil {
return nil, fmt.Errorf("rename succeeded but failed to get parent dir: %w", err)
}
for _, f := range files.Data.FileList {
if f.FileId == srcID {
return f, nil
}
}
return nil, fmt.Errorf("rename succeeded but file not found in parent dir")
}
func (d *Open123) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return nil, errs.NotSupport
}
func (d *Open123) Remove(ctx context.Context, obj model.Obj) error {
idStr := obj.GetID()
id, err := strconv.ParseInt(idStr, 10, 64)
if err != nil {
return fmt.Errorf("invalid file ID: %w", err)
}
var result Response[any]
reqBody := RemoveRequest{
FileIDs: []int64{id},
}
_, err = d.Request(ApiRemove, http.MethodPost, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return err
}
if result.Code != 0 {
return fmt.Errorf("remove failed: %s", result.Message)
}
return nil
}
func (d *Open123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
if err != nil {
return err
}
etag := file.GetHash().GetHash(utils.MD5)
if len(etag) < utils.MD5.Width {
up = model.UpdateProgressWithRange(up, 50, 100)
_, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return err
}
}
createResp, err := d.create(parentFileId, file.GetName(), etag, file.GetSize(), 2, false)
if err != nil {
return err
}
if createResp.Data.Reuse {
return nil
}
return d.Upload(ctx, file, parentFileId, createResp, up)
}
func (d *Open123) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
return nil, errs.NotSupport
}
func (d *Open123) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
return nil, errs.NotSupport
}
func (d *Open123) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
return nil, errs.NotSupport
}
func (d *Open123) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
return nil, errs.NotSupport
}
//func (d *Open123) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Open123)(nil)

drivers/123_open/meta.go (new file, 36 lines)

@@ -0,0 +1,36 @@
package _123Open
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootID
ClientID string `json:"client_id" required:"true" label:"Client ID"`
ClientSecret string `json:"client_secret" required:"true" label:"Client Secret"`
PrivateKey string `json:"private_key"`
UID uint64 `json:"uid" type:"number"`
ValidDuration int64 `json:"valid_duration" type:"number" default:"30" help:"minutes"`
}
var config = driver.Config{
Name: "123 Open",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Open123{}
})
}

drivers/123_open/sign.go (new file, 27 lines)

@@ -0,0 +1,27 @@
package _123Open
import (
"crypto/md5"
"fmt"
"math/rand"
"net/url"
"time"
)
func SignURL(originURL, privateKey string, uid uint64, validDuration time.Duration) (string, error) {
if privateKey == "" {
return originURL, nil
}
parsed, err := url.Parse(originURL)
if err != nil {
return "", err
}
ts := time.Now().Add(validDuration).Unix()
randInt := rand.Int()
signature := fmt.Sprintf("%d-%d-%d-%x", ts, randInt, uid, md5.Sum([]byte(fmt.Sprintf("%s-%d-%d-%d-%s",
parsed.Path, ts, randInt, uid, privateKey))))
query := parsed.Query()
query.Add("auth_key", signature)
parsed.RawQuery = query.Encode()
return parsed.String(), nil
}

drivers/123_open/token.go (new file, 85 lines)

@@ -0,0 +1,85 @@
package _123Open
import (
"bytes"
"encoding/json"
"fmt"
"net/http"
"sync"
"time"
)
const tokenURL = ApiBaseURL + ApiToken
type tokenManager struct {
clientID string
clientSecret string
mu sync.Mutex
accessToken string
expireTime time.Time
}
func newTokenManager(clientID, clientSecret string) *tokenManager {
return &tokenManager{
clientID: clientID,
clientSecret: clientSecret,
}
}
func (tm *tokenManager) getToken() (string, error) {
tm.mu.Lock()
defer tm.mu.Unlock()
if tm.accessToken != "" && time.Now().Before(tm.expireTime.Add(-5*time.Minute)) {
return tm.accessToken, nil
}
reqBody := map[string]string{
"clientID": tm.clientID,
"clientSecret": tm.clientSecret,
}
body, _ := json.Marshal(reqBody)
req, err := http.NewRequest("POST", tokenURL, bytes.NewBuffer(body))
if err != nil {
return "", err
}
req.Header.Set("Platform", "open_platform")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
var result TokenResp
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return "", err
}
if result.Code != 0 {
return "", fmt.Errorf("get token failed: %s", result.Message)
}
tm.accessToken = result.Data.AccessToken
expireAt, err := time.Parse(time.RFC3339, result.Data.ExpiredAt)
if err != nil {
return "", fmt.Errorf("parse expire time failed: %w", err)
}
tm.expireTime = expireAt
return tm.accessToken, nil
}
func (tm *tokenManager) buildHeaders() (http.Header, error) {
token, err := tm.getToken()
if err != nil {
return nil, err
}
header := http.Header{}
header.Set("Authorization", "Bearer "+token)
header.Set("Platform", "open_platform")
header.Set("Content-Type", "application/json")
return header, nil
}

drivers/123_open/types.go (new file, 70 lines)

@@ -0,0 +1,70 @@
package _123Open
import (
"fmt"
"github.com/alist-org/alist/v3/pkg/utils"
"time"
)
type File struct {
FileName string `json:"filename"`
Size int64 `json:"size"`
CreateAt string `json:"createAt"`
UpdateAt string `json:"updateAt"`
FileId int64 `json:"fileId"`
Type int `json:"type"`
Etag string `json:"etag"`
S3KeyFlag string `json:"s3KeyFlag"`
ParentFileId int `json:"parentFileId"`
Category int `json:"category"`
Status int `json:"status"`
Trashed int `json:"trashed"`
}
func (f File) GetID() string {
return fmt.Sprint(f.FileId)
}
func (f File) GetName() string {
return f.FileName
}
func (f File) GetSize() int64 {
return f.Size
}
func (f File) IsDir() bool {
return f.Type == 1
}
func (f File) GetModified() string {
return f.UpdateAt
}
func (f File) GetThumb() string {
return ""
}
func (f File) ModTime() time.Time {
t, err := time.Parse("2006-01-02 15:04:05", f.UpdateAt)
if err != nil {
return time.Time{}
}
return t
}
func (f File) CreateTime() time.Time {
t, err := time.Parse("2006-01-02 15:04:05", f.CreateAt)
if err != nil {
return time.Time{}
}
return t
}
func (f File) GetHash() utils.HashInfo {
return utils.NewHashInfo(utils.MD5, f.Etag)
}
func (f File) GetPath() string {
return ""
}

drivers/123_open/upload.go (new file, 282 lines)

@@ -0,0 +1,282 @@
package _123Open
import (
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"golang.org/x/sync/errgroup"
"io"
"mime/multipart"
"net/http"
"runtime"
"strconv"
"time"
)
func (d *Open123) create(parentFileID int64, filename, etag string, size int64, duplicate int, containDir bool) (*UploadCreateResp, error) {
var resp UploadCreateResp
_, err := d.Request(ApiCreateUploadURL, http.MethodPost, func(req *resty.Request) {
body := base.Json{
"parentFileID": parentFileID,
"filename": filename,
"etag": etag,
"size": size,
}
if duplicate > 0 {
body["duplicate"] = duplicate
}
if containDir {
body["containDir"] = true
}
req.SetBody(body)
}, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *Open123) GetUploadDomains() ([]string, error) {
var resp struct {
Code int `json:"code"`
Message string `json:"message"`
Data []string `json:"data"`
}
_, err := d.Request(ApiUploadDomainURL, http.MethodGet, nil, &resp)
if err != nil {
return nil, err
}
if resp.Code != 0 {
return nil, fmt.Errorf("get upload domain failed: %s", resp.Message)
}
return resp.Data, nil
}
func (d *Open123) UploadSingle(ctx context.Context, createResp *UploadCreateResp, file model.FileStreamer, parentID int64) error {
domain := createResp.Data.Servers[0]
etag := file.GetHash().GetHash(utils.MD5)
if len(etag) < utils.MD5.Width {
_, _, err := stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return err
}
}
reader, err := file.RangeRead(http_range.Range{Start: 0, Length: file.GetSize()})
if err != nil {
return err
}
reader = driver.NewLimitedUploadStream(ctx, reader)
var b bytes.Buffer
mw := multipart.NewWriter(&b)
mw.WriteField("parentFileID", fmt.Sprint(parentID))
mw.WriteField("filename", file.GetName())
mw.WriteField("etag", etag)
mw.WriteField("size", fmt.Sprint(file.GetSize()))
fw, _ := mw.CreateFormFile("file", file.GetName())
_, err = io.Copy(fw, reader)
mw.Close()
if err != nil {
return err
}
req, err := http.NewRequestWithContext(ctx, "POST", domain+ApiSingleUploadURL, &b)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+d.tm.accessToken)
req.Header.Set("Platform", "open_platform")
req.Header.Set("Content-Type", mw.FormDataContentType())
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var result struct {
Code int `json:"code"`
Message string `json:"message"`
Data struct {
FileID int64 `json:"fileID"`
Completed bool `json:"completed"`
} `json:"data"`
}
body, _ := io.ReadAll(resp.Body)
if err := json.Unmarshal(body, &result); err != nil {
return fmt.Errorf("unmarshal response error: %v, body: %s", err, string(body))
}
if result.Code != 0 {
return fmt.Errorf("upload failed: %s", result.Message)
}
if !result.Data.Completed || result.Data.FileID == 0 {
return fmt.Errorf("upload incomplete or missing fileID")
}
return nil
}
func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, parentID int64, createResp *UploadCreateResp, up driver.UpdateProgress) error {
if cacher, ok := file.(interface{ CacheFullInTempFile() (model.File, error) }); ok {
if _, err := cacher.CacheFullInTempFile(); err != nil {
return err
}
}
size := file.GetSize()
chunkSize := createResp.Data.SliceSize
uploadNums := (size + chunkSize - 1) / chunkSize
uploadDomain := createResp.Data.Servers[0]
if d.UploadThread <= 0 {
cpuCores := runtime.NumCPU()
threads := cpuCores * 2
if threads < 4 {
threads = 4
}
if threads > 16 {
threads = 16
}
d.UploadThread = threads
fmt.Printf("[Upload] Auto set upload concurrency: %d (CPU cores=%d)\n", d.UploadThread, cpuCores)
}
fmt.Printf("[Upload] File size: %d bytes, chunk size: %d bytes, total slices: %d, concurrency: %d\n",
size, chunkSize, uploadNums, d.UploadThread)
if size <= 1<<30 {
return d.UploadSingle(ctx, createResp, file, parentID)
}
if createResp.Data.Reuse {
up(100)
return nil
}
client := resty.New()
semaphore := make(chan struct{}, d.UploadThread)
threadG, _ := errgroup.WithContext(ctx)
var progressArr = make([]int64, uploadNums)
for partIndex := int64(0); partIndex < uploadNums; partIndex++ {
partIndex := partIndex
semaphore <- struct{}{}
threadG.Go(func() error {
defer func() { <-semaphore }()
offset := partIndex * chunkSize
length := min(chunkSize, size-offset)
partNumber := partIndex + 1
fmt.Printf("[Slice %d] Starting read from offset %d, length %d\n", partNumber, offset, length)
reader, err := file.RangeRead(http_range.Range{Start: offset, Length: length})
if err != nil {
return fmt.Errorf("[Slice %d] RangeRead error: %v", partNumber, err)
}
buf := make([]byte, length)
n, err := io.ReadFull(reader, buf)
if err != nil && err != io.EOF {
return fmt.Errorf("[Slice %d] Read error: %v", partNumber, err)
}
buf = buf[:n]
hash := md5.Sum(buf)
sliceMD5Str := hex.EncodeToString(hash[:])
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
writer.WriteField("preuploadID", createResp.Data.PreuploadID)
writer.WriteField("sliceNo", strconv.FormatInt(partNumber, 10))
writer.WriteField("sliceMD5", sliceMD5Str)
partName := fmt.Sprintf("%s.part%d", file.GetName(), partNumber)
fw, _ := writer.CreateFormFile("slice", partName)
fw.Write(buf)
writer.Close()
resp, err := client.R().
SetHeader("Authorization", "Bearer "+d.tm.accessToken).
SetHeader("Platform", "open_platform").
SetHeader("Content-Type", writer.FormDataContentType()).
SetBody(body.Bytes()).
Post(uploadDomain + ApiUploadSliceURL)
if err != nil {
return fmt.Errorf("[Slice %d] Upload HTTP error: %v", partNumber, err)
}
if resp.StatusCode() != 200 {
return fmt.Errorf("[Slice %d] Upload failed with status: %s, resp: %s", partNumber, resp.Status(), resp.String())
}
progressArr[partIndex] = length
var totalUploaded int64 = 0
for _, v := range progressArr {
totalUploaded += v
}
if up != nil {
percent := float64(totalUploaded) / float64(size) * 100
up(percent)
}
fmt.Printf("[Slice %d] MD5: %s\n", partNumber, sliceMD5Str)
fmt.Printf("[Slice %d] Upload finished\n", partNumber)
return nil
})
}
if err := threadG.Wait(); err != nil {
return err
}
var completeResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data struct {
Completed bool `json:"completed"`
FileID int64 `json:"fileID"`
} `json:"data"`
}
for {
reqBody := fmt.Sprintf(`{"preuploadID":"%s"}`, createResp.Data.PreuploadID)
req, err := http.NewRequestWithContext(ctx, "POST", uploadDomain+ApiUploadCompleteURL, bytes.NewBufferString(reqBody))
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+d.tm.accessToken)
req.Header.Set("Platform", "open_platform")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
if err := json.Unmarshal(body, &completeResp); err != nil {
return fmt.Errorf("completion response unmarshal error: %v, body: %s", err, string(body))
}
if completeResp.Code != 0 {
return fmt.Errorf("completion API returned error code %d: %s", completeResp.Code, completeResp.Message)
}
if completeResp.Data.Completed && completeResp.Data.FileID != 0 {
fmt.Printf("[Upload] Upload completed successfully. FileID: %d\n", completeResp.Data.FileID)
break
}
time.Sleep(time.Second)
}
up(100)
return nil
}
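The slice loop above derives each part's offset and length by ceiling-dividing the file size by the chunk size, so every part is full-sized except a shorter final remainder. A minimal standalone sketch of that arithmetic (`sliceLengths` is a hypothetical helper, not part of the driver):

```go
package main

import "fmt"

// sliceLengths mirrors the partitioning above: ceil(size/chunkSize) parts,
// each chunkSize bytes long except a shorter final remainder.
func sliceLengths(size, chunkSize int64) []int64 {
	uploadNums := (size + chunkSize - 1) / chunkSize
	lengths := make([]int64, 0, uploadNums)
	for partIndex := int64(0); partIndex < uploadNums; partIndex++ {
		offset := partIndex * chunkSize
		length := chunkSize
		if size-offset < length {
			length = size - offset
		}
		lengths = append(lengths, length)
	}
	return lengths
}

func main() {
	// A 25 MiB file with 10 MiB chunks: two full parts plus a 5 MiB tail.
	fmt.Println(sliceLengths(25<<20, 10<<20))
}
```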

drivers/123_open/util.go Normal file

@@ -0,0 +1,20 @@
package _123Open
import (
"fmt"
"net/http"
)
func (d *Open123) getFiles(parentFileId int64, limit int, lastFileId int64) (*FileListResp, error) {
var result FileListResp
url := fmt.Sprintf("%s?parentFileId=%d&limit=%d&lastFileId=%d", ApiFileList, parentFileId, limit, lastFileId)
_, err := d.Request(url, http.MethodGet, nil, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("list error: %s", result.Message)
}
return &result, nil
}


@@ -80,9 +80,10 @@ func (d *Cloud189) Link(ctx context.Context, file model.Obj, args model.LinkArgs
}
func (d *Cloud189) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
safeName := d.sanitizeName(dirName)
form := map[string]string{
"parentFolderId": parentDir.GetID(),
"folderName": dirName,
"folderName": safeName,
}
_, err := d.request("https://cloud.189.cn/api/open/file/createFolder.action", http.MethodPost, func(req *resty.Request) {
req.SetFormData(form)
@@ -126,9 +127,10 @@ func (d *Cloud189) Rename(ctx context.Context, srcObj model.Obj, newName string)
idKey = "folderId"
nameKey = "destFolderName"
}
safeName := d.sanitizeName(newName)
form := map[string]string{
idKey: srcObj.GetID(),
- nameKey: newName,
+ nameKey: safeName,
}
_, err := d.request(url, http.MethodPost, func(req *resty.Request) {
req.SetFormData(form)


@@ -6,9 +6,10 @@ import (
)
type Addition struct {
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
Cookie string `json:"cookie" help:"Fill in the cookie if need captcha"`
+ StripEmoji bool `json:"strip_emoji" help:"Remove four-byte characters (e.g., emoji) before upload"`
driver.RootID
}


@@ -11,9 +11,11 @@ import (
"io"
"math"
"net/http"
"path"
"strconv"
"strings"
"time"
"unicode/utf8"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
@@ -222,13 +224,37 @@ func (d *Cloud189) getFiles(fileId string) ([]model.Obj, error) {
return res, nil
}
func (d *Cloud189) sanitizeName(name string) string {
if !d.StripEmoji {
return name
}
b := strings.Builder{}
for _, r := range name {
if utf8.RuneLen(r) == 4 {
continue
}
b.WriteRune(r)
}
sanitized := b.String()
if sanitized == "" {
ext := path.Ext(name)
if ext != "" {
sanitized = "file" + ext
} else {
sanitized = "file"
}
}
return sanitized
}
func (d *Cloud189) oldUpload(dstDir model.Obj, file model.FileStreamer) error {
safeName := d.sanitizeName(file.GetName())
res, err := d.client.R().SetMultipartFormData(map[string]string{
"parentId": dstDir.GetID(),
"sessionKey": "??",
"opertype": "1",
"fname": file.GetName(),
}).SetMultipartField("Filedata", file.GetName(), file.GetMimetype(), file).Post("https://hb02.upload.cloud.189.cn/v1/DCIWebUploadAction")
"fname": safeName,
}).SetMultipartField("Filedata", safeName, file.GetMimetype(), file).Post("https://hb02.upload.cloud.189.cn/v1/DCIWebUploadAction")
if err != nil {
return err
}
@@ -313,9 +339,10 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
const DEFAULT int64 = 10485760
var count = int64(math.Ceil(float64(file.GetSize()) / float64(DEFAULT)))
safeName := d.sanitizeName(file.GetName())
res, err := d.uploadRequest("/person/initMultiUpload", map[string]string{
"parentFolderId": dstDir.GetID(),
"fileName": encode(file.GetName()),
"fileName": encode(safeName),
"fileSize": strconv.FormatInt(file.GetSize(), 10),
"sliceSize": strconv.FormatInt(DEFAULT, 10),
"lazyCheck": "1",


@@ -205,10 +205,11 @@ func (y *Cloud189PC) MakeDir(ctx context.Context, parentDir model.Obj, dirName s
fullUrl += "/createFolder.action"
var newFolder Cloud189Folder
safeName := y.sanitizeName(dirName)
_, err := y.post(fullUrl, func(req *resty.Request) {
req.SetContext(ctx)
req.SetQueryParams(map[string]string{
"folderName": dirName,
"folderName": safeName,
"relativePath": "",
})
if isFamily {
@@ -225,6 +226,7 @@ func (y *Cloud189PC) MakeDir(ctx context.Context, parentDir model.Obj, dirName s
if err != nil {
return nil, err
}
newFolder.Name = safeName
return &newFolder, nil
}
@@ -258,21 +260,29 @@ func (y *Cloud189PC) Rename(ctx context.Context, srcObj model.Obj, newName strin
}
var newObj model.Obj
safeName := y.sanitizeName(newName)
switch f := srcObj.(type) {
case *Cloud189File:
fullUrl += "/renameFile.action"
queryParam["fileId"] = srcObj.GetID()
queryParam["destFileName"] = newName
queryParam["destFileName"] = safeName
newObj = &Cloud189File{Icon: f.Icon} // 复用预览
case *Cloud189Folder:
fullUrl += "/renameFolder.action"
queryParam["folderId"] = srcObj.GetID()
queryParam["destFolderName"] = newName
queryParam["destFolderName"] = safeName
newObj = &Cloud189Folder{}
default:
return nil, errs.NotSupport
}
switch obj := newObj.(type) {
case *Cloud189File:
obj.Name = safeName
case *Cloud189Folder:
obj.Name = safeName
}
_, err := y.request(fullUrl, method, func(req *resty.Request) {
req.SetContext(ctx).SetQueryParams(queryParam)
}, nil, newObj, isFamily)


@@ -6,9 +6,10 @@ import (
)
type Addition struct {
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
VCode string `json:"validate_code"`
+ StripEmoji bool `json:"strip_emoji" help:"Remove four-byte characters (e.g., emoji) before upload"`
driver.RootID
OrderBy string `json:"order_by" type:"select" options:"filename,filesize,lastOpTime" default:"filename"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`


@@ -12,11 +12,13 @@ import (
"net/http/cookiejar"
"net/url"
"os"
"path"
"regexp"
"sort"
"strconv"
"strings"
"time"
"unicode/utf8"
"golang.org/x/sync/semaphore"
@@ -57,6 +59,29 @@ const (
CHANNEL_ID = "web_cloud.189.cn"
)
func (y *Cloud189PC) sanitizeName(name string) string {
if !y.StripEmoji {
return name
}
b := strings.Builder{}
for _, r := range name {
if utf8.RuneLen(r) == 4 {
continue
}
b.WriteRune(r)
}
sanitized := b.String()
if sanitized == "" {
ext := path.Ext(name)
if ext != "" {
sanitized = "file" + ext
} else {
sanitized = "file"
}
}
return sanitized
}
func (y *Cloud189PC) SignatureHeader(url, method, params string, isFamily bool) map[string]string {
dateOfGmt := getHttpDateStr()
sessionKey := y.getTokenInfo().SessionKey
@@ -475,10 +500,11 @@ func (y *Cloud189PC) refreshSession() (err error) {
func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
size := file.GetSize()
sliceSize := partSize(size)
safeName := y.sanitizeName(file.GetName())
params := Params{
"parentFolderId": dstDir.GetID(),
"fileName": url.QueryEscape(file.GetName()),
"fileName": url.QueryEscape(safeName),
"fileSize": fmt.Sprint(file.GetSize()),
"sliceSize": fmt.Sprint(sliceSize),
"lazyCheck": "1",
@@ -596,7 +622,8 @@ func (y *Cloud189PC) RapidUpload(ctx context.Context, dstDir model.Obj, stream m
return nil, errors.New("invalid hash")
}
- uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, stream.GetName(), fmt.Sprint(stream.GetSize()), isFamily)
+ safeName := y.sanitizeName(stream.GetName())
+ uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, safeName, fmt.Sprint(stream.GetSize()), isFamily)
if err != nil {
return nil, err
}
@@ -615,6 +642,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
tmpF *os.File
err error
)
safeName := y.sanitizeName(file.GetName())
size := file.GetSize()
if _, ok := cache.(io.ReaderAt); !ok && size > 0 {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
@@ -697,7 +725,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
//step.2 预上传
params := Params{
"parentFolderId": dstDir.GetID(),
"fileName": url.QueryEscape(file.GetName()),
"fileName": url.QueryEscape(safeName),
"fileSize": fmt.Sprint(file.GetSize()),
"fileMd5": fileMd5Hex,
"sliceSize": fmt.Sprint(sliceSize),
@@ -833,9 +861,10 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
return nil, err
}
rateLimited := driver.NewLimitedUploadStream(ctx, io.NopCloser(tempFile))
safeName := y.sanitizeName(file.GetName())
// 创建上传会话
- uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, file.GetName(), fmt.Sprint(file.GetSize()), isFamily)
+ uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, safeName, fmt.Sprint(file.GetSize()), isFamily)
if err != nil {
return nil, err
}


@@ -1,6 +1,7 @@
package alist_v3
import (
"encoding/json"
"time"
"github.com/alist-org/alist/v3/internal/model"
@@ -72,15 +73,15 @@ type LoginResp struct {
}
type MeResp struct {
Id int `json:"id"`
Username string `json:"username"`
Password string `json:"password"`
BasePath string `json:"base_path"`
- Role []int `json:"role"`
+ Role IntSlice `json:"role"`
Disabled bool `json:"disabled"`
Permission int `json:"permission"`
SsoId string `json:"sso_id"`
Otp bool `json:"otp"`
}
type ArchiveMetaReq struct {
@@ -168,3 +169,17 @@ type DecompressReq struct {
PutIntoNewDir bool `json:"put_into_new_dir"`
SrcDir string `json:"src_dir"`
}
type IntSlice []int
func (s *IntSlice) UnmarshalJSON(data []byte) error {
if len(data) > 0 && data[0] == '[' {
return json.Unmarshal(data, (*[]int)(s))
}
var single int
if err := json.Unmarshal(data, &single); err != nil {
return err
}
*s = []int{single}
return nil
}


@@ -11,7 +11,7 @@ type Addition struct {
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
- OauthTokenURL string `json:"oauth_token_url" default:"https://api.nn.ci/alist/ali_open/token"`
+ OauthTokenURL string `json:"oauth_token_url" default:"https://api.alistgo.com/alist/ali_open/token"`
ClientID string `json:"client_id" required:"false" help:"Keep it empty if you don't have one"`
ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
RemoveWay string `json:"remove_way" required:"true" type:"select" options:"trash,delete"`


@@ -6,6 +6,7 @@ import (
_ "github.com/alist-org/alist/v3/drivers/115_share"
_ "github.com/alist-org/alist/v3/drivers/123"
_ "github.com/alist-org/alist/v3/drivers/123_link"
_ "github.com/alist-org/alist/v3/drivers/123_open"
_ "github.com/alist-org/alist/v3/drivers/123_share"
_ "github.com/alist-org/alist/v3/drivers/139"
_ "github.com/alist-org/alist/v3/drivers/189"
@@ -20,6 +21,7 @@ import (
_ "github.com/alist-org/alist/v3/drivers/baidu_netdisk"
_ "github.com/alist-org/alist/v3/drivers/baidu_photo"
_ "github.com/alist-org/alist/v3/drivers/baidu_share"
_ "github.com/alist-org/alist/v3/drivers/bitqiu"
_ "github.com/alist-org/alist/v3/drivers/chaoxing"
_ "github.com/alist-org/alist/v3/drivers/cloudreve"
_ "github.com/alist-org/alist/v3/drivers/cloudreve_v4"
@@ -29,8 +31,10 @@ import (
_ "github.com/alist-org/alist/v3/drivers/dropbox"
_ "github.com/alist-org/alist/v3/drivers/febbox"
_ "github.com/alist-org/alist/v3/drivers/ftp"
_ "github.com/alist-org/alist/v3/drivers/gitee"
_ "github.com/alist-org/alist/v3/drivers/github"
_ "github.com/alist-org/alist/v3/drivers/github_releases"
_ "github.com/alist-org/alist/v3/drivers/gofile"
_ "github.com/alist-org/alist/v3/drivers/google_drive"
_ "github.com/alist-org/alist/v3/drivers/google_photo"
_ "github.com/alist-org/alist/v3/drivers/halalcloud"
@@ -40,6 +44,7 @@ import (
_ "github.com/alist-org/alist/v3/drivers/lanzou"
_ "github.com/alist-org/alist/v3/drivers/lenovonas_share"
_ "github.com/alist-org/alist/v3/drivers/local"
_ "github.com/alist-org/alist/v3/drivers/mediafire"
_ "github.com/alist-org/alist/v3/drivers/mediatrack"
_ "github.com/alist-org/alist/v3/drivers/mega"
_ "github.com/alist-org/alist/v3/drivers/misskey"
@@ -48,8 +53,10 @@ import (
_ "github.com/alist-org/alist/v3/drivers/onedrive"
_ "github.com/alist-org/alist/v3/drivers/onedrive_app"
_ "github.com/alist-org/alist/v3/drivers/onedrive_sharelink"
_ "github.com/alist-org/alist/v3/drivers/pcloud"
_ "github.com/alist-org/alist/v3/drivers/pikpak"
_ "github.com/alist-org/alist/v3/drivers/pikpak_share"
_ "github.com/alist-org/alist/v3/drivers/proton_drive"
_ "github.com/alist-org/alist/v3/drivers/quark_uc"
_ "github.com/alist-org/alist/v3/drivers/quark_uc_tv"
_ "github.com/alist-org/alist/v3/drivers/quqi"


@@ -11,8 +11,8 @@ type Addition struct {
OrderBy string `json:"order_by" type:"select" options:"name,time,size" default:"name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
DownloadAPI string `json:"download_api" type:"select" options:"official,crack,crack_video" default:"official"`
- ClientID string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
- ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
+ ClientID string `json:"client_id" required:"true" default:"hq9yQ9w9kR4YHj1kyYafLygVocobh7Sf"`
+ ClientSecret string `json:"client_secret" required:"true" default:"YH2VpZcFJHYNnV6vLfHQXDBhcE7ZChyE"`
CustomCrackUA string `json:"custom_crack_ua" required:"true" default:"netdisk"`
AccessToken string
UploadThread string `json:"upload_thread" default:"3" help:"1<=thread<=32"`

drivers/bitqiu/driver.go Normal file

@@ -0,0 +1,767 @@
package bitqiu
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http/cookiejar"
"path"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
const (
baseURL = "https://pan.bitqiu.com"
loginURL = baseURL + "/loginServer/login"
userInfoURL = baseURL + "/user/getInfo"
listURL = baseURL + "/apiToken/cfi/fs/resources/pages"
uploadInitializeURL = baseURL + "/apiToken/cfi/fs/upload/v2/initialize"
uploadCompleteURL = baseURL + "/apiToken/cfi/fs/upload/v2/complete"
downloadURL = baseURL + "/download/getUrl"
createDirURL = baseURL + "/resource/create"
moveResourceURL = baseURL + "/resource/remove"
renameResourceURL = baseURL + "/resource/rename"
copyResourceURL = baseURL + "/apiToken/cfi/fs/async/copy"
copyManagerURL = baseURL + "/apiToken/cfi/fs/async/manager"
deleteResourceURL = baseURL + "/resource/delete"
successCode = "10200"
uploadSuccessCode = "30010"
copySubmittedCode = "10300"
orgChannel = "default|default|default"
)
const (
copyPollInterval = time.Second
copyPollMaxAttempts = 60
chunkSize = int64(1 << 20)
)
const defaultUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36"
type BitQiu struct {
model.Storage
Addition
client *resty.Client
userID string
}
func (d *BitQiu) Config() driver.Config {
return config
}
func (d *BitQiu) GetAddition() driver.Additional {
return &d.Addition
}
func (d *BitQiu) Init(ctx context.Context) error {
if d.Addition.UserPlatform == "" {
d.Addition.UserPlatform = uuid.NewString()
op.MustSaveDriverStorage(d)
}
if d.client == nil {
jar, err := cookiejar.New(nil)
if err != nil {
return err
}
d.client = base.NewRestyClient()
d.client.SetBaseURL(baseURL)
d.client.SetCookieJar(jar)
}
d.client.SetHeader("user-agent", d.userAgent())
return d.login(ctx)
}
func (d *BitQiu) Drop(ctx context.Context) error {
d.client = nil
d.userID = ""
return nil
}
func (d *BitQiu) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
parentID := d.resolveParentID(dir)
dirPath := ""
if dir != nil {
dirPath = dir.GetPath()
}
pageSize := d.pageSize()
orderType := d.orderType()
desc := d.orderDesc()
var results []model.Obj
page := 1
for {
form := map[string]string{
"parentId": parentID,
"limit": strconv.Itoa(pageSize),
"orderType": orderType,
"desc": desc,
"model": "1",
"userId": d.userID,
"currentPage": strconv.Itoa(page),
"page": strconv.Itoa(page),
"org_channel": orgChannel,
}
var resp Response[ResourcePage]
if err := d.postForm(ctx, listURL, form, &resp); err != nil {
return nil, err
}
if resp.Code != successCode {
if resp.Code == "10401" || resp.Code == "10404" {
if err := d.login(ctx); err != nil {
return nil, err
}
continue
}
return nil, fmt.Errorf("list failed: %s", resp.Message)
}
objs, err := utils.SliceConvert(resp.Data.Data, func(item Resource) (model.Obj, error) {
return item.toObject(parentID, dirPath)
})
if err != nil {
return nil, err
}
results = append(results, objs...)
if !resp.Data.HasNext || len(resp.Data.Data) == 0 {
break
}
page++
}
return results, nil
}
func (d *BitQiu) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if file.IsDir() {
return nil, errs.NotFile
}
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
form := map[string]string{
"fileIds": file.GetID(),
"org_channel": orgChannel,
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[DownloadData]
if err := d.postForm(ctx, downloadURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode:
if resp.Data.URL == "" {
return nil, fmt.Errorf("empty download url returned")
}
return &model.Link{URL: resp.Data.URL}, nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("get link failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("get link failed: retry limit reached")
}
func (d *BitQiu) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
parentID := d.resolveParentID(parentDir)
parentPath := ""
if parentDir != nil {
parentPath = parentDir.GetPath()
}
form := map[string]string{
"parentId": parentID,
"name": dirName,
"org_channel": orgChannel,
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[CreateDirData]
if err := d.postForm(ctx, createDirURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode:
newParentID := parentID
if resp.Data.ParentID != "" {
newParentID = resp.Data.ParentID
}
name := resp.Data.Name
if name == "" {
name = dirName
}
resource := Resource{
ResourceID: resp.Data.DirID,
ResourceType: 1,
Name: name,
ParentID: newParentID,
}
obj, err := resource.toObject(newParentID, parentPath)
if err != nil {
return nil, err
}
if o, ok := obj.(*Object); ok {
o.ParentID = newParentID
}
return obj, nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("create folder failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("create folder failed: retry limit reached")
}
func (d *BitQiu) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
targetParentID := d.resolveParentID(dstDir)
form := map[string]string{
"dirIds": "",
"fileIds": "",
"parentId": targetParentID,
"org_channel": orgChannel,
}
if srcObj.IsDir() {
form["dirIds"] = srcObj.GetID()
} else {
form["fileIds"] = srcObj.GetID()
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[any]
if err := d.postForm(ctx, moveResourceURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode:
dstPath := ""
if dstDir != nil {
dstPath = dstDir.GetPath()
}
if setter, ok := srcObj.(model.SetPath); ok {
setter.SetPath(path.Join(dstPath, srcObj.GetName()))
}
if o, ok := srcObj.(*Object); ok {
o.ParentID = targetParentID
}
return srcObj, nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("move failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("move failed: retry limit reached")
}
func (d *BitQiu) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
form := map[string]string{
"resourceId": srcObj.GetID(),
"name": newName,
"type": "0",
"org_channel": orgChannel,
}
if srcObj.IsDir() {
form["type"] = "1"
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[any]
if err := d.postForm(ctx, renameResourceURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode:
return updateObjectName(srcObj, newName), nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("rename failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("rename failed: retry limit reached")
}
func (d *BitQiu) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
targetParentID := d.resolveParentID(dstDir)
form := map[string]string{
"dirIds": "",
"fileIds": "",
"parentId": targetParentID,
"org_channel": orgChannel,
}
if srcObj.IsDir() {
form["dirIds"] = srcObj.GetID()
} else {
form["fileIds"] = srcObj.GetID()
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[any]
if err := d.postForm(ctx, copyResourceURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode, copySubmittedCode:
return d.waitForCopiedObject(ctx, srcObj, dstDir)
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("copy failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("copy failed: retry limit reached")
}
func (d *BitQiu) Remove(ctx context.Context, obj model.Obj) error {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return err
}
}
form := map[string]string{
"dirIds": "",
"fileIds": "",
"org_channel": orgChannel,
}
if obj.IsDir() {
form["dirIds"] = obj.GetID()
} else {
form["fileIds"] = obj.GetID()
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[any]
if err := d.postForm(ctx, deleteResourceURL, form, &resp); err != nil {
return err
}
switch resp.Code {
case successCode:
return nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return err
}
default:
return fmt.Errorf("remove failed: %s", resp.Message)
}
}
return fmt.Errorf("remove failed: retry limit reached")
}
func (d *BitQiu) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
up(0)
tmpFile, md5sum, err := streamPkg.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return nil, err
}
defer tmpFile.Close()
parentID := d.resolveParentID(dstDir)
parentPath := ""
if dstDir != nil {
parentPath = dstDir.GetPath()
}
form := map[string]string{
"parentId": parentID,
"name": file.GetName(),
"size": strconv.FormatInt(file.GetSize(), 10),
"hash": md5sum,
"sampleMd5": md5sum,
"org_channel": orgChannel,
}
var resp Response[json.RawMessage]
if err = d.postForm(ctx, uploadInitializeURL, form, &resp); err != nil {
return nil, err
}
if resp.Code != uploadSuccessCode {
switch resp.Code {
case successCode:
var initData UploadInitData
if err := json.Unmarshal(resp.Data, &initData); err != nil {
return nil, fmt.Errorf("parse upload init response failed: %w", err)
}
serverCode, err := d.uploadFileInChunks(ctx, tmpFile, file.GetSize(), md5sum, initData, up)
if err != nil {
return nil, err
}
obj, err := d.completeChunkUpload(ctx, initData, parentID, parentPath, file.GetName(), file.GetSize(), md5sum, serverCode)
if err != nil {
return nil, err
}
up(100)
return obj, nil
default:
return nil, fmt.Errorf("upload failed: %s", resp.Message)
}
}
var resource Resource
if err := json.Unmarshal(resp.Data, &resource); err != nil {
return nil, fmt.Errorf("parse upload response failed: %w", err)
}
obj, err := resource.toObject(parentID, parentPath)
if err != nil {
return nil, err
}
up(100)
return obj, nil
}
func (d *BitQiu) uploadFileInChunks(ctx context.Context, tmpFile model.File, size int64, md5sum string, initData UploadInitData, up driver.UpdateProgress) (string, error) {
if d.client == nil {
return "", fmt.Errorf("client not initialized")
}
if size <= 0 {
return "", fmt.Errorf("invalid file size")
}
buf := make([]byte, chunkSize)
offset := int64(0)
var finishedFlag string
for offset < size {
chunkLen := chunkSize
remaining := size - offset
if remaining < chunkLen {
chunkLen = remaining
}
reader := io.NewSectionReader(tmpFile, offset, chunkLen)
chunkBuf := buf[:chunkLen]
if _, err := io.ReadFull(reader, chunkBuf); err != nil {
return "", fmt.Errorf("read chunk failed: %w", err)
}
headers := map[string]string{
"accept": "*/*",
"content-type": "application/octet-stream",
"appid": initData.AppID,
"token": initData.Token,
"userid": strconv.FormatInt(initData.UserID, 10),
"serialnumber": initData.SerialNumber,
"hash": md5sum,
"len": strconv.FormatInt(chunkLen, 10),
"offset": strconv.FormatInt(offset, 10),
"user-agent": d.userAgent(),
}
var chunkResp ChunkUploadResponse
req := d.client.R().
SetContext(ctx).
SetHeaders(headers).
SetBody(chunkBuf).
SetResult(&chunkResp)
if _, err := req.Post(initData.UploadURL); err != nil {
return "", err
}
if chunkResp.ErrCode != 0 {
return "", fmt.Errorf("chunk upload failed with code %d", chunkResp.ErrCode)
}
finishedFlag = chunkResp.FinishedFlag
offset += chunkLen
up(float64(offset) * 100 / float64(size))
}
if finishedFlag == "" {
return "", fmt.Errorf("upload finished without server code")
}
return finishedFlag, nil
}
func (d *BitQiu) completeChunkUpload(ctx context.Context, initData UploadInitData, parentID, parentPath, name string, size int64, md5sum, serverCode string) (model.Obj, error) {
form := map[string]string{
"currentPage": "1",
"limit": "1",
"userId": strconv.FormatInt(initData.UserID, 10),
"status": "0",
"parentId": parentID,
"name": name,
"fileUid": initData.FileUID,
"fileSid": initData.FileSID,
"size": strconv.FormatInt(size, 10),
"serverCode": serverCode,
"snapTime": "",
"hash": md5sum,
"sampleMd5": md5sum,
"org_channel": orgChannel,
}
var resp Response[Resource]
if err := d.postForm(ctx, uploadCompleteURL, form, &resp); err != nil {
return nil, err
}
if resp.Code != successCode {
return nil, fmt.Errorf("complete upload failed: %s", resp.Message)
}
return resp.Data.toObject(parentID, parentPath)
}
func (d *BitQiu) login(ctx context.Context) error {
if d.client == nil {
return fmt.Errorf("client not initialized")
}
form := map[string]string{
"passport": d.Username,
"password": utils.GetMD5EncodeStr(d.Password),
"remember": "0",
"captcha": "",
"org_channel": orgChannel,
}
var resp Response[LoginData]
if err := d.postForm(ctx, loginURL, form, &resp); err != nil {
return err
}
if resp.Code != successCode {
return fmt.Errorf("login failed: %s", resp.Message)
}
d.userID = strconv.FormatInt(resp.Data.UserID, 10)
return d.ensureRootFolderID(ctx)
}
func (d *BitQiu) ensureRootFolderID(ctx context.Context) error {
rootID := d.Addition.GetRootId()
if rootID != "" && rootID != "0" {
return nil
}
form := map[string]string{
"org_channel": orgChannel,
}
var resp Response[UserInfoData]
if err := d.postForm(ctx, userInfoURL, form, &resp); err != nil {
return err
}
if resp.Code != successCode {
return fmt.Errorf("get user info failed: %s", resp.Message)
}
if resp.Data.RootDirID == "" {
return fmt.Errorf("get user info failed: empty root dir id")
}
if d.Addition.RootFolderID != resp.Data.RootDirID {
d.Addition.RootFolderID = resp.Data.RootDirID
op.MustSaveDriverStorage(d)
}
return nil
}
func (d *BitQiu) postForm(ctx context.Context, url string, form map[string]string, result interface{}) error {
if d.client == nil {
return fmt.Errorf("client not initialized")
}
req := d.client.R().
SetContext(ctx).
SetHeaders(d.commonHeaders()).
SetFormData(form)
if result != nil {
req = req.SetResult(result)
}
_, err := req.Post(url)
return err
}
func (d *BitQiu) waitForCopiedObject(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
expectedName := srcObj.GetName()
expectedIsDir := srcObj.IsDir()
var lastListErr error
for attempt := 0; attempt < copyPollMaxAttempts; attempt++ {
if attempt > 0 {
if err := waitWithContext(ctx, copyPollInterval); err != nil {
return nil, err
}
}
if err := d.checkCopyFailure(ctx); err != nil {
return nil, err
}
obj, err := d.findObjectInDir(ctx, dstDir, expectedName, expectedIsDir)
if err != nil {
lastListErr = err
continue
}
if obj != nil {
return obj, nil
}
}
if lastListErr != nil {
return nil, lastListErr
}
return nil, fmt.Errorf("copy task timed out waiting for completion")
}
func (d *BitQiu) checkCopyFailure(ctx context.Context) error {
form := map[string]string{
"org_channel": orgChannel,
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[AsyncManagerData]
if err := d.postForm(ctx, copyManagerURL, form, &resp); err != nil {
return err
}
switch resp.Code {
case successCode:
if len(resp.Data.FailTasks) > 0 {
return fmt.Errorf("copy failed: %s", resp.Data.FailTasks[0].ErrorMessage())
}
return nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return err
}
default:
return fmt.Errorf("query copy status failed: %s", resp.Message)
}
}
return fmt.Errorf("query copy status failed: retry limit reached")
}
func (d *BitQiu) findObjectInDir(ctx context.Context, dir model.Obj, name string, isDir bool) (model.Obj, error) {
objs, err := d.List(ctx, dir, model.ListArgs{})
if err != nil {
return nil, err
}
for _, obj := range objs {
if obj.GetName() == name && obj.IsDir() == isDir {
return obj, nil
}
}
return nil, nil
}
func waitWithContext(ctx context.Context, d time.Duration) error {
timer := time.NewTimer(d)
defer timer.Stop()
select {
case <-ctx.Done():
return ctx.Err()
case <-timer.C:
return nil
}
}
func (d *BitQiu) commonHeaders() map[string]string {
headers := map[string]string{
"accept": "application/json, text/plain, */*",
"accept-language": "en-US,en;q=0.9",
"cache-control": "no-cache",
"pragma": "no-cache",
"user-platform": d.Addition.UserPlatform,
"x-kl-saas-ajax-request": "Ajax_Request",
"x-requested-with": "XMLHttpRequest",
"referer": baseURL + "/",
"origin": baseURL,
"user-agent": d.userAgent(),
}
return headers
}
func (d *BitQiu) userAgent() string {
if ua := strings.TrimSpace(d.Addition.UserAgent); ua != "" {
return ua
}
return defaultUserAgent
}
func (d *BitQiu) resolveParentID(dir model.Obj) string {
if dir != nil && dir.GetID() != "" {
return dir.GetID()
}
if root := d.Addition.GetRootId(); root != "" {
return root
}
return config.DefaultRoot
}
func (d *BitQiu) pageSize() int {
if size, err := strconv.Atoi(d.Addition.PageSize); err == nil && size > 0 {
return size
}
return 24
}
func (d *BitQiu) orderType() string {
if d.Addition.OrderType != "" {
return d.Addition.OrderType
}
return "updateTime"
}
func (d *BitQiu) orderDesc() string {
if d.Addition.OrderDesc {
return "1"
}
return "0"
}
var _ driver.Driver = (*BitQiu)(nil)
var _ driver.PutResult = (*BitQiu)(nil)

drivers/bitqiu/meta.go (new file)

@@ -0,0 +1,28 @@
package bitqiu
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootID
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
UserPlatform string `json:"user_platform" help:"Optional device identifier; auto-generated if empty."`
OrderType string `json:"order_type" type:"select" options:"updateTime,createTime,name,size" default:"updateTime"`
OrderDesc bool `json:"order_desc"`
PageSize string `json:"page_size" default:"24" help:"Number of entries to request per page."`
UserAgent string `json:"user_agent" default:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36"`
}
var config = driver.Config{
Name: "BitQiu",
DefaultRoot: "0",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &BitQiu{}
})
}

drivers/bitqiu/types.go (new file)

@@ -0,0 +1,107 @@
package bitqiu
import "encoding/json"
type Response[T any] struct {
Code string `json:"code"`
Message string `json:"message"`
Data T `json:"data"`
}
type LoginData struct {
UserID int64 `json:"userId"`
}
type ResourcePage struct {
CurrentPage int `json:"currentPage"`
PageSize int `json:"pageSize"`
TotalCount int `json:"totalCount"`
TotalPageCount int `json:"totalPageCount"`
Data []Resource `json:"data"`
HasNext bool `json:"hasNext"`
}
type Resource struct {
ResourceID string `json:"resourceId"`
ResourceUID string `json:"resourceUid"`
ResourceType int `json:"resourceType"`
ParentID string `json:"parentId"`
Name string `json:"name"`
ExtName string `json:"extName"`
Size *json.Number `json:"size"`
CreateTime *string `json:"createTime"`
UpdateTime *string `json:"updateTime"`
FileMD5 string `json:"fileMd5"`
}
type DownloadData struct {
URL string `json:"url"`
MD5 string `json:"md5"`
Size int64 `json:"size"`
}
type UserInfoData struct {
RootDirID string `json:"rootDirId"`
}
type CreateDirData struct {
DirID string `json:"dirId"`
Name string `json:"name"`
ParentID string `json:"parentId"`
}
type AsyncManagerData struct {
WaitTasks []AsyncTask `json:"waitTaskList"`
RunningTasks []AsyncTask `json:"runningTaskList"`
SuccessTasks []AsyncTask `json:"successTaskList"`
FailTasks []AsyncTask `json:"failTaskList"`
TaskList []AsyncTask `json:"taskList"`
}
type AsyncTask struct {
TaskID string `json:"taskId"`
Status int `json:"status"`
ErrorMsg string `json:"errorMsg"`
Message string `json:"message"`
Result *AsyncTaskInfo `json:"result"`
TargetName string `json:"targetName"`
TargetDirID string `json:"parentId"`
}
type AsyncTaskInfo struct {
Resource Resource `json:"resource"`
DirID string `json:"dirId"`
FileID string `json:"fileId"`
Name string `json:"name"`
ParentID string `json:"parentId"`
}
func (t AsyncTask) ErrorMessage() string {
if t.ErrorMsg != "" {
return t.ErrorMsg
}
if t.Message != "" {
return t.Message
}
return "unknown error"
}
type UploadInitData struct {
Name string `json:"name"`
Size int64 `json:"size"`
Token string `json:"token"`
FileUID string `json:"fileUid"`
FileSID string `json:"fileSid"`
ParentID string `json:"parentId"`
UserID int64 `json:"userId"`
SerialNumber string `json:"serialNumber"`
UploadURL string `json:"uploadUrl"`
AppID string `json:"appId"`
}
type ChunkUploadResponse struct {
ErrCode int `json:"errCode"`
Offset int64 `json:"offset"`
Finished int `json:"finished"`
FinishedFlag string `json:"finishedFlag"`
}

drivers/bitqiu/util.go (new file)

@@ -0,0 +1,102 @@
package bitqiu
import (
"path"
"strings"
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
)
type Object struct {
model.Object
ParentID string
}
func (r Resource) toObject(parentID, parentPath string) (model.Obj, error) {
id := r.ResourceID
if id == "" {
id = r.ResourceUID
}
obj := &Object{
Object: model.Object{
ID: id,
Name: r.Name,
IsFolder: r.ResourceType == 1,
},
ParentID: parentID,
}
if r.Size != nil {
if size, err := (*r.Size).Int64(); err == nil {
obj.Size = size
}
}
if ct := parseBitQiuTime(r.CreateTime); !ct.IsZero() {
obj.Ctime = ct
}
if mt := parseBitQiuTime(r.UpdateTime); !mt.IsZero() {
obj.Modified = mt
}
if r.FileMD5 != "" {
obj.HashInfo = utils.NewHashInfo(utils.MD5, strings.ToLower(r.FileMD5))
}
obj.SetPath(path.Join(parentPath, obj.Name))
return obj, nil
}
func parseBitQiuTime(value *string) time.Time {
if value == nil {
return time.Time{}
}
trimmed := strings.TrimSpace(*value)
if trimmed == "" {
return time.Time{}
}
if ts, err := time.ParseInLocation("2006-01-02 15:04:05", trimmed, time.Local); err == nil {
return ts
}
return time.Time{}
}
func updateObjectName(obj model.Obj, newName string) model.Obj {
newPath := path.Join(parentPathOf(obj.GetPath()), newName)
switch o := obj.(type) {
case *Object:
o.Name = newName
o.Object.Name = newName
o.SetPath(newPath)
return o
case *model.Object:
o.Name = newName
o.SetPath(newPath)
return o
}
if setter, ok := obj.(model.SetPath); ok {
setter.SetPath(newPath)
}
return &model.Object{
ID: obj.GetID(),
Path: newPath,
Name: newName,
Size: obj.GetSize(),
Modified: obj.ModTime(),
Ctime: obj.CreateTime(),
IsFolder: obj.IsDir(),
HashInfo: obj.GetHash(),
}
}
func parentPathOf(p string) string {
if p == "" {
return ""
}
dir := path.Dir(p)
if dir == "." {
return ""
}
return dir
}

drivers/gitee/driver.go (new file)

@@ -0,0 +1,224 @@
package gitee
import (
"context"
"errors"
"fmt"
"net/http"
"net/url"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type Gitee struct {
model.Storage
Addition
client *resty.Client
}
func (d *Gitee) Config() driver.Config {
return config
}
func (d *Gitee) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Gitee) Init(ctx context.Context) error {
d.RootFolderPath = utils.FixAndCleanPath(d.RootFolderPath)
d.Endpoint = strings.TrimSpace(d.Endpoint)
if d.Endpoint == "" {
d.Endpoint = "https://gitee.com/api/v5"
}
d.Endpoint = strings.TrimSuffix(d.Endpoint, "/")
d.Owner = strings.TrimSpace(d.Owner)
d.Repo = strings.TrimSpace(d.Repo)
d.Token = strings.TrimSpace(d.Token)
d.DownloadProxy = strings.TrimSpace(d.DownloadProxy)
if d.Owner == "" || d.Repo == "" {
return errors.New("owner and repo are required")
}
d.client = base.NewRestyClient().
SetBaseURL(d.Endpoint).
SetHeader("Accept", "application/json")
repo, err := d.getRepo()
if err != nil {
return err
}
d.Ref = strings.TrimSpace(d.Ref)
if d.Ref == "" {
d.Ref = repo.DefaultBranch
}
return nil
}
func (d *Gitee) Drop(ctx context.Context) error {
return nil
}
func (d *Gitee) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
relPath := d.relativePath(dir.GetPath())
contents, err := d.listContents(relPath)
if err != nil {
return nil, err
}
objs := make([]model.Obj, 0, len(contents))
for i := range contents {
objs = append(objs, contents[i].toModelObj())
}
return objs, nil
}
func (d *Gitee) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var downloadURL string
if obj, ok := file.(*Object); ok {
downloadURL = obj.DownloadURL
if downloadURL == "" {
relPath := d.relativePath(file.GetPath())
content, err := d.getContent(relPath)
if err != nil {
return nil, err
}
if content.DownloadURL == "" {
return nil, errors.New("empty download url")
}
obj.DownloadURL = content.DownloadURL
downloadURL = content.DownloadURL
}
} else {
relPath := d.relativePath(file.GetPath())
content, err := d.getContent(relPath)
if err != nil {
return nil, err
}
if content.DownloadURL == "" {
return nil, errors.New("empty download url")
}
downloadURL = content.DownloadURL
}
url := d.applyProxy(downloadURL)
return &model.Link{
URL: url,
Header: http.Header{
"Cookie": {d.Cookie},
},
}, nil
}
func (d *Gitee) newRequest() *resty.Request {
req := d.client.R()
if d.Token != "" {
req.SetQueryParam("access_token", d.Token)
}
if d.Ref != "" {
req.SetQueryParam("ref", d.Ref)
}
return req
}
func (d *Gitee) apiPath(path string) string {
escapedOwner := url.PathEscape(d.Owner)
escapedRepo := url.PathEscape(d.Repo)
if path == "" {
return fmt.Sprintf("/repos/%s/%s/contents", escapedOwner, escapedRepo)
}
return fmt.Sprintf("/repos/%s/%s/contents/%s", escapedOwner, escapedRepo, encodePath(path))
}
func (d *Gitee) listContents(path string) ([]Content, error) {
res, err := d.newRequest().Get(d.apiPath(path))
if err != nil {
return nil, err
}
if res.IsError() {
return nil, toErr(res)
}
var contents []Content
if err := utils.Json.Unmarshal(res.Body(), &contents); err != nil {
var single Content
if err2 := utils.Json.Unmarshal(res.Body(), &single); err2 == nil && single.Type != "" {
if single.Type != "dir" {
return nil, errs.NotFolder
}
return []Content{}, nil
}
return nil, err
}
for i := range contents {
contents[i].Path = joinPath(path, contents[i].Name)
}
return contents, nil
}
func (d *Gitee) getContent(path string) (*Content, error) {
res, err := d.newRequest().Get(d.apiPath(path))
if err != nil {
return nil, err
}
if res.IsError() {
return nil, toErr(res)
}
var content Content
if err := utils.Json.Unmarshal(res.Body(), &content); err != nil {
return nil, err
}
if content.Type == "" {
return nil, errors.New("invalid response")
}
if content.Path == "" {
content.Path = path
}
return &content, nil
}
func (d *Gitee) relativePath(full string) string {
full = utils.FixAndCleanPath(full)
root := utils.FixAndCleanPath(d.RootFolderPath)
if root == "/" {
return strings.TrimPrefix(full, "/")
}
if utils.PathEqual(full, root) {
return ""
}
prefix := utils.PathAddSeparatorSuffix(root)
if strings.HasPrefix(full, prefix) {
return strings.TrimPrefix(full, prefix)
}
return strings.TrimPrefix(full, "/")
}
func (d *Gitee) applyProxy(raw string) string {
if raw == "" || d.DownloadProxy == "" {
return raw
}
proxy := d.DownloadProxy
if !strings.HasSuffix(proxy, "/") {
proxy += "/"
}
return proxy + strings.TrimLeft(raw, "/")
}
func encodePath(p string) string {
if p == "" {
return ""
}
parts := strings.Split(p, "/")
for i, part := range parts {
parts[i] = url.PathEscape(part)
}
return strings.Join(parts, "/")
}
func joinPath(base, name string) string {
if base == "" {
return name
}
return strings.TrimPrefix(stdpath.Join(base, name), "./")
}

drivers/gitee/meta.go (new file)

@@ -0,0 +1,29 @@
package gitee
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootPath
Endpoint string `json:"endpoint" type:"string" help:"Gitee API endpoint, default https://gitee.com/api/v5"`
Token string `json:"token" type:"string"`
Owner string `json:"owner" type:"string" required:"true"`
Repo string `json:"repo" type:"string" required:"true"`
Ref string `json:"ref" type:"string" help:"Branch, tag or commit SHA, defaults to repository default branch"`
DownloadProxy string `json:"download_proxy" type:"string" help:"Prefix added before download URLs, e.g. https://mirror.example.com/"`
Cookie string `json:"cookie" type:"string" help:"Cookie returned from user info request"`
}
var config = driver.Config{
Name: "Gitee",
LocalSort: true,
DefaultRoot: "/",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Gitee{}
})
}

drivers/gitee/types.go (new file)

@@ -0,0 +1,60 @@
package gitee
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type Links struct {
Self string `json:"self"`
Html string `json:"html"`
}
type Content struct {
Type string `json:"type"`
Size *int64 `json:"size"`
Name string `json:"name"`
Path string `json:"path"`
Sha string `json:"sha"`
URL string `json:"url"`
HtmlURL string `json:"html_url"`
DownloadURL string `json:"download_url"`
Links Links `json:"_links"`
}
func (c Content) toModelObj() model.Obj {
size := int64(0)
if c.Size != nil {
size = *c.Size
}
return &Object{
Object: model.Object{
ID: c.Path,
Name: c.Name,
Size: size,
Modified: time.Unix(0, 0),
IsFolder: c.Type == "dir",
},
DownloadURL: c.DownloadURL,
HtmlURL: c.HtmlURL,
}
}
type Object struct {
model.Object
DownloadURL string
HtmlURL string
}
func (o *Object) URL() string {
return o.DownloadURL
}
type Repo struct {
DefaultBranch string `json:"default_branch"`
}
type ErrResp struct {
Message string `json:"message"`
}

drivers/gitee/util.go (new file)

@@ -0,0 +1,44 @@
package gitee
import (
"fmt"
"net/url"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
func (d *Gitee) getRepo() (*Repo, error) {
req := d.client.R()
if d.Token != "" {
req.SetQueryParam("access_token", d.Token)
}
if d.Cookie != "" {
req.SetHeader("Cookie", d.Cookie)
}
escapedOwner := url.PathEscape(d.Owner)
escapedRepo := url.PathEscape(d.Repo)
res, err := req.Get(fmt.Sprintf("/repos/%s/%s", escapedOwner, escapedRepo))
if err != nil {
return nil, err
}
if res.IsError() {
return nil, toErr(res)
}
var repo Repo
if err := utils.Json.Unmarshal(res.Body(), &repo); err != nil {
return nil, err
}
if repo.DefaultBranch == "" {
return nil, fmt.Errorf("failed to fetch default branch")
}
return &repo, nil
}
func toErr(res *resty.Response) error {
var errMsg ErrResp
if err := utils.Json.Unmarshal(res.Body(), &errMsg); err == nil && errMsg.Message != "" {
return fmt.Errorf("%s: %s", res.Status(), errMsg.Message)
}
return fmt.Errorf("%s", res.Status())
}

drivers/gofile/driver.go (new file)

@@ -0,0 +1,271 @@
package gofile
import (
"context"
"fmt"
"time"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
)
type Gofile struct {
model.Storage
Addition
accountId string
}
func (d *Gofile) Config() driver.Config {
return config
}
func (d *Gofile) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Gofile) Init(ctx context.Context) error {
if d.APIToken == "" {
return fmt.Errorf("API token is required")
}
// Get account ID
accountId, err := d.getAccountId(ctx)
if err != nil {
return fmt.Errorf("failed to get account ID: %w", err)
}
d.accountId = accountId
// Get account info to set root folder if not specified
if d.RootFolderID == "" {
accountInfo, err := d.getAccountInfo(ctx, accountId)
if err != nil {
return fmt.Errorf("failed to get account info: %w", err)
}
d.RootFolderID = accountInfo.Data.RootFolder
}
// Save driver storage
op.MustSaveDriverStorage(d)
return nil
}
func (d *Gofile) Drop(ctx context.Context) error {
return nil
}
func (d *Gofile) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var folderId string
if dir.GetID() == "" {
folderId = d.GetRootId()
} else {
folderId = dir.GetID()
}
endpoint := fmt.Sprintf("/contents/%s", folderId)
var response ContentsResponse
err := d.getJSON(ctx, endpoint, &response)
if err != nil {
return nil, err
}
var objects []model.Obj
// Process children or contents
contents := response.Data.Children
if contents == nil {
contents = response.Data.Contents
}
for _, content := range contents {
objects = append(objects, d.convertContentToObj(content))
}
return objects, nil
}
func (d *Gofile) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if file.IsDir() {
return nil, errs.NotFile
}
// Create a direct link for the file
directLink, err := d.createDirectLink(ctx, file.GetID())
if err != nil {
return nil, fmt.Errorf("failed to create direct link: %w", err)
}
// Configure cache expiration based on user setting
link := &model.Link{
URL: directLink,
}
// Only set expiration if LinkExpiry > 0 (0 means no caching)
if d.LinkExpiry > 0 {
expiration := time.Duration(d.LinkExpiry) * 24 * time.Hour
link.Expiration = &expiration
}
return link, nil
}
func (d *Gofile) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
var parentId string
if parentDir.GetID() == "" {
parentId = d.GetRootId()
} else {
parentId = parentDir.GetID()
}
data := map[string]interface{}{
"parentFolderId": parentId,
"folderName": dirName,
}
var response CreateFolderResponse
err := d.postJSON(ctx, "/contents/createFolder", data, &response)
if err != nil {
return nil, err
}
return &model.Object{
ID: response.Data.ID,
Name: response.Data.Name,
IsFolder: true,
}, nil
}
func (d *Gofile) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
var dstId string
if dstDir.GetID() == "" {
dstId = d.GetRootId()
} else {
dstId = dstDir.GetID()
}
data := map[string]interface{}{
"contentsId": srcObj.GetID(),
"folderId": dstId,
}
err := d.putJSON(ctx, "/contents/move", data, nil)
if err != nil {
return nil, err
}
// Return updated object
return &model.Object{
ID: srcObj.GetID(),
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
IsFolder: srcObj.IsDir(),
}, nil
}
func (d *Gofile) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
data := map[string]interface{}{
"attribute": "name",
"attributeValue": newName,
}
var response UpdateResponse
err := d.putJSON(ctx, fmt.Sprintf("/contents/%s/update", srcObj.GetID()), data, &response)
if err != nil {
return nil, err
}
return &model.Object{
ID: srcObj.GetID(),
Name: newName,
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
IsFolder: srcObj.IsDir(),
}, nil
}
func (d *Gofile) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
var dstId string
if dstDir.GetID() == "" {
dstId = d.GetRootId()
} else {
dstId = dstDir.GetID()
}
data := map[string]interface{}{
"contentsId": srcObj.GetID(),
"folderId": dstId,
}
var response CopyResponse
err := d.postJSON(ctx, "/contents/copy", data, &response)
if err != nil {
return nil, err
}
// Get the new ID from the response
newId := srcObj.GetID()
if response.Data.CopiedContents != nil {
if id, ok := response.Data.CopiedContents[srcObj.GetID()]; ok {
newId = id
}
}
return &model.Object{
ID: newId,
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
IsFolder: srcObj.IsDir(),
}, nil
}
func (d *Gofile) Remove(ctx context.Context, obj model.Obj) error {
data := map[string]interface{}{
"contentsId": obj.GetID(),
}
return d.deleteJSON(ctx, "/contents", data)
}
func (d *Gofile) Put(ctx context.Context, dstDir model.Obj, fileStreamer model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
var folderId string
if dstDir.GetID() == "" {
folderId = d.GetRootId()
} else {
folderId = dstDir.GetID()
}
response, err := d.uploadFile(ctx, folderId, fileStreamer, up)
if err != nil {
return nil, err
}
return &model.Object{
ID: response.Data.FileId,
Name: response.Data.FileName,
Size: fileStreamer.GetSize(),
IsFolder: false,
}, nil
}
func (d *Gofile) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
return nil, errs.NotImplement
}
func (d *Gofile) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
return nil, errs.NotImplement
}
func (d *Gofile) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
return nil, errs.NotImplement
}
func (d *Gofile) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
return nil, errs.NotImplement
}
var _ driver.Driver = (*Gofile)(nil)

drivers/gofile/meta.go (new file)

@@ -0,0 +1,28 @@
package gofile
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootID
APIToken string `json:"api_token" required:"true" help:"Get your API token from your Gofile profile page"`
LinkExpiry int `json:"link_expiry" type:"number" default:"30" help:"Direct link cache duration in days. Set to 0 to disable caching"`
DirectLinkExpiry int `json:"direct_link_expiry" type:"number" default:"0" help:"Direct link expiration time in hours on Gofile server. Set to 0 for no expiration"`
}
var config = driver.Config{
Name: "Gofile",
DefaultRoot: "",
LocalSort: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Gofile{}
})
}

drivers/gofile/types.go (new file)

@@ -0,0 +1,124 @@
package gofile
import "time"
type APIResponse struct {
Status string `json:"status"`
Data interface{} `json:"data"`
}
type AccountResponse struct {
Status string `json:"status"`
Data struct {
ID string `json:"id"`
} `json:"data"`
}
type AccountInfoResponse struct {
Status string `json:"status"`
Data struct {
ID string `json:"id"`
Type string `json:"type"`
Email string `json:"email"`
RootFolder string `json:"rootFolder"`
} `json:"data"`
}
type Content struct {
ID string `json:"id"`
Type string `json:"type"` // "file" or "folder"
Name string `json:"name"`
Size int64 `json:"size,omitempty"`
CreateTime int64 `json:"createTime"`
ModTime int64 `json:"modTime,omitempty"`
DirectLink string `json:"directLink,omitempty"`
Children map[string]Content `json:"children,omitempty"`
ParentFolder string `json:"parentFolder,omitempty"`
MD5 string `json:"md5,omitempty"`
MimeType string `json:"mimeType,omitempty"`
Link string `json:"link,omitempty"`
}
type ContentsResponse struct {
Status string `json:"status"`
Data struct {
IsOwner bool `json:"isOwner"`
ID string `json:"id"`
Type string `json:"type"`
Name string `json:"name"`
ParentFolder string `json:"parentFolder"`
CreateTime int64 `json:"createTime"`
ChildrenList []string `json:"childrenList,omitempty"`
Children map[string]Content `json:"children,omitempty"`
Contents map[string]Content `json:"contents,omitempty"`
Public bool `json:"public,omitempty"`
Description string `json:"description,omitempty"`
Tags string `json:"tags,omitempty"`
Expiry int64 `json:"expiry,omitempty"`
} `json:"data"`
}
type UploadResponse struct {
Status string `json:"status"`
Data struct {
DownloadPage string `json:"downloadPage"`
Code string `json:"code"`
ParentFolder string `json:"parentFolder"`
FileId string `json:"fileId"`
FileName string `json:"fileName"`
GuestToken string `json:"guestToken,omitempty"`
} `json:"data"`
}
type DirectLinkResponse struct {
Status string `json:"status"`
Data struct {
DirectLink string `json:"directLink"`
ID string `json:"id"`
} `json:"data"`
}
type CreateFolderResponse struct {
Status string `json:"status"`
Data struct {
ID string `json:"id"`
Type string `json:"type"`
Name string `json:"name"`
ParentFolder string `json:"parentFolder"`
CreateTime int64 `json:"createTime"`
} `json:"data"`
}
type CopyResponse struct {
Status string `json:"status"`
Data struct {
CopiedContents map[string]string `json:"copiedContents"` // oldId -> newId mapping
} `json:"data"`
}
type UpdateResponse struct {
Status string `json:"status"`
Data struct {
ID string `json:"id"`
Name string `json:"name"`
} `json:"data"`
}
type ErrorResponse struct {
Status string `json:"status"`
Error struct {
Message string `json:"message"`
Code string `json:"code"`
} `json:"error"`
}
func (c *Content) ModifiedTime() time.Time {
if c.ModTime > 0 {
return time.Unix(c.ModTime, 0)
}
return time.Unix(c.CreateTime, 0)
}
func (c *Content) IsDir() bool {
return c.Type == "folder"
}

drivers/gofile/util.go (new file)

@@ -0,0 +1,265 @@
package gofile
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"mime/multipart"
"net/http"
"path/filepath"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
log "github.com/sirupsen/logrus"
)
const (
baseAPI = "https://api.gofile.io"
uploadAPI = "https://upload.gofile.io"
)
func (d *Gofile) request(ctx context.Context, method, endpoint string, body io.Reader, headers map[string]string) (*http.Response, error) {
var url string
if strings.HasPrefix(endpoint, "http") {
url = endpoint
} else {
url = baseAPI + endpoint
}
req, err := http.NewRequestWithContext(ctx, method, url, body)
if err != nil {
return nil, err
}
req.Header.Set("Authorization", "Bearer "+d.APIToken)
req.Header.Set("User-Agent", "AList/3.0")
for k, v := range headers {
req.Header.Set(k, v)
}
return base.HttpClient.Do(req)
}
func (d *Gofile) getJSON(ctx context.Context, endpoint string, result interface{}) error {
resp, err := d.request(ctx, "GET", endpoint, nil, nil)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return d.handleError(resp)
}
return json.NewDecoder(resp.Body).Decode(result)
}
func (d *Gofile) postJSON(ctx context.Context, endpoint string, data interface{}, result interface{}) error {
jsonData, err := json.Marshal(data)
if err != nil {
return err
}
headers := map[string]string{
"Content-Type": "application/json",
}
resp, err := d.request(ctx, "POST", endpoint, bytes.NewBuffer(jsonData), headers)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return d.handleError(resp)
}
if result != nil {
return json.NewDecoder(resp.Body).Decode(result)
}
return nil
}
func (d *Gofile) putJSON(ctx context.Context, endpoint string, data interface{}, result interface{}) error {
jsonData, err := json.Marshal(data)
if err != nil {
return err
}
headers := map[string]string{
"Content-Type": "application/json",
}
resp, err := d.request(ctx, "PUT", endpoint, bytes.NewBuffer(jsonData), headers)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return d.handleError(resp)
}
if result != nil {
return json.NewDecoder(resp.Body).Decode(result)
}
return nil
}
func (d *Gofile) deleteJSON(ctx context.Context, endpoint string, data interface{}) error {
jsonData, err := json.Marshal(data)
if err != nil {
return err
}
headers := map[string]string{
"Content-Type": "application/json",
}
resp, err := d.request(ctx, "DELETE", endpoint, bytes.NewBuffer(jsonData), headers)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return d.handleError(resp)
}
return nil
}
func (d *Gofile) handleError(resp *http.Response) error {
body, _ := io.ReadAll(resp.Body)
log.Debugf("Gofile API error (HTTP %d): %s", resp.StatusCode, string(body))
var errorResp ErrorResponse
if err := json.Unmarshal(body, &errorResp); err == nil && errorResp.Status == "error" {
return fmt.Errorf("gofile API error: %s (code: %s)", errorResp.Error.Message, errorResp.Error.Code)
}
return fmt.Errorf("gofile API error: HTTP %d - %s", resp.StatusCode, string(body))
}
func (d *Gofile) uploadFile(ctx context.Context, folderId string, file model.FileStreamer, up driver.UpdateProgress) (*UploadResponse, error) {
var body bytes.Buffer
writer := multipart.NewWriter(&body)
if folderId != "" {
writer.WriteField("folderId", folderId)
}
part, err := writer.CreateFormFile("file", filepath.Base(file.GetName()))
if err != nil {
return nil, err
}
// Copy with progress tracking if available
if up != nil {
reader := &progressReader{
reader: file,
total: file.GetSize(),
up: up,
}
_, err = io.Copy(part, reader)
} else {
_, err = io.Copy(part, file)
}
if err != nil {
return nil, err
}
writer.Close()
headers := map[string]string{
"Content-Type": writer.FormDataContentType(),
}
resp, err := d.request(ctx, "POST", uploadAPI+"/uploadfile", &body, headers)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, d.handleError(resp)
}
var result UploadResponse
err = json.NewDecoder(resp.Body).Decode(&result)
return &result, err
}
func (d *Gofile) createDirectLink(ctx context.Context, contentId string) (string, error) {
data := map[string]interface{}{}
if d.DirectLinkExpiry > 0 {
expireTime := time.Now().Add(time.Duration(d.DirectLinkExpiry) * time.Hour).Unix()
data["expireTime"] = expireTime
}
var result DirectLinkResponse
err := d.postJSON(ctx, fmt.Sprintf("/contents/%s/directlinks", contentId), data, &result)
if err != nil {
return "", err
}
return result.Data.DirectLink, nil
}
func (d *Gofile) convertContentToObj(content Content) model.Obj {
return &model.ObjThumb{
Object: model.Object{
ID: content.ID,
Name: content.Name,
Size: content.Size,
Modified: content.ModifiedTime(),
IsFolder: content.IsDir(),
},
}
}
func (d *Gofile) getAccountId(ctx context.Context) (string, error) {
var result AccountResponse
err := d.getJSON(ctx, "/accounts/getid", &result)
if err != nil {
return "", err
}
return result.Data.ID, nil
}
func (d *Gofile) getAccountInfo(ctx context.Context, accountId string) (*AccountInfoResponse, error) {
var result AccountInfoResponse
err := d.getJSON(ctx, fmt.Sprintf("/accounts/%s", accountId), &result)
if err != nil {
return nil, err
}
return &result, nil
}
// progressReader wraps an io.Reader to track upload progress
type progressReader struct {
reader io.Reader
total int64
read int64
up driver.UpdateProgress
}
func (pr *progressReader) Read(p []byte) (n int, err error) {
n, err = pr.reader.Read(p)
pr.read += int64(n)
if pr.up != nil && pr.total > 0 {
progress := float64(pr.read) * 100 / float64(pr.total)
pr.up(progress)
}
return n, err
}


@@ -94,6 +94,7 @@ func RemoveJSComment(data string) string {
}
if inComment && v == '*' && i+1 < len(data) && data[i+1] == '/' {
inComment = false
+ i++
continue
}
if v == '/' && i+1 < len(data) {
@@ -108,6 +109,9 @@ func RemoveJSComment(data string) string {
continue
}
}
+ if inComment || inSingleLineComment {
+ continue
+ }
result.WriteByte(v)
}


@@ -430,17 +430,35 @@ func (d *LanZou) getFilesByShareUrl(shareID, pwd string, sharePageData string) (
file.Time = timeFindReg.FindString(sharePageData)
// follow the redirect to obtain the real link
- res, err := base.NoRedirectClient.R().SetHeaders(map[string]string{
+ headers := map[string]string{
"accept-language": "zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6",
- }).Get(downloadUrl)
+ }
+ res, err := base.NoRedirectClient.R().SetHeaders(headers).Get(downloadUrl)
if err != nil {
return nil, err
}
+ rPageData := res.String()
+ if findAcwScV2Reg.MatchString(rPageData) {
+ log.Debug("lanzou: detected acw_sc__v2 challenge, recalculating cookie")
+ acwScV2, err := CalcAcwScV2(rPageData)
+ if err != nil {
+ return nil, err
+ }
+ // retry with the calculated cookie to bypass anti-crawler validation
+ res, err = base.NoRedirectClient.R().
+ SetHeaders(headers).
+ SetCookie(&http.Cookie{Name: "acw_sc__v2", Value: acwScV2}).
+ Get(downloadUrl)
+ if err != nil {
+ return nil, err
+ }
+ rPageData = res.String()
+ }
file.Url = res.Header().Get("location")
- // trigger verification
- rPageData := res.String()
if res.StatusCode() != 302 {
param, err = htmlJsonToMap(rPageData)
if err != nil {


@@ -146,13 +146,14 @@ func (d *Local) FileInfoToObj(ctx context.Context, f fs.FileInfo, reqPath string
thumb += "?type=thumb&sign=" + sign.Sign(stdpath.Join(reqPath, f.Name()))
}
}
- isFolder := f.IsDir() || isSymlinkDir(f, fullPath)
+ filePath := filepath.Join(fullPath, f.Name())
+ isFolder := f.IsDir() || isLinkedDir(f, filePath)
var size int64
if !isFolder {
size = f.Size()
}
var ctime time.Time
- t, err := times.Stat(stdpath.Join(fullPath, f.Name()))
+ t, err := times.Stat(filePath)
if err == nil {
if t.HasBirthTime() {
ctime = t.BirthTime()
@@ -161,7 +162,7 @@ func (d *Local) FileInfoToObj(ctx context.Context, f fs.FileInfo, reqPath string
file := model.ObjThumb{
Object: model.Object{
-			Path:     filepath.Join(fullPath, f.Name()),
+			Path:     filePath,
Name: f.Name(),
Modified: f.ModTime(),
Size: size,
@@ -197,7 +198,7 @@ func (d *Local) Get(ctx context.Context, path string) (model.Obj, error) {
}
return nil, err
}
-	isFolder := f.IsDir() || isSymlinkDir(f, path)
+	isFolder := f.IsDir() || isLinkedDir(f, path)
size := f.Size()
if isFolder {
size = 0

View File

@@ -7,6 +7,7 @@ import (
"io/fs"
"os"
"path/filepath"
"runtime"
"sort"
"strconv"
"strings"
@@ -18,14 +19,18 @@ import (
ffmpeg "github.com/u2takey/ffmpeg-go"
)
-func isSymlinkDir(f fs.FileInfo, path string) bool {
-	if f.Mode()&os.ModeSymlink == os.ModeSymlink {
-		dst, err := os.Readlink(filepath.Join(path, f.Name()))
+func isLinkedDir(f fs.FileInfo, path string) bool {
+	if f.Mode()&os.ModeSymlink == os.ModeSymlink || (runtime.GOOS == "windows" && f.Mode()&os.ModeIrregular != 0) {
+		dst, err := os.Readlink(path)
if err != nil {
return false
}
if !filepath.IsAbs(dst) {
-		dst = filepath.Join(path, dst)
+		dst = filepath.Join(filepath.Dir(path), dst)
}
dst, err = filepath.Abs(dst)
if err != nil {
return false
}
stat, err := os.Stat(dst)
if err != nil {

drivers/mediafire/driver.go (new file)

@@ -0,0 +1,433 @@
package mediafire
/*
Package mediafire
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-11
D@' 3z K!7 - The King Of Cracking
*/
import (
"context"
"fmt"
"math/rand"
"net/http"
"os"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/cron"
"github.com/alist-org/alist/v3/pkg/utils"
)
type Mediafire struct {
model.Storage
Addition
cron *cron.Cron
actionToken string
appBase string
apiBase string
hostBase string
maxRetries int
secChUa string
secChUaPlatform string
userAgent string
}
func (d *Mediafire) Config() driver.Config {
return config
}
func (d *Mediafire) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Mediafire) Init(ctx context.Context) error {
if d.SessionToken == "" {
return fmt.Errorf("Init :: [MediaFire] {critical} missing sessionToken")
}
if d.Cookie == "" {
return fmt.Errorf("Init :: [MediaFire] {critical} missing Cookie")
}
if _, err := d.getSessionToken(ctx); err != nil {
d.renewToken(ctx)
num := rand.Intn(4) + 6
d.cron = cron.NewCron(time.Minute * time.Duration(num))
d.cron.Do(func() {
d.renewToken(ctx)
})
}
return nil
}
func (d *Mediafire) Drop(ctx context.Context) error {
return nil
}
func (d *Mediafire) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
files, err := d.getFiles(ctx, dir.GetID())
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return d.fileToObj(src), nil
})
}
func (d *Mediafire) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
downloadUrl, err := d.getDirectDownloadLink(ctx, file.GetID())
if err != nil {
return nil, err
}
res, err := base.NoRedirectClient.R().SetDoNotParseResponse(true).SetContext(ctx).Get(downloadUrl)
if err != nil {
return nil, err
}
defer func() {
_ = res.RawBody().Close()
}()
if res.StatusCode() == 302 {
downloadUrl = res.Header().Get("location")
}
return &model.Link{
URL: downloadUrl,
Header: http.Header{
"Origin": []string{d.appBase},
"Referer": []string{d.appBase + "/"},
"sec-ch-ua": []string{d.secChUa},
"sec-ch-ua-platform": []string{d.secChUaPlatform},
"User-Agent": []string{d.userAgent},
//"User-Agent": []string{base.UserAgent},
},
}, nil
}
func (d *Mediafire) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
data := map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"parent_key": parentDir.GetID(),
"foldername": dirName,
}
var resp MediafireFolderCreateResponse
_, err := d.postForm("/folder/create.php", data, &resp)
if err != nil {
return nil, err
}
if resp.Response.Result != "Success" {
return nil, fmt.Errorf("MediaFire API error: %s", resp.Response.Result)
}
created, _ := time.Parse("2006-01-02T15:04:05Z", resp.Response.CreatedUTC)
return &model.ObjThumb{
Object: model.Object{
ID: resp.Response.FolderKey,
Name: resp.Response.Name,
Size: 0,
Modified: created,
Ctime: created,
IsFolder: true,
},
Thumbnail: model.Thumbnail{},
}, nil
}
func (d *Mediafire) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
var data map[string]string
var endpoint string
if srcObj.IsDir() {
endpoint = "/folder/move.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key_src": srcObj.GetID(),
"folder_key_dst": dstDir.GetID(),
}
} else {
endpoint = "/file/move.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"quick_key": srcObj.GetID(),
"folder_key": dstDir.GetID(),
}
}
var resp MediafireMoveResponse
_, err := d.postForm(endpoint, data, &resp)
if err != nil {
return nil, err
}
if resp.Response.Result != "Success" {
return nil, fmt.Errorf("MediaFire API error: %s", resp.Response.Result)
}
return srcObj, nil
}
func (d *Mediafire) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
var data map[string]string
var endpoint string
if srcObj.IsDir() {
endpoint = "/folder/update.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key": srcObj.GetID(),
"foldername": newName,
}
} else {
endpoint = "/file/update.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"quick_key": srcObj.GetID(),
"filename": newName,
}
}
var resp MediafireRenameResponse
_, err := d.postForm(endpoint, data, &resp)
if err != nil {
return nil, err
}
if resp.Response.Result != "Success" {
return nil, fmt.Errorf("MediaFire API error: %s", resp.Response.Result)
}
return &model.ObjThumb{
Object: model.Object{
ID: srcObj.GetID(),
Name: newName,
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
Ctime: srcObj.CreateTime(),
IsFolder: srcObj.IsDir(),
},
Thumbnail: model.Thumbnail{},
}, nil
}
func (d *Mediafire) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
var data map[string]string
var endpoint string
if srcObj.IsDir() {
endpoint = "/folder/copy.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key_src": srcObj.GetID(),
"folder_key_dst": dstDir.GetID(),
}
} else {
endpoint = "/file/copy.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"quick_key": srcObj.GetID(),
"folder_key": dstDir.GetID(),
}
}
var resp MediafireCopyResponse
_, err := d.postForm(endpoint, data, &resp)
if err != nil {
return nil, err
}
if resp.Response.Result != "Success" {
return nil, fmt.Errorf("MediaFire API error: %s", resp.Response.Result)
}
var newID string
if srcObj.IsDir() {
if len(resp.Response.NewFolderKeys) > 0 {
newID = resp.Response.NewFolderKeys[0]
}
} else {
if len(resp.Response.NewQuickKeys) > 0 {
newID = resp.Response.NewQuickKeys[0]
}
}
return &model.ObjThumb{
Object: model.Object{
ID: newID,
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
Ctime: srcObj.CreateTime(),
IsFolder: srcObj.IsDir(),
},
Thumbnail: model.Thumbnail{},
}, nil
}
func (d *Mediafire) Remove(ctx context.Context, obj model.Obj) error {
var data map[string]string
var endpoint string
if obj.IsDir() {
endpoint = "/folder/delete.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key": obj.GetID(),
}
} else {
endpoint = "/file/delete.php"
data = map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"quick_key": obj.GetID(),
}
}
var resp MediafireRemoveResponse
_, err := d.postForm(endpoint, data, &resp)
if err != nil {
return err
}
if resp.Response.Result != "Success" {
return fmt.Errorf("MediaFire API error: %s", resp.Response.Result)
}
return nil
}
func (d *Mediafire) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
_, err := d.PutResult(ctx, dstDir, file, up)
return err
}
func (d *Mediafire) PutResult(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return nil, err
}
defer tempFile.Close()
osFile, ok := tempFile.(*os.File)
if !ok {
return nil, fmt.Errorf("expected *os.File, got %T", tempFile)
}
fileHash, err := d.calculateSHA256(osFile)
if err != nil {
return nil, err
}
checkResp, err := d.uploadCheck(ctx, file.GetName(), file.GetSize(), fileHash, dstDir.GetID())
if err != nil {
return nil, err
}
if checkResp.Response.ResumableUpload.AllUnitsReady == "yes" {
up(100.0)
}
if checkResp.Response.HashExists == "yes" && checkResp.Response.InAccount == "yes" {
up(100.0)
existingFile, err := d.getExistingFileInfo(ctx, fileHash, file.GetName(), dstDir.GetID())
if err == nil {
return existingFile, nil
}
}
var pollKey string
if checkResp.Response.ResumableUpload.AllUnitsReady != "yes" {
var err error
pollKey, err = d.uploadUnits(ctx, osFile, checkResp, file.GetName(), fileHash, dstDir.GetID(), up)
if err != nil {
return nil, err
}
} else {
pollKey = checkResp.Response.ResumableUpload.UploadKey
}
//fmt.Printf("pollKey: %+v\n", pollKey)
pollResp, err := d.pollUpload(ctx, pollKey)
if err != nil {
return nil, err
}
quickKey := pollResp.Response.Doupload.QuickKey
return &model.ObjThumb{
Object: model.Object{
ID: quickKey,
Name: file.GetName(),
Size: file.GetSize(),
},
Thumbnail: model.Thumbnail{},
}, nil
}
func (d *Mediafire) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Mediafire) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Mediafire) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Mediafire) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *Mediafire) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Mediafire)(nil)

drivers/mediafire/meta.go (new file)

@@ -0,0 +1,54 @@
package mediafire
/*
Package mediafire
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-11
D@' 3z K!7 - The King Of Cracking
*/
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootPath
//driver.RootID
SessionToken string `json:"session_token" required:"true" type:"string" help:"Required for MediaFire API"`
Cookie string `json:"cookie" required:"true" type:"string" help:"Required for navigation"`
OrderBy string `json:"order_by" type:"select" options:"name,time,size" default:"name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
ChunkSize int64 `json:"chunk_size" type:"number" default:"100"`
}
var config = driver.Config{
Name: "MediaFire",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "/",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Mediafire{
appBase: "https://app.mediafire.com",
apiBase: "https://www.mediafire.com/api/1.5",
hostBase: "https://www.mediafire.com",
maxRetries: 3,
secChUa: "\"Not)A;Brand\";v=\"8\", \"Chromium\";v=\"139\", \"Google Chrome\";v=\"139\"",
secChUaPlatform: "Windows",
userAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36",
}
})
}

drivers/mediafire/types.go (new file)

@@ -0,0 +1,232 @@
package mediafire
/*
Package mediafire
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-11
D@' 3z K!7 - The King Of Cracking
*/
type MediafireRenewTokenResponse struct {
Response struct {
Action string `json:"action"`
SessionToken string `json:"session_token"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireResponse struct {
Response struct {
Action string `json:"action"`
FolderContent struct {
ChunkSize string `json:"chunk_size"`
ContentType string `json:"content_type"`
ChunkNumber string `json:"chunk_number"`
FolderKey string `json:"folderkey"`
Folders []MediafireFolder `json:"folders,omitempty"`
Files []MediafireFile `json:"files,omitempty"`
MoreChunks string `json:"more_chunks"`
} `json:"folder_content"`
Result string `json:"result"`
} `json:"response"`
}
type MediafireFolder struct {
FolderKey string `json:"folderkey"`
Name string `json:"name"`
Created string `json:"created"`
CreatedUTC string `json:"created_utc"`
}
type MediafireFile struct {
QuickKey string `json:"quickkey"`
Filename string `json:"filename"`
Size string `json:"size"`
Created string `json:"created"`
CreatedUTC string `json:"created_utc"`
MimeType string `json:"mimetype"`
}
type File struct {
ID string
Name string
Size int64
CreatedUTC string
IsFolder bool
}
type FolderContentResponse struct {
Folders []MediafireFolder
Files []MediafireFile
MoreChunks bool
}
type MediafireLinksResponse struct {
Response struct {
Action string `json:"action"`
Links []struct {
QuickKey string `json:"quickkey"`
View string `json:"view"`
NormalDownload string `json:"normal_download"`
OneTime struct {
Download string `json:"download"`
View string `json:"view"`
} `json:"one_time"`
} `json:"links"`
OneTimeKeyRequestCount string `json:"one_time_key_request_count"`
OneTimeKeyRequestMaxCount string `json:"one_time_key_request_max_count"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireDirectDownloadResponse struct {
Response struct {
Action string `json:"action"`
Links []struct {
QuickKey string `json:"quickkey"`
DirectDownload string `json:"direct_download"`
} `json:"links"`
DirectDownloadFreeBandwidth string `json:"direct_download_free_bandwidth"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireFolderCreateResponse struct {
Response struct {
Action string `json:"action"`
FolderKey string `json:"folder_key"`
UploadKey string `json:"upload_key"`
ParentFolderKey string `json:"parent_folderkey"`
Name string `json:"name"`
Description string `json:"description"`
Created string `json:"created"`
CreatedUTC string `json:"created_utc"`
Privacy string `json:"privacy"`
FileCount string `json:"file_count"`
FolderCount string `json:"folder_count"`
Revision string `json:"revision"`
DropboxEnabled string `json:"dropbox_enabled"`
Flag string `json:"flag"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireMoveResponse struct {
Response struct {
Action string `json:"action"`
Asynchronous string `json:"asynchronous,omitempty"`
NewNames []string `json:"new_names"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireRenameResponse struct {
Response struct {
Action string `json:"action"`
Asynchronous string `json:"asynchronous,omitempty"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireCopyResponse struct {
Response struct {
Action string `json:"action"`
Asynchronous string `json:"asynchronous,omitempty"`
NewQuickKeys []string `json:"new_quickkeys,omitempty"`
NewFolderKeys []string `json:"new_folderkeys,omitempty"`
SkippedCount string `json:"skipped_count,omitempty"`
OtherCount string `json:"other_count,omitempty"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireRemoveResponse struct {
Response struct {
Action string `json:"action"`
Asynchronous string `json:"asynchronous,omitempty"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
NewDeviceRevision int `json:"new_device_revision"`
} `json:"response"`
}
type MediafireCheckResponse struct {
Response struct {
Action string `json:"action"`
HashExists string `json:"hash_exists"`
InAccount string `json:"in_account"`
InFolder string `json:"in_folder"`
FileExists string `json:"file_exists"`
ResumableUpload struct {
AllUnitsReady string `json:"all_units_ready"`
NumberOfUnits string `json:"number_of_units"`
UnitSize string `json:"unit_size"`
Bitmap struct {
Count string `json:"count"`
Words []string `json:"words"`
} `json:"bitmap"`
UploadKey string `json:"upload_key"`
} `json:"resumable_upload"`
AvailableSpace string `json:"available_space"`
UsedStorageSize string `json:"used_storage_size"`
StorageLimit string `json:"storage_limit"`
StorageLimitExceeded string `json:"storage_limit_exceeded"`
UploadURL struct {
Simple string `json:"simple"`
SimpleFallback string `json:"simple_fallback"`
Resumable string `json:"resumable"`
ResumableFallback string `json:"resumable_fallback"`
} `json:"upload_url"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireActionTokenResponse struct {
Response struct {
Action string `json:"action"`
ActionToken string `json:"action_token"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafirePollResponse struct {
Response struct {
Action string `json:"action"`
Doupload struct {
Result string `json:"result"`
Status string `json:"status"`
Description string `json:"description"`
QuickKey string `json:"quickkey"`
Hash string `json:"hash"`
Filename string `json:"filename"`
Size string `json:"size"`
Created string `json:"created"`
CreatedUTC string `json:"created_utc"`
Revision string `json:"revision"`
} `json:"doupload"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}
type MediafireFileSearchResponse struct {
Response struct {
Action string `json:"action"`
FileInfo []File `json:"file_info"`
Result string `json:"result"`
CurrentAPIVersion string `json:"current_api_version"`
} `json:"response"`
}

drivers/mediafire/util.go (new file)

@@ -0,0 +1,626 @@
package mediafire
/*
Package mediafire
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-11
D@' 3z K!7 - The King Of Cracking
*/
import (
"bytes"
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"net/http"
"os"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
)
func (d *Mediafire) getSessionToken(ctx context.Context) (string, error) {
tokenURL := d.hostBase + "/application/get_session_token.php"
req, err := http.NewRequestWithContext(ctx, http.MethodPost, tokenURL, nil)
if err != nil {
return "", err
}
req.Header.Set("Accept", "*/*")
req.Header.Set("Accept-Encoding", "gzip, deflate, br, zstd")
req.Header.Set("Accept-Language", "en-US,en;q=0.9")
req.Header.Set("Content-Length", "0")
req.Header.Set("Cookie", d.Cookie)
req.Header.Set("DNT", "1")
req.Header.Set("Origin", d.hostBase)
req.Header.Set("Priority", "u=1, i")
req.Header.Set("Referer", (d.hostBase + "/"))
req.Header.Set("Sec-Ch-Ua", d.secChUa)
req.Header.Set("Sec-Ch-Ua-Mobile", "?0")
req.Header.Set("Sec-Ch-Ua-Platform", d.secChUaPlatform)
req.Header.Set("Sec-Fetch-Dest", "empty")
req.Header.Set("Sec-Fetch-Mode", "cors")
req.Header.Set("Sec-Fetch-Site", "same-site")
req.Header.Set("User-Agent", d.userAgent)
//req.Header.Set("Connection", "keep-alive")
resp, err := base.HttpClient.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
//fmt.Printf("getSessionToken :: Raw response: %s\n", string(body))
//fmt.Printf("getSessionToken :: Parsed response: %+v\n", resp)
var tokenResp struct {
Response struct {
SessionToken string `json:"session_token"`
} `json:"response"`
}
if resp.StatusCode == 200 {
if err := json.Unmarshal(body, &tokenResp); err != nil {
return "", err
}
if tokenResp.Response.SessionToken == "" {
return "", fmt.Errorf("empty session token received")
}
cookieMap := make(map[string]string)
for _, cookie := range resp.Cookies() {
cookieMap[cookie.Name] = cookie.Value
}
if len(cookieMap) > 0 {
var cookies []string
for name, value := range cookieMap {
cookies = append(cookies, fmt.Sprintf("%s=%s", name, value))
}
d.Cookie = strings.Join(cookies, "; ")
op.MustSaveDriverStorage(d)
//fmt.Printf("getSessionToken :: Captured cookies: %s\n", d.Cookie)
}
} else {
return "", fmt.Errorf("getSessionToken :: failed to get session token, status code: %d", resp.StatusCode)
}
d.SessionToken = tokenResp.Response.SessionToken
//fmt.Printf("Init :: Obtain Session Token %v", d.SessionToken)
op.MustSaveDriverStorage(d)
return d.SessionToken, nil
}
func (d *Mediafire) renewToken(_ context.Context) error {
query := map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
}
var resp MediafireRenewTokenResponse
_, err := d.postForm("/user/renew_session_token.php", query, &resp)
if err != nil {
return fmt.Errorf("failed to renew token: %w", err)
}
//fmt.Printf("getInfo :: Raw response: %s\n", string(body))
//fmt.Printf("getInfo :: Parsed response: %+v\n", resp)
if resp.Response.Result != "Success" {
return fmt.Errorf("MediaFire token renewal failed: %s", resp.Response.Result)
}
d.SessionToken = resp.Response.SessionToken
//fmt.Printf("Init :: Renew Session Token: %s", resp.Response.Result)
op.MustSaveDriverStorage(d)
return nil
}
func (d *Mediafire) getFiles(ctx context.Context, folderKey string) ([]File, error) {
files := make([]File, 0)
hasMore := true
chunkNumber := 1
for hasMore {
resp, err := d.getFolderContent(ctx, folderKey, chunkNumber)
if err != nil {
return nil, err
}
for _, folder := range resp.Folders {
files = append(files, File{
ID: folder.FolderKey,
Name: folder.Name,
Size: 0,
CreatedUTC: folder.CreatedUTC,
IsFolder: true,
})
}
for _, file := range resp.Files {
size, _ := strconv.ParseInt(file.Size, 10, 64)
files = append(files, File{
ID: file.QuickKey,
Name: file.Filename,
Size: size,
CreatedUTC: file.CreatedUTC,
IsFolder: false,
})
}
hasMore = resp.MoreChunks
chunkNumber++
}
return files, nil
}
func (d *Mediafire) getFolderContent(ctx context.Context, folderKey string, chunkNumber int) (*FolderContentResponse, error) {
foldersResp, err := d.getFolderContentByType(ctx, folderKey, "folders", chunkNumber)
if err != nil {
return nil, err
}
filesResp, err := d.getFolderContentByType(ctx, folderKey, "files", chunkNumber)
if err != nil {
return nil, err
}
return &FolderContentResponse{
Folders: foldersResp.Response.FolderContent.Folders,
Files: filesResp.Response.FolderContent.Files,
MoreChunks: foldersResp.Response.FolderContent.MoreChunks == "yes" || filesResp.Response.FolderContent.MoreChunks == "yes",
}, nil
}
func (d *Mediafire) getFolderContentByType(_ context.Context, folderKey, contentType string, chunkNumber int) (*MediafireResponse, error) {
data := map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"folder_key": folderKey,
"content_type": contentType,
"chunk": strconv.Itoa(chunkNumber),
"chunk_size": strconv.FormatInt(d.ChunkSize, 10),
"details": "yes",
"order_direction": d.OrderDirection,
"order_by": d.OrderBy,
"filter": "",
}
var resp MediafireResponse
_, err := d.postForm("/folder/get_content.php", data, &resp)
if err != nil {
return nil, err
}
if resp.Response.Result != "Success" {
return nil, fmt.Errorf("MediaFire API error: %s", resp.Response.Result)
}
return &resp, nil
}
func (d *Mediafire) fileToObj(f File) *model.ObjThumb {
created, _ := time.Parse("2006-01-02T15:04:05Z", f.CreatedUTC)
var thumbnailURL string
if !f.IsFolder && f.ID != "" {
thumbnailURL = d.hostBase + "/convkey/acaa/" + f.ID + "3g.jpg"
}
return &model.ObjThumb{
Object: model.Object{
ID: f.ID,
//Path: "",
Name: f.Name,
Size: f.Size,
Modified: created,
Ctime: created,
IsFolder: f.IsFolder,
},
Thumbnail: model.Thumbnail{
Thumbnail: thumbnailURL,
},
}
}
func (d *Mediafire) getForm(endpoint string, query map[string]string, resp interface{}) ([]byte, error) {
req := base.RestyClient.R()
req.SetQueryParams(query)
req.SetHeaders(map[string]string{
"Cookie": d.Cookie,
//"User-Agent": base.UserAgent,
"User-Agent": d.userAgent,
"Origin": d.appBase,
"Referer": d.appBase + "/",
})
// parse the body into resp if a result struct was supplied
if resp != nil {
req.SetResult(resp)
}
// Targets MediaFire API
res, err := req.Get(d.apiBase + endpoint)
if err != nil {
return nil, err
}
return res.Body(), nil
}
func (d *Mediafire) postForm(endpoint string, data map[string]string, resp interface{}) ([]byte, error) {
req := base.RestyClient.R()
req.SetFormData(data)
req.SetHeaders(map[string]string{
"Cookie": d.Cookie,
"Content-Type": "application/x-www-form-urlencoded",
//"User-Agent": base.UserAgent,
"User-Agent": d.userAgent,
"Origin": d.appBase,
"Referer": d.appBase + "/",
})
// parse the body into resp if a result struct was supplied
if resp != nil {
req.SetResult(resp)
}
// Targets MediaFire API
res, err := req.Post(d.apiBase + endpoint)
if err != nil {
return nil, err
}
return res.Body(), nil
}
func (d *Mediafire) getDirectDownloadLink(_ context.Context, fileID string) (string, error) {
data := map[string]string{
"session_token": d.SessionToken,
"quick_key": fileID,
"link_type": "direct_download",
"response_format": "json",
}
var resp MediafireDirectDownloadResponse
_, err := d.getForm("/file/get_links.php", data, &resp)
if err != nil {
return "", err
}
if resp.Response.Result != "Success" {
return "", fmt.Errorf("MediaFire API error: %s", resp.Response.Result)
}
if len(resp.Response.Links) == 0 {
return "", fmt.Errorf("no download links found")
}
return resp.Response.Links[0].DirectDownload, nil
}
func (d *Mediafire) calculateSHA256(file *os.File) (string, error) {
hasher := sha256.New()
if _, err := file.Seek(0, 0); err != nil {
return "", err
}
if _, err := io.Copy(hasher, file); err != nil {
return "", err
}
return hex.EncodeToString(hasher.Sum(nil)), nil
}
func (d *Mediafire) uploadCheck(ctx context.Context, filename string, filesize int64, filehash, folderKey string) (*MediafireCheckResponse, error) {
actionToken, err := d.getActionToken(ctx)
if err != nil {
return nil, fmt.Errorf("failed to get action token: %w", err)
}
query := map[string]string{
"session_token": actionToken, /* d.SessionToken */
"filename": filename,
"size": strconv.FormatInt(filesize, 10),
"hash": filehash,
"folder_key": folderKey,
"resumable": "yes",
"response_format": "json",
}
var resp MediafireCheckResponse
_, err = d.postForm("/upload/check.php", query, &resp)
if err != nil {
return nil, err
}
//fmt.Printf("uploadCheck :: Raw response: %s\n", string(body))
//fmt.Printf("uploadCheck :: Parsed response: %+v\n", resp)
//fmt.Printf("uploadCheck :: ResumableUpload section: %+v\n", resp.Response.ResumableUpload)
//fmt.Printf("uploadCheck :: Upload key specifically: '%s'\n", resp.Response.ResumableUpload.UploadKey)
if resp.Response.Result != "Success" {
return nil, fmt.Errorf("MediaFire upload check failed: %s", resp.Response.Result)
}
return &resp, nil
}
func (d *Mediafire) resumableUpload(ctx context.Context, folderKey, uploadKey string, unitData []byte, unitID int, fileHash, filename string, totalFileSize int64) (string, error) {
actionToken, err := d.getActionToken(ctx)
if err != nil {
return "", err
}
url := d.apiBase + "/upload/resumable.php"
req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(unitData))
if err != nil {
return "", err
}
q := req.URL.Query()
q.Add("folder_key", folderKey)
q.Add("response_format", "json")
q.Add("session_token", actionToken)
q.Add("key", uploadKey)
req.URL.RawQuery = q.Encode()
req.Header.Set("x-filehash", fileHash)
req.Header.Set("x-filesize", strconv.FormatInt(totalFileSize, 10))
req.Header.Set("x-unit-id", strconv.Itoa(unitID))
req.Header.Set("x-unit-size", strconv.FormatInt(int64(len(unitData)), 10))
req.Header.Set("x-unit-hash", d.sha256Hex(bytes.NewReader(unitData)))
req.Header.Set("x-filename", filename)
req.Header.Set("Content-Type", "application/octet-stream")
req.ContentLength = int64(len(unitData))
/* fmt.Printf("Debug resumable upload request:\n")
fmt.Printf(" URL: %s\n", req.URL.String())
fmt.Printf(" Headers: %+v\n", req.Header)
fmt.Printf(" Unit ID: %d\n", unitID)
fmt.Printf(" Unit Size: %d\n", len(unitData))
fmt.Printf(" Upload Key: %s\n", uploadKey)
fmt.Printf(" Action Token: %s\n", actionToken) */
res, err := base.HttpClient.Do(req)
if err != nil {
return "", err
}
defer res.Body.Close()
body, err := io.ReadAll(res.Body)
if err != nil {
return "", fmt.Errorf("failed to read response body: %v", err)
}
//fmt.Printf("MediaFire resumable upload response (status %d): %s\n", res.StatusCode, string(body))
var uploadResp struct {
Response struct {
Doupload struct {
Key string `json:"key"`
} `json:"doupload"`
Result string `json:"result"`
} `json:"response"`
}
if err := json.Unmarshal(body, &uploadResp); err != nil {
return "", fmt.Errorf("failed to parse response: %v", err)
}
if res.StatusCode != 200 {
return "", fmt.Errorf("resumable upload failed with status %d", res.StatusCode)
}
return uploadResp.Response.Doupload.Key, nil
}
func (d *Mediafire) uploadUnits(ctx context.Context, file *os.File, checkResp *MediafireCheckResponse, filename, fileHash, folderKey string, up driver.UpdateProgress) (string, error) {
unitSize, _ := strconv.ParseInt(checkResp.Response.ResumableUpload.UnitSize, 10, 64)
numUnits, _ := strconv.Atoi(checkResp.Response.ResumableUpload.NumberOfUnits)
uploadKey := checkResp.Response.ResumableUpload.UploadKey
stringWords := checkResp.Response.ResumableUpload.Bitmap.Words
intWords := make([]int, len(stringWords))
for i, word := range stringWords {
intWords[i], _ = strconv.Atoi(word)
}
var finalUploadKey string
for unitID := 0; unitID < numUnits; unitID++ {
if utils.IsCanceled(ctx) {
return "", ctx.Err()
}
if d.isUnitUploaded(intWords, unitID) {
up(float64(unitID+1) * 100 / float64(numUnits))
continue
}
uploadKey, err := d.uploadSingleUnit(ctx, file, unitID, unitSize, fileHash, filename, uploadKey, folderKey)
if err != nil {
return "", err
}
finalUploadKey = uploadKey
up(float64(unitID+1) * 100 / float64(numUnits))
}
return finalUploadKey, nil
}
func (d *Mediafire) uploadSingleUnit(ctx context.Context, file *os.File, unitID int, unitSize int64, fileHash, filename, uploadKey, folderKey string) (string, error) {
start := int64(unitID) * unitSize
size := unitSize
stat, err := file.Stat()
if err != nil {
return "", err
}
fileSize := stat.Size()
if start+size > fileSize {
size = fileSize - start
}
unitData := make([]byte, size)
if _, err := file.ReadAt(unitData, start); err != nil {
return "", err
}
return d.resumableUpload(ctx, folderKey, uploadKey, unitData, unitID, fileHash, filename, fileSize)
}
func (d *Mediafire) getActionToken(_ context.Context) (string, error) {
if d.actionToken != "" {
return d.actionToken, nil
}
data := map[string]string{
"type": "upload",
"lifespan": "1440",
"response_format": "json",
"session_token": d.SessionToken,
}
var resp MediafireActionTokenResponse
_, err := d.postForm("/user/get_action_token.php", data, &resp)
if err != nil {
return "", err
}
if resp.Response.Result != "Success" {
return "", fmt.Errorf("MediaFire action token failed: %s", resp.Response.Result)
}
return resp.Response.ActionToken, nil
}
func (d *Mediafire) pollUpload(ctx context.Context, key string) (*MediafirePollResponse, error) {
actionToken, err := d.getActionToken(ctx)
if err != nil {
return nil, fmt.Errorf("failed to get action token: %w", err)
}
//fmt.Printf("Debug Key: %+v\n", key)
query := map[string]string{
"key": key,
"response_format": "json",
"session_token": actionToken, /* d.SessionToken */
}
var resp MediafirePollResponse
_, err = d.postForm("/upload/poll_upload.php", query, &resp)
if err != nil {
return nil, err
}
//fmt.Printf("pollUpload :: Raw response: %s\n", string(body))
//fmt.Printf("pollUpload :: Parsed response: %+v\n", resp)
//fmt.Printf("pollUpload :: Debug Result: %+v\n", resp.Response.Result)
if resp.Response.Result != "Success" {
return nil, fmt.Errorf("MediaFire poll upload failed: %s", resp.Response.Result)
}
return &resp, nil
}
func (d *Mediafire) sha256Hex(r io.Reader) string {
h := sha256.New()
io.Copy(h, r)
return hex.EncodeToString(h.Sum(nil))
}
func (d *Mediafire) isUnitUploaded(words []int, unitID int) bool {
wordIndex := unitID / 16
bitIndex := unitID % 16
if wordIndex >= len(words) {
return false
}
return (words[wordIndex]>>bitIndex)&1 == 1
}
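The resumable-upload bitmap above packs one flag per unit into 16-bit words, so bit `i` of word `w` covers unit `w*16+i`. A standalone sketch of the same decoding (the function is copied from the driver; the sample bitmap value is illustrative):

```go
package main

import "fmt"

// isUnitUploaded mirrors the driver's bitmap check: 16 unit flags per word,
// bit i of word w covers unit w*16+i. Out-of-range units count as not uploaded.
func isUnitUploaded(words []int, unitID int) bool {
	wordIndex := unitID / 16
	bitIndex := unitID % 16
	if wordIndex >= len(words) {
		return false
	}
	return (words[wordIndex]>>bitIndex)&1 == 1
}

func main() {
	// 0x8001 has bits 0 and 15 set: units 0 and 15 of the first word are done.
	words := []int{0x8001}
	for _, id := range []int{0, 1, 15, 16} {
		fmt.Printf("unit %d uploaded: %v\n", id, isUnitUploaded(words, id))
	}
}
```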
func (d *Mediafire) getExistingFileInfo(ctx context.Context, fileHash, filename, folderKey string) (*model.ObjThumb, error) {
if fileInfo, err := d.getFileByHash(ctx, fileHash); err == nil && fileInfo != nil {
return fileInfo, nil
}
files, err := d.getFiles(ctx, folderKey)
if err != nil {
return nil, err
}
for _, file := range files {
if file.Name == filename && !file.IsFolder {
return d.fileToObj(file), nil
}
}
return nil, fmt.Errorf("existing file not found")
}
func (d *Mediafire) getFileByHash(_ context.Context, hash string) (*model.ObjThumb, error) {
query := map[string]string{
"session_token": d.SessionToken,
"response_format": "json",
"hash": hash,
}
var resp MediafireFileSearchResponse
_, err := d.postForm("/file/get_info.php", query, &resp)
if err != nil {
return nil, err
}
if resp.Response.Result != "Success" {
return nil, fmt.Errorf("MediaFire file search failed: %s", resp.Response.Result)
}
if len(resp.Response.FileInfo) == 0 {
return nil, fmt.Errorf("file not found by hash")
}
file := resp.Response.FileInfo[0]
return d.fileToObj(file), nil
}


@@ -9,8 +9,9 @@ type Addition struct {
AccessToken string `json:"access_token" required:"true"`
ProjectID string `json:"project_id"`
driver.RootID
OrderBy string `json:"order_by" type:"select" options:"updated_at,title,size" default:"title"`
OrderDesc bool `json:"order_desc"`
DeviceFingerprint string `json:"device_fingerprint" required:"true"`
}
var config = driver.Config{


@@ -17,6 +17,9 @@ import (
func (d *MediaTrack) request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
req := base.RestyClient.R()
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
if d.DeviceFingerprint != "" {
req.SetHeader("X-Device-Fingerprint", d.DeviceFingerprint)
}
if callback != nil {
callback(req)
}

drivers/pcloud/driver.go Normal file

@@ -0,0 +1,189 @@
package pcloud
import (
"context"
"fmt"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type PCloud struct {
model.Storage
Addition
AccessToken string // Actual access token obtained from refresh token
}
func (d *PCloud) Config() driver.Config {
return config
}
func (d *PCloud) GetAddition() driver.Additional {
return &d.Addition
}
func (d *PCloud) Init(ctx context.Context) error {
// Map hostname selection to actual API endpoints
if d.Hostname == "us" {
d.Hostname = "api.pcloud.com"
} else if d.Hostname == "eu" {
d.Hostname = "eapi.pcloud.com"
}
// Set default root folder ID if not provided
if d.RootFolderID == "" {
d.RootFolderID = "d0"
}
// Use the access token directly (like rclone)
d.AccessToken = d.RefreshToken // RefreshToken field actually contains the access_token
return nil
}
func (d *PCloud) Drop(ctx context.Context) error {
return nil
}
func (d *PCloud) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
folderID := d.RootFolderID
if dir.GetID() != "" {
folderID = dir.GetID()
}
files, err := d.getFiles(folderID)
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src FileObject) (model.Obj, error) {
return fileToObj(src), nil
})
}
func (d *PCloud) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
downloadURL, err := d.getDownloadLink(file.GetID())
if err != nil {
return nil, err
}
return &model.Link{
URL: downloadURL,
}, nil
}
// MakeDir implements driver.Mkdir
func (d *PCloud) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
parentID := d.RootFolderID
if parentDir.GetID() != "" {
parentID = parentDir.GetID()
}
return d.createFolder(parentID, dirName)
}
// Move implements driver.Move
func (d *PCloud) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
// pCloud uses renamefile/renamefolder for both rename and move
endpoint := "/renamefile"
paramName := "fileid"
if srcObj.IsDir() {
endpoint = "/renamefolder"
paramName = "folderid"
}
var resp ItemResult
_, err := d.requestWithRetry(endpoint, "POST", func(req *resty.Request) {
req.SetFormData(map[string]string{
paramName: extractID(srcObj.GetID()),
"tofolderid": extractID(dstDir.GetID()),
"toname": srcObj.GetName(),
})
}, &resp)
if err != nil {
return err
}
if resp.Result != 0 {
return fmt.Errorf("pCloud error: result code %d", resp.Result)
}
return nil
}
// Rename implements driver.Rename
func (d *PCloud) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
endpoint := "/renamefile"
paramName := "fileid"
if srcObj.IsDir() {
endpoint = "/renamefolder"
paramName = "folderid"
}
var resp ItemResult
_, err := d.requestWithRetry(endpoint, "POST", func(req *resty.Request) {
req.SetFormData(map[string]string{
paramName: extractID(srcObj.GetID()),
"toname": newName,
})
}, &resp)
if err != nil {
return err
}
if resp.Result != 0 {
return fmt.Errorf("pCloud error: result code %d", resp.Result)
}
return nil
}
// Copy implements driver.Copy
func (d *PCloud) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
endpoint := "/copyfile"
paramName := "fileid"
if srcObj.IsDir() {
endpoint = "/copyfolder"
paramName = "folderid"
}
var resp ItemResult
_, err := d.requestWithRetry(endpoint, "POST", func(req *resty.Request) {
req.SetFormData(map[string]string{
paramName: extractID(srcObj.GetID()),
"tofolderid": extractID(dstDir.GetID()),
"toname": srcObj.GetName(),
})
}, &resp)
if err != nil {
return err
}
if resp.Result != 0 {
return fmt.Errorf("pCloud error: result code %d", resp.Result)
}
return nil
}
// Remove implements driver.Remove
func (d *PCloud) Remove(ctx context.Context, obj model.Obj) error {
return d.delete(obj.GetID(), obj.IsDir())
}
// Put implements driver.Put
func (d *PCloud) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
parentID := d.RootFolderID
if dstDir.GetID() != "" {
parentID = dstDir.GetID()
}
return d.uploadFile(ctx, stream, parentID, stream.GetName(), stream.GetSize())
}

drivers/pcloud/meta.go Normal file

@@ -0,0 +1,30 @@
package pcloud
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Using json tag "access_token" for UI display, but internally it's a refresh token
RefreshToken string `json:"access_token" required:"true" help:"OAuth token from pCloud authorization"`
Hostname string `json:"hostname" type:"select" options:"us,eu" default:"us" help:"Select pCloud server region"`
RootFolderID string `json:"root_folder_id" help:"Get folder ID from URL like https://my.pcloud.com/#/filemanager?folder=12345678901 (leave empty for root folder)"`
ClientID string `json:"client_id" help:"Custom OAuth client ID (optional)"`
ClientSecret string `json:"client_secret" help:"Custom OAuth client secret (optional)"`
}
// Implement IRootId interface
func (a Addition) GetRootId() string {
return a.RootFolderID
}
var config = driver.Config{
Name: "pCloud",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &PCloud{}
})
}

drivers/pcloud/types.go Normal file

@@ -0,0 +1,91 @@
package pcloud
import (
"strconv"
"time"
"github.com/alist-org/alist/v3/internal/model"
)
// ErrorResult represents a pCloud API error response
type ErrorResult struct {
Result int `json:"result"`
Error string `json:"error"`
}
// TokenResponse represents OAuth token response
type TokenResponse struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
}
// ItemResult represents a common pCloud API response
type ItemResult struct {
Result int `json:"result"`
Metadata *FolderMeta `json:"metadata,omitempty"`
}
// FolderMeta contains folder metadata including contents
type FolderMeta struct {
Contents []FileObject `json:"contents,omitempty"`
}
// DownloadLinkResult represents download link response
type DownloadLinkResult struct {
Result int `json:"result"`
Hosts []string `json:"hosts"`
Path string `json:"path"`
}
// FileObject represents a file or folder object in pCloud
type FileObject struct {
Name string `json:"name"`
Created string `json:"created"` // pCloud returns RFC1123 format string
Modified string `json:"modified"` // pCloud returns RFC1123 format string
IsFolder bool `json:"isfolder"`
FolderID uint64 `json:"folderid,omitempty"`
FileID uint64 `json:"fileid,omitempty"`
Size uint64 `json:"size"`
ParentID uint64 `json:"parentfolderid"`
Icon string `json:"icon,omitempty"`
Hash uint64 `json:"hash,omitempty"`
Category int `json:"category,omitempty"`
ID string `json:"id,omitempty"`
}
// Convert FileObject to model.Obj
func fileToObj(f FileObject) model.Obj {
// Parse RFC1123 format time from pCloud
modTime, _ := time.Parse(time.RFC1123, f.Modified)
obj := model.Object{
Name: f.Name,
Size: int64(f.Size),
Modified: modTime,
IsFolder: f.IsFolder,
}
if f.IsFolder {
obj.ID = "d" + strconv.FormatUint(f.FolderID, 10)
} else {
obj.ID = "f" + strconv.FormatUint(f.FileID, 10)
}
return &obj
}
// Extract numeric ID from string ID (remove 'd' or 'f' prefix)
func extractID(id string) string {
if len(id) > 1 && (id[0] == 'd' || id[0] == 'f') {
return id[1:]
}
return id
}
// Get folder ID from path, return "0" for root
func getFolderID(path string) string {
if path == "/" || path == "" {
return "0"
}
return extractID(path)
}
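The pCloud types above encode object kind into the ID itself: `fileToObj` prepends `d` for folders and `f` for files, and `extractID` strips the prefix again before the numeric ID is sent to the API. A minimal round-trip of that scheme (`extractID` copied from the driver; the sample IDs are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
)

// extractID strips the 'd' (folder) or 'f' (file) prefix that fileToObj adds.
// Unprefixed or single-character IDs pass through unchanged.
func extractID(id string) string {
	if len(id) > 1 && (id[0] == 'd' || id[0] == 'f') {
		return id[1:]
	}
	return id
}

func main() {
	folderID := "d" + strconv.FormatUint(12345, 10) // as fileToObj builds it
	fmt.Println(extractID(folderID))                // numeric part sent to the API
	fmt.Println(extractID("98765"))                 // unprefixed IDs pass through
}
```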

drivers/pcloud/util.go Normal file

@@ -0,0 +1,297 @@
package pcloud
import (
"context"
"fmt"
"io"
"net/http"
"strconv"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
const (
defaultClientID = "DnONSzyJXpm"
defaultClientSecret = "VKEnd3ze4jsKFGg8TJiznwFG8"
)
// Get API base URL
func (d *PCloud) getAPIURL() string {
return "https://" + d.Hostname
}
// Get OAuth client credentials
func (d *PCloud) getClientCredentials() (string, string) {
clientID := d.ClientID
clientSecret := d.ClientSecret
if clientID == "" {
clientID = defaultClientID
}
if clientSecret == "" {
clientSecret = defaultClientSecret
}
return clientID, clientSecret
}
// Refresh OAuth access token
func (d *PCloud) refreshToken() error {
clientID, clientSecret := d.getClientCredentials()
var resp TokenResponse
_, err := base.RestyClient.R().
SetFormData(map[string]string{
"client_id": clientID,
"client_secret": clientSecret,
"grant_type": "refresh_token",
"refresh_token": d.RefreshToken,
}).
SetResult(&resp).
Post(d.getAPIURL() + "/oauth2_token")
if err != nil {
return err
}
d.AccessToken = resp.AccessToken
return nil
}
// shouldRetry determines if an error should be retried based on pCloud-specific logic
func (d *PCloud) shouldRetry(statusCode int, apiError *ErrorResult) bool {
// HTTP-level retry conditions
if statusCode == 429 || statusCode >= 500 {
return true
}
// pCloud API-specific retry conditions (like rclone)
if apiError != nil && apiError.Result != 0 {
// 4xxx: rate limiting
if apiError.Result/1000 == 4 {
return true
}
// 5xxx: internal errors
if apiError.Result/1000 == 5 {
return true
}
}
return false
}
// requestWithRetry makes authenticated API request with retry logic
func (d *PCloud) requestWithRetry(endpoint string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
maxRetries := 3
baseDelay := 500 * time.Millisecond
for attempt := 0; attempt <= maxRetries; attempt++ {
body, err := d.request(endpoint, method, callback, resp)
if err == nil {
return body, nil
}
// If this is the last attempt, return the error
if attempt == maxRetries {
return nil, err
}
// Check if we should retry based on error type
if !d.shouldRetryError(err) {
return nil, err
}
// Exponential backoff
delay := baseDelay * time.Duration(1<<attempt)
time.Sleep(delay)
}
return nil, fmt.Errorf("max retries exceeded")
}
// shouldRetryError checks if an error should trigger a retry
func (d *PCloud) shouldRetryError(err error) bool {
// For now, we'll retry on any error
// In production, you'd want more specific error handling
return true
}
// Make authenticated API request
func (d *PCloud) request(endpoint string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
req := base.RestyClient.R()
// Add access token as query parameter (pCloud doesn't use Bearer auth)
req.SetQueryParam("access_token", d.AccessToken)
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
var res *resty.Response
var err error
switch method {
case http.MethodGet:
res, err = req.Get(d.getAPIURL() + endpoint)
case http.MethodPost:
res, err = req.Post(d.getAPIURL() + endpoint)
default:
return nil, fmt.Errorf("unsupported method: %s", method)
}
if err != nil {
return nil, err
}
// Check for API errors with pCloud-specific logic
if res.StatusCode() != 200 {
var errResp ErrorResult
if err := utils.Json.Unmarshal(res.Body(), &errResp); err == nil {
// Check if this error should trigger a retry
if d.shouldRetry(res.StatusCode(), &errResp) {
return nil, fmt.Errorf("pCloud API error (retryable): %s (result: %d)", errResp.Error, errResp.Result)
}
return nil, fmt.Errorf("pCloud API error: %s (result: %d)", errResp.Error, errResp.Result)
}
return nil, fmt.Errorf("HTTP error: %d", res.StatusCode())
}
return res.Body(), nil
}
// List files in a folder
func (d *PCloud) getFiles(folderID string) ([]FileObject, error) {
var resp ItemResult
_, err := d.requestWithRetry("/listfolder", http.MethodGet, func(req *resty.Request) {
req.SetQueryParam("folderid", extractID(folderID))
}, &resp)
if err != nil {
return nil, err
}
if resp.Result != 0 {
return nil, fmt.Errorf("pCloud error: result code %d", resp.Result)
}
if resp.Metadata == nil {
return []FileObject{}, nil
}
return resp.Metadata.Contents, nil
}
// Get download link for a file
func (d *PCloud) getDownloadLink(fileID string) (string, error) {
var resp DownloadLinkResult
_, err := d.requestWithRetry("/getfilelink", http.MethodGet, func(req *resty.Request) {
req.SetQueryParam("fileid", extractID(fileID))
}, &resp)
if err != nil {
return "", err
}
if resp.Result != 0 {
return "", fmt.Errorf("pCloud error: result code %d", resp.Result)
}
if len(resp.Hosts) == 0 {
return "", fmt.Errorf("no download hosts available")
}
return "https://" + resp.Hosts[0] + resp.Path, nil
}
// Create a folder
func (d *PCloud) createFolder(parentID, name string) error {
var resp ItemResult
_, err := d.requestWithRetry("/createfolder", http.MethodPost, func(req *resty.Request) {
req.SetFormData(map[string]string{
"folderid": extractID(parentID),
"name": name,
})
}, &resp)
if err != nil {
return err
}
if resp.Result != 0 {
return fmt.Errorf("pCloud error: result code %d", resp.Result)
}
return nil
}
// Delete a file or folder
func (d *PCloud) delete(objID string, isFolder bool) error {
endpoint := "/deletefile"
paramName := "fileid"
if isFolder {
endpoint = "/deletefolderrecursive"
paramName = "folderid"
}
var resp ItemResult
_, err := d.requestWithRetry(endpoint, http.MethodPost, func(req *resty.Request) {
req.SetFormData(map[string]string{
paramName: extractID(objID),
})
}, &resp)
if err != nil {
return err
}
if resp.Result != 0 {
return fmt.Errorf("pCloud error: result code %d", resp.Result)
}
return nil
}
// Upload a file using direct /uploadfile endpoint like rclone
func (d *PCloud) uploadFile(ctx context.Context, file io.Reader, parentID, name string, size int64) error {
// pCloud requires Content-Length, so we need to know the size
if size <= 0 {
return fmt.Errorf("file size must be provided for pCloud upload")
}
// Upload directly to /uploadfile endpoint like rclone
var resp ItemResult
req := base.RestyClient.R().
SetQueryParam("access_token", d.AccessToken).
SetHeader("Content-Length", strconv.FormatInt(size, 10)).
SetFileReader("content", name, file).
SetFormData(map[string]string{
"filename": name,
"folderid": extractID(parentID),
"nopartial": "1",
})
// Use PUT method like rclone
res, err := req.Put(d.getAPIURL() + "/uploadfile")
if err != nil {
return err
}
// Parse response
if err := utils.Json.Unmarshal(res.Body(), &resp); err != nil {
return err
}
if resp.Result != 0 {
return fmt.Errorf("pCloud upload error: result code %d", resp.Result)
}
return nil
}


@@ -0,0 +1,418 @@
package protondrive
/*
Package protondrive
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-18
Thanks to @henrybear327 for modded go-proton-api & Proton-API-Bridge
The power of open-source, the force of teamwork and the magic of reverse engineering!
D@' 3z K!7 - The King Of Cracking
Long live the Motherland ))
*/
import (
"context"
"encoding/base64"
"fmt"
"net/http"
"sync"
"time"
"github.com/ProtonMail/gopenpgp/v2/crypto"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
proton_api_bridge "github.com/henrybear327/Proton-API-Bridge"
"github.com/henrybear327/Proton-API-Bridge/common"
"github.com/henrybear327/go-proton-api"
)
type ProtonDrive struct {
model.Storage
Addition
protonDrive *proton_api_bridge.ProtonDrive
credentials *common.ProtonDriveCredential
apiBase string
appVersion string
protonJson string
userAgent string
sdkVersion string
webDriveAV string
tempServer *http.Server
tempServerPort int
downloadTokens map[string]*downloadInfo
tokenMutex sync.RWMutex
c *proton.Client
//m *proton.Manager
credentialCacheFile string
//userKR *crypto.KeyRing
addrKRs map[string]*crypto.KeyRing
addrData map[string]proton.Address
MainShare *proton.Share
RootLink *proton.Link
DefaultAddrKR *crypto.KeyRing
MainShareKR *crypto.KeyRing
}
func (d *ProtonDrive) Config() driver.Config {
return config
}
func (d *ProtonDrive) GetAddition() driver.Additional {
return &d.Addition
}
func (d *ProtonDrive) Init(ctx context.Context) error {
defer func() {
if r := recover(); r != nil {
fmt.Printf("ProtonDrive initialization panic: %v\n", r)
}
}()
if d.Username == "" {
return fmt.Errorf("username is required")
}
if d.Password == "" {
return fmt.Errorf("password is required")
}
//fmt.Printf("ProtonDrive Init: Username=%s, TwoFACode=%s", d.Username, d.TwoFACode)
if ctx == nil {
return fmt.Errorf("context cannot be nil")
}
cachedCredentials, err := d.loadCachedCredentials()
useReusableLogin := false
var reusableCredential *common.ReusableCredentialData
if err == nil && cachedCredentials != nil &&
cachedCredentials.UID != "" && cachedCredentials.AccessToken != "" &&
cachedCredentials.RefreshToken != "" && cachedCredentials.SaltedKeyPass != "" {
useReusableLogin = true
reusableCredential = cachedCredentials
} else {
useReusableLogin = false
reusableCredential = &common.ReusableCredentialData{}
}
config := &common.Config{
AppVersion: d.appVersion,
UserAgent: d.userAgent,
FirstLoginCredential: &common.FirstLoginCredentialData{
Username: d.Username,
Password: d.Password,
TwoFA: d.TwoFACode,
},
EnableCaching: true,
ConcurrentBlockUploadCount: 5,
ConcurrentFileCryptoCount: 2,
UseReusableLogin: useReusableLogin,
ReplaceExistingDraft: true,
ReusableCredential: reusableCredential,
CredentialCacheFile: d.credentialCacheFile,
}
if config.FirstLoginCredential == nil {
return fmt.Errorf("failed to create login credentials, FirstLoginCredential cannot be nil")
}
//fmt.Printf("Calling NewProtonDrive...")
protonDrive, credentials, err := proton_api_bridge.NewProtonDrive(
ctx,
config,
func(auth proton.Auth) {},
func() {},
)
if err != nil {
return fmt.Errorf("failed to initialize ProtonDrive: %w", err)
}
if credentials == nil && !useReusableLogin {
return fmt.Errorf("failed to get credentials from NewProtonDrive")
}
d.protonDrive = protonDrive
var finalCredentials *common.ProtonDriveCredential
if useReusableLogin {
// For reusable login, create credentials from cached data
finalCredentials = &common.ProtonDriveCredential{
UID: reusableCredential.UID,
AccessToken: reusableCredential.AccessToken,
RefreshToken: reusableCredential.RefreshToken,
SaltedKeyPass: reusableCredential.SaltedKeyPass,
}
d.credentials = finalCredentials
} else {
d.credentials = credentials
}
clientOptions := []proton.Option{
proton.WithAppVersion(d.appVersion),
proton.WithUserAgent(d.userAgent),
}
manager := proton.New(clientOptions...)
d.c = manager.NewClient(d.credentials.UID, d.credentials.AccessToken, d.credentials.RefreshToken)
saltedKeyPassBytes, err := base64.StdEncoding.DecodeString(d.credentials.SaltedKeyPass)
if err != nil {
return fmt.Errorf("failed to decode salted key pass: %w", err)
}
_, addrKRs, addrs, _, err := getAccountKRs(ctx, d.c, nil, saltedKeyPassBytes)
if err != nil {
return fmt.Errorf("failed to get account keyrings: %w", err)
}
d.MainShare = protonDrive.MainShare
d.RootLink = protonDrive.RootLink
d.MainShareKR = protonDrive.MainShareKR
d.DefaultAddrKR = protonDrive.DefaultAddrKR
d.addrKRs = addrKRs
d.addrData = addrs
return nil
}
func (d *ProtonDrive) Drop(ctx context.Context) error {
if d.tempServer != nil {
d.tempServer.Shutdown(ctx)
}
return nil
}
func (d *ProtonDrive) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var linkID string
if dir.GetPath() == "/" {
linkID = d.protonDrive.RootLink.LinkID
} else {
link, err := d.searchByPath(ctx, dir.GetPath(), true)
if err != nil {
return nil, err
}
linkID = link.LinkID
}
entries, err := d.protonDrive.ListDirectory(ctx, linkID)
if err != nil {
return nil, fmt.Errorf("failed to list directory: %w", err)
}
//fmt.Printf("Found %d entries for path %s\n", len(entries), dir.GetPath())
//fmt.Printf("Found %d entries\n", len(entries))
if len(entries) == 0 {
emptySlice := []model.Obj{}
//fmt.Printf("Returning empty slice (entries): %+v\n", emptySlice)
return emptySlice, nil
}
var objects []model.Obj
for _, entry := range entries {
obj := &model.Object{
Name: entry.Name,
Size: entry.Link.Size,
Modified: time.Unix(entry.Link.ModifyTime, 0),
IsFolder: entry.IsFolder,
}
objects = append(objects, obj)
}
return objects, nil
}
func (d *ProtonDrive) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
link, err := d.searchByPath(ctx, file.GetPath(), false)
if err != nil {
return nil, err
}
if err := d.ensureTempServer(); err != nil {
return nil, fmt.Errorf("failed to start temp server: %w", err)
}
token := d.generateDownloadToken(link.LinkID, file.GetName())
/* return &model.Link{
URL: fmt.Sprintf("protondrive://download/%s", link.LinkID),
}, nil */
return &model.Link{
URL: fmt.Sprintf("http://localhost:%d/temp/%s", d.tempServerPort, token),
}, nil
}
func (d *ProtonDrive) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
var parentLinkID string
if parentDir.GetPath() == "/" {
parentLinkID = d.protonDrive.RootLink.LinkID
} else {
link, err := d.searchByPath(ctx, parentDir.GetPath(), true)
if err != nil {
return nil, err
}
parentLinkID = link.LinkID
}
_, err := d.protonDrive.CreateNewFolderByID(ctx, parentLinkID, dirName)
if err != nil {
return nil, fmt.Errorf("failed to create directory: %w", err)
}
newDir := &model.Object{
Name: dirName,
IsFolder: true,
Modified: time.Now(),
}
return newDir, nil
}
func (d *ProtonDrive) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return d.DirectMove(ctx, srcObj, dstDir)
}
func (d *ProtonDrive) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if d.protonDrive == nil {
return nil, fmt.Errorf("protonDrive bridge is nil")
}
return d.DirectRename(ctx, srcObj, newName)
}
func (d *ProtonDrive) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if srcObj.IsDir() {
return nil, fmt.Errorf("directory copy not supported")
}
srcLink, err := d.searchByPath(ctx, srcObj.GetPath(), false)
if err != nil {
return nil, err
}
reader, linkSize, fileSystemAttrs, err := d.protonDrive.DownloadFile(ctx, srcLink, 0)
if err != nil {
return nil, fmt.Errorf("failed to download source file: %w", err)
}
defer reader.Close()
actualSize := linkSize
if fileSystemAttrs != nil && fileSystemAttrs.Size > 0 {
actualSize = fileSystemAttrs.Size
}
tempFile, err := utils.CreateTempFile(reader, actualSize)
if err != nil {
return nil, fmt.Errorf("failed to create temp file: %w", err)
}
defer tempFile.Close()
updatedObj := &model.Object{
Name: srcObj.GetName(),
// Use the accurate and real size
Size: actualSize,
Modified: srcObj.ModTime(),
IsFolder: false,
}
return d.Put(ctx, dstDir, &fileStreamer{
ReadCloser: tempFile,
obj: updatedObj,
}, nil)
}
func (d *ProtonDrive) Remove(ctx context.Context, obj model.Obj) error {
link, err := d.searchByPath(ctx, obj.GetPath(), obj.IsDir())
if err != nil {
return err
}
if obj.IsDir() {
return d.protonDrive.MoveFolderToTrashByID(ctx, link.LinkID, false)
} else {
return d.protonDrive.MoveFileToTrashByID(ctx, link.LinkID)
}
}
func (d *ProtonDrive) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
var parentLinkID string
if dstDir.GetPath() == "/" {
parentLinkID = d.protonDrive.RootLink.LinkID
} else {
link, err := d.searchByPath(ctx, dstDir.GetPath(), true)
if err != nil {
return nil, err
}
parentLinkID = link.LinkID
}
tempFile, err := utils.CreateTempFile(file, file.GetSize())
if err != nil {
return nil, fmt.Errorf("failed to create temp file: %w", err)
}
defer tempFile.Close()
err = d.uploadFile(ctx, parentLinkID, file.GetName(), tempFile, file.GetSize(), up)
if err != nil {
return nil, err
}
uploadedObj := &model.Object{
Name: file.GetName(),
Size: file.GetSize(),
Modified: file.ModTime(),
IsFolder: false,
}
return uploadedObj, nil
}
func (d *ProtonDrive) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *ProtonDrive) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *ProtonDrive) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *ProtonDrive) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
var _ driver.Driver = (*ProtonDrive)(nil)


@@ -0,0 +1,69 @@
package protondrive
/*
Package protondrive
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-18
Thanks to @henrybear327 for modded go-proton-api & Proton-API-Bridge
The power of open-source, the force of teamwork and the magic of reverse engineering!
D@' 3z K!7 - The King Of Cracking
Long live the Motherland ))
*/
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootPath
//driver.RootID
Username string `json:"username" required:"true" type:"string"`
Password string `json:"password" required:"true" type:"string"`
TwoFACode string `json:"two_fa_code,omitempty" type:"string"`
}
type Config struct {
Name string `json:"name"`
LocalSort bool `json:"local_sort"`
OnlyLocal bool `json:"only_local"`
OnlyProxy bool `json:"only_proxy"`
NoCache bool `json:"no_cache"`
NoUpload bool `json:"no_upload"`
NeedMs bool `json:"need_ms"`
DefaultRoot string `json:"default_root"`
}
var config = driver.Config{
Name: "ProtonDrive",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "/",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &ProtonDrive{
apiBase: "https://drive.proton.me/api",
appVersion: "windows-drive@1.11.3+rclone+proton",
credentialCacheFile: ".prtcrd",
protonJson: "application/vnd.protonmail.v1+json",
sdkVersion: "js@0.3.0",
userAgent: "ProtonDrive/v1.70.0 (Windows NT 10.0.22000; Win64; x64)",
webDriveAV: "web-drive@5.2.0+0f69f7a8",
}
})
}


@@ -0,0 +1,124 @@
package protondrive
/*
Package protondrive
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-18
Thanks to @henrybear327 for modded go-proton-api & Proton-API-Bridge
The power of open-source, the force of teamwork and the magic of reverse engineering!
D@' 3z K!7 - The King Of Cracking
Long live the Motherland ))
*/
import (
"errors"
"io"
"os"
"time"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/henrybear327/go-proton-api"
)
type ProtonFile struct {
*proton.Link
Name string
IsFolder bool
}
func (p *ProtonFile) GetName() string {
return p.Name
}
func (p *ProtonFile) GetSize() int64 {
return p.Link.Size
}
func (p *ProtonFile) GetPath() string {
return p.Name
}
func (p *ProtonFile) IsDir() bool {
return p.IsFolder
}
func (p *ProtonFile) ModTime() time.Time {
return time.Unix(p.Link.ModifyTime, 0)
}
func (p *ProtonFile) CreateTime() time.Time {
return time.Unix(p.Link.CreateTime, 0)
}
type downloadInfo struct {
LinkID string
FileName string
}
type fileStreamer struct {
io.ReadCloser
obj model.Obj
}
func (fs *fileStreamer) GetMimetype() string { return "" }
func (fs *fileStreamer) NeedStore() bool { return false }
func (fs *fileStreamer) IsForceStreamUpload() bool { return false }
func (fs *fileStreamer) GetExist() model.Obj { return nil }
func (fs *fileStreamer) SetExist(model.Obj) {}
func (fs *fileStreamer) RangeRead(http_range.Range) (io.Reader, error) {
return nil, errors.New("not supported")
}
func (fs *fileStreamer) CacheFullInTempFile() (model.File, error) {
return nil, errors.New("not supported")
}
func (fs *fileStreamer) SetTmpFile(r *os.File) {}
func (fs *fileStreamer) GetFile() model.File { return nil }
func (fs *fileStreamer) GetName() string { return fs.obj.GetName() }
func (fs *fileStreamer) GetSize() int64 { return fs.obj.GetSize() }
func (fs *fileStreamer) GetPath() string { return fs.obj.GetPath() }
func (fs *fileStreamer) IsDir() bool { return fs.obj.IsDir() }
func (fs *fileStreamer) ModTime() time.Time { return fs.obj.ModTime() }
func (fs *fileStreamer) CreateTime() time.Time { return fs.obj.ModTime() }
func (fs *fileStreamer) GetHash() utils.HashInfo { return fs.obj.GetHash() }
func (fs *fileStreamer) GetID() string { return fs.obj.GetID() }
type httpRange struct {
start, end int64
}
type MoveRequest struct {
ParentLinkID string `json:"ParentLinkID"`
NodePassphrase string `json:"NodePassphrase"`
NodePassphraseSignature *string `json:"NodePassphraseSignature"`
Name string `json:"Name"`
NameSignatureEmail string `json:"NameSignatureEmail"`
Hash string `json:"Hash"`
OriginalHash string `json:"OriginalHash"`
ContentHash *string `json:"ContentHash"` // Maybe null
}
type progressReader struct {
reader io.Reader
total int64
current int64
callback driver.UpdateProgress
}
type RenameRequest struct {
Name string `json:"Name"` // PGP encrypted name
NameSignatureEmail string `json:"NameSignatureEmail"` // User's signature email
Hash string `json:"Hash"` // New name hash
OriginalHash string `json:"OriginalHash"` // Current name hash
}
type RenameResponse struct {
Code int `json:"Code"`
}

View File

@@ -0,0 +1,918 @@
package protondrive
/*
Package protondrive
Author: Da3zKi7<da3zki7@duck.com>
Date: 2025-09-18
Thanks to @henrybear327 for the modded go-proton-api & Proton-API-Bridge
The power of open-source, the force of teamwork and the magic of reverse engineering!
D@' 3z K!7 - The King Of Cracking
Long live the Motherland))
*/
import (
"bufio"
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"mime"
"net"
"net/http"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/ProtonMail/gopenpgp/v2/crypto"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/henrybear327/Proton-API-Bridge/common"
"github.com/henrybear327/go-proton-api"
)
func (d *ProtonDrive) loadCachedCredentials() (*common.ReusableCredentialData, error) {
if d.credentialCacheFile == "" {
return nil, nil
}
if _, err := os.Stat(d.credentialCacheFile); os.IsNotExist(err) {
return nil, nil
}
data, err := os.ReadFile(d.credentialCacheFile)
if err != nil {
return nil, fmt.Errorf("failed to read credential cache file: %w", err)
}
var credentials common.ReusableCredentialData
if err := json.Unmarshal(data, &credentials); err != nil {
return nil, fmt.Errorf("failed to parse cached credentials: %w", err)
}
if credentials.UID == "" || credentials.AccessToken == "" ||
credentials.RefreshToken == "" || credentials.SaltedKeyPass == "" {
return nil, fmt.Errorf("cached credentials are incomplete")
}
return &credentials, nil
}
func (d *ProtonDrive) searchByPath(ctx context.Context, fullPath string, isFolder bool) (*proton.Link, error) {
if fullPath == "/" {
return d.protonDrive.RootLink, nil
}
cleanPath := strings.Trim(fullPath, "/")
pathParts := strings.Split(cleanPath, "/")
currentLink := d.protonDrive.RootLink
for i, part := range pathParts {
isLastPart := i == len(pathParts)-1
searchForFolder := !isLastPart || isFolder
entries, err := d.protonDrive.ListDirectory(ctx, currentLink.LinkID)
if err != nil {
return nil, fmt.Errorf("failed to list directory: %w", err)
}
found := false
for _, entry := range entries {
// entry.Name is already decrypted!
if entry.Name == part && entry.IsFolder == searchForFolder {
currentLink = entry.Link
found = true
break
}
}
if !found {
return nil, fmt.Errorf("path not found: %s (looking for part: %s)", fullPath, part)
}
}
return currentLink, nil
}
func (pr *progressReader) Read(p []byte) (int, error) {
n, err := pr.reader.Read(p)
pr.current += int64(n)
if pr.callback != nil && pr.total > 0 {
// Guard against division by zero / NaN when the total size is unknown
percentage := float64(pr.current) / float64(pr.total) * 100
pr.callback(percentage)
}
return n, err
}
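The progressReader above wraps an io.Reader and invokes the update callback with a percentage after every Read. A minimal standalone sketch of the same pattern, including a guard for a zero total (demoProgressReader and drain are illustrative names, not part of the driver):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// demoProgressReader mirrors the driver's progressReader: it wraps an
// io.Reader and reports cumulative progress after every Read.
type demoProgressReader struct {
	reader   io.Reader
	total    int64
	current  int64
	callback func(percent float64)
}

func (pr *demoProgressReader) Read(p []byte) (int, error) {
	n, err := pr.reader.Read(p)
	pr.current += int64(n)
	if pr.callback != nil && pr.total > 0 {
		pr.callback(float64(pr.current) / float64(pr.total) * 100)
	}
	return n, err
}

// drain reads all of data through the wrapper and returns the byte count
// and the last progress value the callback observed.
func drain(data string) (int64, float64) {
	var last float64
	pr := &demoProgressReader{
		reader:   strings.NewReader(data),
		total:    int64(len(data)),
		callback: func(p float64) { last = p },
	}
	n, _ := io.Copy(io.Discard, pr)
	return n, last
}

func main() {
	n, last := drain("hello, proton")
	fmt.Printf("read %d bytes, final progress %.0f%%\n", n, last)
}
```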
func (d *ProtonDrive) uploadFile(ctx context.Context, parentLinkID, fileName string, file *os.File, size int64, up driver.UpdateProgress) error {
fileInfo, err := file.Stat()
if err != nil {
return fmt.Errorf("failed to get file info: %w", err)
}
_, err = d.protonDrive.GetLink(ctx, parentLinkID)
if err != nil {
return fmt.Errorf("failed to get parent link: %w", err)
}
reader := &progressReader{
reader: bufio.NewReader(file),
total: size,
current: 0,
callback: up,
}
_, _, err = d.protonDrive.UploadFileByReader(ctx, parentLinkID, fileName, fileInfo.ModTime(), reader, 0)
if err != nil {
return fmt.Errorf("failed to upload file: %w", err)
}
return nil
}
func (d *ProtonDrive) ensureTempServer() error {
if d.tempServer != nil {
// Already running
return nil
}
listener, err := net.Listen("tcp", ":0")
if err != nil {
return err
}
d.tempServerPort = listener.Addr().(*net.TCPAddr).Port
mux := http.NewServeMux()
mux.HandleFunc("/temp/", d.handleTempDownload)
d.tempServer = &http.Server{
Handler: mux,
}
go func() {
d.tempServer.Serve(listener)
}()
return nil
}
func (d *ProtonDrive) handleTempDownload(w http.ResponseWriter, r *http.Request) {
token := strings.TrimPrefix(r.URL.Path, "/temp/")
d.tokenMutex.RLock()
info, exists := d.downloadTokens[token]
d.tokenMutex.RUnlock()
if !exists {
http.Error(w, "Invalid or expired token", http.StatusNotFound)
return
}
link, err := d.protonDrive.GetLink(r.Context(), info.LinkID)
if err != nil {
http.Error(w, "Failed to get file link", http.StatusInternalServerError)
return
}
// Get file size for range calculations; close the probe reader so it is not leaked
sizeReader, _, attrs, err := d.protonDrive.DownloadFile(r.Context(), link, 0)
if err != nil {
http.Error(w, "Failed to get file info", http.StatusInternalServerError)
return
}
sizeReader.Close()
fileSize := attrs.Size
rangeHeader := r.Header.Get("Range")
if rangeHeader != "" {
// Parse range header like "bytes=0-1023" or "bytes=1024-"
ranges, err := parseRange(rangeHeader, fileSize)
if err != nil {
http.Error(w, "Invalid range", http.StatusRequestedRangeNotSatisfiable)
return
}
if len(ranges) == 1 {
// Single range request: serve partial content directly
start, end := ranges[0].start, ranges[0].end
contentLength := end - start + 1
// Start download from offset
reader, _, _, err := d.protonDrive.DownloadFile(r.Context(), link, start)
if err != nil {
http.Error(w, "Failed to start download", http.StatusInternalServerError)
return
}
defer reader.Close()
w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", start, end, fileSize))
w.Header().Set("Content-Length", fmt.Sprintf("%d", contentLength))
w.Header().Set("Content-Type", mime.TypeByExtension(filepath.Ext(link.Name)))
// Partial content: set the original file name (cosmetic for range responses)
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", info.FileName))
w.Header().Set("Accept-Ranges", "bytes")
w.WriteHeader(http.StatusPartialContent)
io.CopyN(w, reader, contentLength)
return
}
}
// Full file download (non-range request)
reader, _, _, err := d.protonDrive.DownloadFile(r.Context(), link, 0)
if err != nil {
http.Error(w, "Failed to start download", http.StatusInternalServerError)
return
}
defer reader.Close()
// Set headers for full content
w.Header().Set("Content-Length", fmt.Sprintf("%d", fileSize))
w.Header().Set("Content-Type", mime.TypeByExtension(filepath.Ext(link.Name)))
// Setting the file name is required here: the ProtonDrive link name is an opaque string
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", info.FileName))
w.Header().Set("Accept-Ranges", "bytes")
// Stream the full file
io.Copy(w, reader)
}
func (d *ProtonDrive) generateDownloadToken(linkID, fileName string) string {
suffix := linkID
if len(suffix) > 8 {
suffix = suffix[:8]
}
token := fmt.Sprintf("%d_%s", time.Now().UnixNano(), suffix)
d.tokenMutex.Lock()
if d.downloadTokens == nil {
d.downloadTokens = make(map[string]*downloadInfo)
}
d.downloadTokens[token] = &downloadInfo{
LinkID: linkID,
FileName: fileName,
}
d.tokenMutex.Unlock()
go func() {
// Token expires in 1 hour
time.Sleep(1 * time.Hour)
d.tokenMutex.Lock()
delete(d.downloadTokens, token)
d.tokenMutex.Unlock()
}()
return token
}
func parseRange(rangeHeader string, size int64) ([]httpRange, error) {
if !strings.HasPrefix(rangeHeader, "bytes=") {
return nil, fmt.Errorf("invalid range header")
}
rangeSpec := strings.TrimPrefix(rangeHeader, "bytes=")
ranges := strings.Split(rangeSpec, ",")
var result []httpRange
for _, r := range ranges {
r = strings.TrimSpace(r)
if strings.Contains(r, "-") {
parts := strings.Split(r, "-")
if len(parts) != 2 {
return nil, fmt.Errorf("invalid range format")
}
var start, end int64
var err error
if parts[0] == "" {
// Suffix range (e.g., "-500")
if parts[1] == "" {
return nil, fmt.Errorf("invalid range format")
}
end = size - 1
start, err = strconv.ParseInt(parts[1], 10, 64)
if err != nil {
return nil, err
}
start = size - start
if start < 0 {
start = 0
}
} else if parts[1] == "" {
// Prefix range (e.g., "500-")
start, err = strconv.ParseInt(parts[0], 10, 64)
if err != nil {
return nil, err
}
end = size - 1
} else {
// Full range (e.g., "0-1023")
start, err = strconv.ParseInt(parts[0], 10, 64)
if err != nil {
return nil, err
}
end, err = strconv.ParseInt(parts[1], 10, 64)
if err != nil {
return nil, err
}
}
if start >= size || end >= size || start > end {
return nil, fmt.Errorf("range out of bounds")
}
result = append(result, httpRange{start: start, end: end})
} else {
// Reject specs without "-" instead of silently skipping them
return nil, fmt.Errorf("invalid range format")
}
}
return result, nil
}
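parseRange above recognizes the three shapes of an RFC 7233 byte-range spec: a full range ("0-1023"), an open-ended prefix ("1024-"), and a suffix ("-500"). A compact single-range sketch of the same decision tree (parseOne and byteRange are illustrative names, not the driver's):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type byteRange struct{ start, end int64 }

// parseOne resolves one range spec against a resource of the given size,
// mirroring the three branches of the driver's parseRange.
func parseOne(spec string, size int64) (byteRange, error) {
	spec = strings.TrimPrefix(spec, "bytes=")
	parts := strings.SplitN(spec, "-", 2)
	if len(parts) != 2 {
		return byteRange{}, fmt.Errorf("invalid range %q", spec)
	}
	var r byteRange
	switch {
	case parts[0] == "": // suffix: last N bytes
		n, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return byteRange{}, err
		}
		r = byteRange{start: size - n, end: size - 1}
		if r.start < 0 {
			r.start = 0
		}
	case parts[1] == "": // open-ended: from start to EOF
		n, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return byteRange{}, err
		}
		r = byteRange{start: n, end: size - 1}
	default: // explicit start-end
		s, err1 := strconv.ParseInt(parts[0], 10, 64)
		e, err2 := strconv.ParseInt(parts[1], 10, 64)
		if err1 != nil || err2 != nil {
			return byteRange{}, fmt.Errorf("invalid range %q", spec)
		}
		r = byteRange{start: s, end: e}
	}
	if r.start > r.end || r.end >= size {
		return byteRange{}, fmt.Errorf("range %q out of bounds", spec)
	}
	return r, nil
}

func main() {
	for _, spec := range []string{"bytes=0-1023", "bytes=1024-", "bytes=-500"} {
		r, err := parseOne(spec, 2048)
		fmt.Println(spec, r, err)
	}
}
```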
func (d *ProtonDrive) encryptFileName(ctx context.Context, name string, parentLinkID string) (string, error) {
parentLink, err := d.getLink(ctx, parentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get parent link: %w", err)
}
// Get parent node keyring
parentNodeKR, err := d.getLinkKR(ctx, parentLink)
if err != nil {
return "", fmt.Errorf("failed to get parent keyring: %w", err)
}
// Temporary file (request)
tempReq := proton.CreateFileReq{
SignatureAddress: d.MainShare.Creator,
}
// Encrypt the filename
err = tempReq.SetName(name, d.DefaultAddrKR, parentNodeKR)
if err != nil {
return "", fmt.Errorf("failed to encrypt filename: %w", err)
}
return tempReq.Name, nil
}
func (d *ProtonDrive) generateFileNameHash(ctx context.Context, name string, parentLinkID string) (string, error) {
parentLink, err := d.getLink(ctx, parentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get parent link: %w", err)
}
// Get parent node keyring
parentNodeKR, err := d.getLinkKR(ctx, parentLink)
if err != nil {
return "", fmt.Errorf("failed to get parent keyring: %w", err)
}
signatureVerificationKR, err := d.getSignatureVerificationKeyring([]string{parentLink.SignatureEmail}, parentNodeKR)
if err != nil {
return "", fmt.Errorf("failed to get signature verification keyring: %w", err)
}
parentHashKey, err := parentLink.GetHashKey(parentNodeKR, signatureVerificationKR)
if err != nil {
return "", fmt.Errorf("failed to get parent hash key: %w", err)
}
nameHash, err := proton.GetNameHash(name, parentHashKey)
if err != nil {
return "", fmt.Errorf("failed to generate name hash: %w", err)
}
return nameHash, nil
}
func (d *ProtonDrive) getOriginalNameHash(link *proton.Link) (string, error) {
if link == nil {
return "", fmt.Errorf("link cannot be nil")
}
if link.Hash == "" {
return "", fmt.Errorf("link hash is empty")
}
return link.Hash, nil
}
func (d *ProtonDrive) getLink(ctx context.Context, linkID string) (*proton.Link, error) {
if linkID == "" {
return nil, fmt.Errorf("linkID cannot be empty")
}
link, err := d.c.GetLink(ctx, d.MainShare.ShareID, linkID)
if err != nil {
return nil, err
}
return &link, nil
}
func (d *ProtonDrive) getLinkKR(ctx context.Context, link *proton.Link) (*crypto.KeyRing, error) {
if link == nil {
return nil, fmt.Errorf("link cannot be nil")
}
// Root Link or Root Dir
if link.ParentLinkID == "" {
signatureVerificationKR, err := d.getSignatureVerificationKeyring([]string{link.SignatureEmail})
if err != nil {
return nil, err
}
return link.GetKeyRing(d.MainShareKR, signatureVerificationKR)
}
// Get parent keyring recursively
parentLink, err := d.getLink(ctx, link.ParentLinkID)
if err != nil {
return nil, err
}
parentNodeKR, err := d.getLinkKR(ctx, parentLink)
if err != nil {
return nil, err
}
signatureVerificationKR, err := d.getSignatureVerificationKeyring([]string{link.SignatureEmail})
if err != nil {
return nil, err
}
return link.GetKeyRing(parentNodeKR, signatureVerificationKR)
}
var (
ErrKeyPassOrSaltedKeyPassMustBeNotNil = errors.New("either keyPass or saltedKeyPass must not be nil")
ErrFailedToUnlockUserKeys = errors.New("failed to unlock user keys")
)
func getAccountKRs(ctx context.Context, c *proton.Client, keyPass, saltedKeyPass []byte) (*crypto.KeyRing, map[string]*crypto.KeyRing, map[string]proton.Address, []byte, error) {
user, err := c.GetUser(ctx)
if err != nil {
return nil, nil, nil, nil, err
}
// fmt.Printf("user %#v", user)
addrsArr, err := c.GetAddresses(ctx)
if err != nil {
return nil, nil, nil, nil, err
}
// fmt.Printf("addr %#v", addr)
if saltedKeyPass == nil {
if keyPass == nil {
return nil, nil, nil, nil, ErrKeyPassOrSaltedKeyPassMustBeNotNil
}
// Due to limitations, salts are stored using cacheCredentialToFile
salts, err := c.GetSalts(ctx)
if err != nil {
return nil, nil, nil, nil, err
}
// fmt.Printf("salts %#v", salts)
saltedKeyPass, err = salts.SaltForKey(keyPass, user.Keys.Primary().ID)
if err != nil {
return nil, nil, nil, nil, err
}
// fmt.Printf("saltedKeyPass ok")
}
userKR, addrKRs, err := proton.Unlock(user, addrsArr, saltedKeyPass, nil)
if err != nil {
return nil, nil, nil, nil, err
} else if userKR.CountDecryptionEntities() == 0 {
return nil, nil, nil, nil, ErrFailedToUnlockUserKeys
}
addrs := make(map[string]proton.Address)
for _, addr := range addrsArr {
addrs[addr.Email] = addr
}
return userKR, addrKRs, addrs, saltedKeyPass, nil
}
func (d *ProtonDrive) getSignatureVerificationKeyring(emailAddresses []string, verificationAddrKRs ...*crypto.KeyRing) (*crypto.KeyRing, error) {
ret, err := crypto.NewKeyRing(nil)
if err != nil {
return nil, err
}
for _, emailAddress := range emailAddresses {
if addr, ok := d.addrData[emailAddress]; ok {
if addrKR, exists := d.addrKRs[addr.ID]; exists {
err = d.addKeysFromKR(ret, addrKR)
if err != nil {
return nil, err
}
}
}
}
for _, kr := range verificationAddrKRs {
err = d.addKeysFromKR(ret, kr)
if err != nil {
return nil, err
}
}
if ret.CountEntities() == 0 {
return nil, fmt.Errorf("no keyring for signature verification")
}
return ret, nil
}
func (d *ProtonDrive) addKeysFromKR(kr *crypto.KeyRing, newKRs ...*crypto.KeyRing) error {
for i := range newKRs {
for _, key := range newKRs[i].GetKeys() {
err := kr.AddKey(key)
if err != nil {
return err
}
}
}
return nil
}
func (d *ProtonDrive) DirectRename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
//fmt.Printf("DEBUG DirectRename: path=%s, newName=%s", srcObj.GetPath(), newName)
if d.MainShare == nil || d.DefaultAddrKR == nil {
return nil, fmt.Errorf("missing required fields: MainShare=%v, DefaultAddrKR=%v",
d.MainShare != nil, d.DefaultAddrKR != nil)
}
if d.protonDrive == nil {
return nil, fmt.Errorf("protonDrive bridge is nil")
}
srcLink, err := d.searchByPath(ctx, srcObj.GetPath(), srcObj.IsDir())
if err != nil {
return nil, fmt.Errorf("failed to find source: %w", err)
}
parentLinkID := srcLink.ParentLinkID
if parentLinkID == "" {
return nil, fmt.Errorf("cannot rename root folder")
}
encryptedName, err := d.encryptFileName(ctx, newName, parentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to encrypt filename: %w", err)
}
newHash, err := d.generateFileNameHash(ctx, newName, parentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to generate new hash: %w", err)
}
originalHash, err := d.getOriginalNameHash(srcLink)
if err != nil {
return nil, fmt.Errorf("failed to get original hash: %w", err)
}
renameReq := RenameRequest{
Name: encryptedName,
NameSignatureEmail: d.MainShare.Creator,
Hash: newHash,
OriginalHash: originalHash,
}
err = d.executeRenameAPI(ctx, srcLink.LinkID, renameReq)
if err != nil {
return nil, fmt.Errorf("rename API call failed: %w", err)
}
return &model.Object{
Name: newName,
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
IsFolder: srcObj.IsDir(),
}, nil
}
func (d *ProtonDrive) executeRenameAPI(ctx context.Context, linkID string, req RenameRequest) error {
renameURL := fmt.Sprintf(d.apiBase+"/drive/v2/volumes/%s/links/%s/rename",
d.MainShare.VolumeID, linkID)
reqBody, err := json.Marshal(req)
if err != nil {
return fmt.Errorf("failed to marshal rename request: %w", err)
}
httpReq, err := http.NewRequestWithContext(ctx, "PUT", renameURL, bytes.NewReader(reqBody))
if err != nil {
return fmt.Errorf("failed to create HTTP request: %w", err)
}
httpReq.Header.Set("Content-Type", "application/json")
httpReq.Header.Set("Accept", d.protonJson)
httpReq.Header.Set("X-Pm-Appversion", d.webDriveAV)
httpReq.Header.Set("X-Pm-Drive-Sdk-Version", d.sdkVersion)
httpReq.Header.Set("X-Pm-Uid", d.credentials.UID)
httpReq.Header.Set("Authorization", "Bearer "+d.credentials.AccessToken)
client := &http.Client{}
resp, err := client.Do(httpReq)
if err != nil {
return fmt.Errorf("failed to execute rename request: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("rename failed with status %d", resp.StatusCode)
}
var renameResp RenameResponse
if err := json.NewDecoder(resp.Body).Decode(&renameResp); err != nil {
return fmt.Errorf("failed to decode rename response: %w", err)
}
if renameResp.Code != 1000 {
return fmt.Errorf("rename failed with code %d", renameResp.Code)
}
return nil
}
func (d *ProtonDrive) executeMoveAPI(ctx context.Context, linkID string, req MoveRequest) error {
srcLink, _ := d.getLink(ctx, linkID)
if srcLink != nil && srcLink.ParentLinkID == req.ParentLinkID {
return fmt.Errorf("cannot move to same parent directory")
}
moveURL := fmt.Sprintf(d.apiBase+"/drive/v2/volumes/%s/links/%s/move",
d.MainShare.VolumeID, linkID)
reqBody, err := json.Marshal(req)
if err != nil {
return fmt.Errorf("failed to marshal move request: %w", err)
}
httpReq, err := http.NewRequestWithContext(ctx, "PUT", moveURL, bytes.NewReader(reqBody))
if err != nil {
return fmt.Errorf("failed to create HTTP request: %w", err)
}
httpReq.Header.Set("Authorization", "Bearer "+d.credentials.AccessToken)
httpReq.Header.Set("Accept", d.protonJson)
httpReq.Header.Set("X-Pm-Appversion", d.webDriveAV)
httpReq.Header.Set("X-Pm-Drive-Sdk-Version", d.sdkVersion)
httpReq.Header.Set("X-Pm-Uid", d.credentials.UID)
httpReq.Header.Set("Content-Type", "application/json")
client := &http.Client{}
resp, err := client.Do(httpReq)
if err != nil {
return fmt.Errorf("failed to execute move request: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("move failed with status %d", resp.StatusCode)
}
var moveResp RenameResponse
if err := json.NewDecoder(resp.Body).Decode(&moveResp); err != nil {
return fmt.Errorf("failed to decode move response: %w", err)
}
if moveResp.Code != 1000 {
return fmt.Errorf("move operation failed with code: %d", moveResp.Code)
}
return nil
}
func (d *ProtonDrive) DirectMove(ctx context.Context, srcObj model.Obj, dstDir model.Obj) (model.Obj, error) {
//fmt.Printf("DEBUG DirectMove: srcPath=%s, dstPath=%s", srcObj.GetPath(), dstDir.GetPath())
srcLink, err := d.searchByPath(ctx, srcObj.GetPath(), srcObj.IsDir())
if err != nil {
return nil, fmt.Errorf("failed to find source: %w", err)
}
var dstParentLinkID string
if dstDir.GetPath() == "/" {
dstParentLinkID = d.RootLink.LinkID
} else {
dstLink, err := d.searchByPath(ctx, dstDir.GetPath(), true)
if err != nil {
return nil, fmt.Errorf("failed to find destination: %w", err)
}
dstParentLinkID = dstLink.LinkID
}
if srcObj.IsDir() {
// Check if destination is a descendant of source
if err := d.checkCircularMove(ctx, srcLink.LinkID, dstParentLinkID); err != nil {
return nil, err
}
}
// Encrypt the filename for the new location
encryptedName, err := d.encryptFileName(ctx, srcObj.GetName(), dstParentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to encrypt filename: %w", err)
}
newHash, err := d.generateNameHash(ctx, srcObj.GetName(), dstParentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to generate new hash: %w", err)
}
originalHash, err := d.getOriginalNameHash(srcLink)
if err != nil {
return nil, fmt.Errorf("failed to get original hash: %w", err)
}
// Re-encrypt node passphrase for new parent context
reencryptedPassphrase, err := d.reencryptNodePassphrase(ctx, srcLink, dstParentLinkID)
if err != nil {
return nil, fmt.Errorf("failed to re-encrypt node passphrase: %w", err)
}
moveReq := MoveRequest{
ParentLinkID: dstParentLinkID,
NodePassphrase: reencryptedPassphrase,
Name: encryptedName,
NameSignatureEmail: d.MainShare.Creator,
Hash: newHash,
OriginalHash: originalHash,
ContentHash: nil,
// *** Causes rejection ***
/* NodePassphraseSignature: srcLink.NodePassphraseSignature, */
}
err = d.executeMoveAPI(ctx, srcLink.LinkID, moveReq)
if err != nil {
return nil, fmt.Errorf("move API call failed: %w", err)
}
return &model.Object{
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: srcObj.ModTime(),
IsFolder: srcObj.IsDir(),
}, nil
}
func (d *ProtonDrive) reencryptNodePassphrase(ctx context.Context, srcLink *proton.Link, dstParentLinkID string) (string, error) {
// Get source parent link with metadata
srcParentLink, err := d.getLink(ctx, srcLink.ParentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get source parent link: %w", err)
}
// Get source parent keyring using link object
srcParentKR, err := d.getLinkKR(ctx, srcParentLink)
if err != nil {
return "", fmt.Errorf("failed to get source parent keyring: %w", err)
}
// Get destination parent link with metadata
dstParentLink, err := d.getLink(ctx, dstParentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get destination parent link: %w", err)
}
// Get destination parent keyring using link object
dstParentKR, err := d.getLinkKR(ctx, dstParentLink)
if err != nil {
return "", fmt.Errorf("failed to get destination parent keyring: %w", err)
}
// Re-encrypt the node passphrase from source parent context to destination parent context
reencryptedPassphrase, err := reencryptKeyPacket(srcParentKR, dstParentKR, d.DefaultAddrKR, srcLink.NodePassphrase)
if err != nil {
return "", fmt.Errorf("failed to re-encrypt key packet: %w", err)
}
return reencryptedPassphrase, nil
}
func (d *ProtonDrive) generateNameHash(ctx context.Context, name string, parentLinkID string) (string, error) {
parentLink, err := d.getLink(ctx, parentLinkID)
if err != nil {
return "", fmt.Errorf("failed to get parent link: %w", err)
}
// Get parent node keyring
parentNodeKR, err := d.getLinkKR(ctx, parentLink)
if err != nil {
return "", fmt.Errorf("failed to get parent keyring: %w", err)
}
// Get signature verification keyring
signatureVerificationKR, err := d.getSignatureVerificationKeyring([]string{parentLink.SignatureEmail}, parentNodeKR)
if err != nil {
return "", fmt.Errorf("failed to get signature verification keyring: %w", err)
}
parentHashKey, err := parentLink.GetHashKey(parentNodeKR, signatureVerificationKR)
if err != nil {
return "", fmt.Errorf("failed to get parent hash key: %w", err)
}
nameHash, err := proton.GetNameHash(name, parentHashKey)
if err != nil {
return "", fmt.Errorf("failed to generate name hash: %w", err)
}
return nameHash, nil
}
func reencryptKeyPacket(srcKR, dstKR, _ *crypto.KeyRing, passphrase string) (string, error) { // third parameter (addrKR) is intentionally unused
oldSplitMessage, err := crypto.NewPGPSplitMessageFromArmored(passphrase)
if err != nil {
return "", err
}
sessionKey, err := srcKR.DecryptSessionKey(oldSplitMessage.KeyPacket)
if err != nil {
return "", err
}
newKeyPacket, err := dstKR.EncryptSessionKey(sessionKey)
if err != nil {
return "", err
}
newSplitMessage := crypto.NewPGPSplitMessage(newKeyPacket, oldSplitMessage.DataPacket)
return newSplitMessage.GetArmored()
}
func (d *ProtonDrive) checkCircularMove(ctx context.Context, srcLinkID, dstParentLinkID string) error {
currentLinkID := dstParentLinkID
for currentLinkID != "" && currentLinkID != d.RootLink.LinkID {
if currentLinkID == srcLinkID {
return fmt.Errorf("cannot move folder into itself or its subfolder")
}
currentLink, err := d.getLink(ctx, currentLinkID)
if err != nil {
return err
}
currentLinkID = currentLink.ParentLinkID
}
return nil
}
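checkCircularMove walks the parent chain upward from the destination; if the walk reaches the source link before the root, the move would nest a folder inside its own subtree. The same walk over an in-memory parent map (a hypothetical stand-in for the driver's getLink lookups):

```go
package main

import (
	"errors"
	"fmt"
)

// checkCircularMove walks parents upward from dst; meeting src means the
// destination lies inside the source folder's subtree.
func checkCircularMove(parent map[string]string, src, dst, root string) error {
	for cur := dst; cur != "" && cur != root; cur = parent[cur] {
		if cur == src {
			return errors.New("cannot move folder into itself or its subfolder")
		}
	}
	return nil
}

func main() {
	// Tree: root -> a -> b -> c
	parent := map[string]string{"a": "root", "b": "a", "c": "b"}
	fmt.Println(checkCircularMove(parent, "a", "c", "root")) // moving a under c: rejected
	fmt.Println(checkCircularMove(parent, "c", "a", "root")) // moving c under a: allowed
}
```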

View File

@@ -15,6 +15,7 @@ import (
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/cron"
"github.com/alist-org/alist/v3/server/common"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
@@ -32,6 +33,33 @@ type S3 struct {
cron *cron.Cron
}
var storageClassLookup = map[string]string{
"standard": s3.ObjectStorageClassStandard,
"reduced_redundancy": s3.ObjectStorageClassReducedRedundancy,
"glacier": s3.ObjectStorageClassGlacier,
"standard_ia": s3.ObjectStorageClassStandardIa,
"onezone_ia": s3.ObjectStorageClassOnezoneIa,
"intelligent_tiering": s3.ObjectStorageClassIntelligentTiering,
"deep_archive": s3.ObjectStorageClassDeepArchive,
"outposts": s3.ObjectStorageClassOutposts,
"glacier_ir": s3.ObjectStorageClassGlacierIr,
"snow": s3.ObjectStorageClassSnow,
"express_onezone": s3.ObjectStorageClassExpressOnezone,
}
func (d *S3) resolveStorageClass() *string {
value := strings.TrimSpace(d.StorageClass)
if value == "" {
return nil
}
normalized := strings.ToLower(strings.ReplaceAll(value, "-", "_"))
if v, ok := storageClassLookup[normalized]; ok {
return aws.String(v)
}
log.Warnf("s3: unknown storage class %q, using raw value", d.StorageClass)
return aws.String(value)
}
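resolveStorageClass normalizes user input (trim, lower-case, dashes to underscores) before the table lookup, and passes unknown values through raw with a warning. A self-contained sketch of that normalize-then-lookup step (resolve and the trimmed classes table are illustrative; the real table covers all SDK constants):

```go
package main

import (
	"fmt"
	"strings"
)

// classes is a trimmed stand-in for the driver's storageClassLookup table.
var classes = map[string]string{
	"standard":    "STANDARD",
	"standard_ia": "STANDARD_IA",
	"glacier_ir":  "GLACIER_IR",
}

// resolve normalizes the input and reports whether it matched a known alias;
// unknown non-empty values pass through unchanged, as the driver forwards them.
func resolve(input string) (string, bool) {
	v := strings.TrimSpace(input)
	if v == "" {
		return "", false
	}
	norm := strings.ToLower(strings.ReplaceAll(v, "-", "_"))
	if c, ok := classes[norm]; ok {
		return c, true
	}
	return v, false
}

func main() {
	fmt.Println(resolve("Standard-IA")) // alias hit after normalization
	fmt.Println(resolve("ARCHIVE"))     // unknown: raw passthrough
}
```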
func (d *S3) Config() driver.Config {
return d.config
}
@@ -179,8 +207,14 @@ func (d *S3) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up
}),
ContentType: &contentType,
}
if storageClass := d.resolveStorageClass(); storageClass != nil {
input.StorageClass = storageClass
}
_, err := uploader.UploadWithContext(ctx, input)
return err
}
var _ driver.Driver = (*S3)(nil)
var (
_ driver.Driver = (*S3)(nil)
_ driver.Other = (*S3)(nil)
)

View File

@@ -21,6 +21,7 @@ type Addition struct {
ListObjectVersion string `json:"list_object_version" type:"select" options:"v1,v2" default:"v1"`
RemoveBucket bool `json:"remove_bucket" help:"Remove bucket name from path when using custom host."`
AddFilenameToDisposition bool `json:"add_filename_to_disposition" help:"Add filename to Content-Disposition header."`
StorageClass string `json:"storage_class" type:"select" options:",standard,standard_ia,onezone_ia,intelligent_tiering,glacier,glacier_ir,deep_archive,archive" help:"Storage class for new objects. AWS and Tencent COS support different subsets (COS uses ARCHIVE/DEEP_ARCHIVE)."`
}
func init() {

drivers/s3/other.go Normal file
View File

@@ -0,0 +1,286 @@
package s3
import (
"context"
"encoding/json"
"fmt"
"net/url"
"strings"
"time"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/s3"
)
const (
OtherMethodArchive = "archive"
OtherMethodArchiveStatus = "archive_status"
OtherMethodThaw = "thaw"
OtherMethodThawStatus = "thaw_status"
)
type ArchiveRequest struct {
StorageClass string `json:"storage_class"`
}
type ThawRequest struct {
Days int64 `json:"days"`
Tier string `json:"tier"`
}
type ObjectDescriptor struct {
Path string `json:"path"`
Bucket string `json:"bucket"`
Key string `json:"key"`
}
type ArchiveResponse struct {
Action string `json:"action"`
Object ObjectDescriptor `json:"object"`
StorageClass string `json:"storage_class"`
RequestID string `json:"request_id,omitempty"`
VersionID string `json:"version_id,omitempty"`
ETag string `json:"etag,omitempty"`
LastModified string `json:"last_modified,omitempty"`
}
type ThawResponse struct {
Action string `json:"action"`
Object ObjectDescriptor `json:"object"`
RequestID string `json:"request_id,omitempty"`
Status *RestoreStatus `json:"status,omitempty"`
}
type RestoreStatus struct {
Ongoing bool `json:"ongoing"`
Expiry string `json:"expiry,omitempty"`
Raw string `json:"raw"`
}
func (d *S3) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
if args.Obj == nil {
return nil, fmt.Errorf("missing object reference")
}
if args.Obj.IsDir() {
return nil, errs.NotSupport
}
switch strings.ToLower(strings.TrimSpace(args.Method)) {
case "archive":
return d.archive(ctx, args)
case "archive_status":
return d.archiveStatus(ctx, args)
case "thaw":
return d.thaw(ctx, args)
case "thaw_status":
return d.thawStatus(ctx, args)
default:
return nil, errs.NotSupport
}
}
func (d *S3) archive(ctx context.Context, args model.OtherArgs) (interface{}, error) {
key := getKey(args.Obj.GetPath(), false)
payload := ArchiveRequest{}
if err := DecodeOtherArgs(args.Data, &payload); err != nil {
return nil, fmt.Errorf("parse archive request: %w", err)
}
if payload.StorageClass == "" {
return nil, fmt.Errorf("storage_class is required")
}
storageClass := NormalizeStorageClass(payload.StorageClass)
input := &s3.CopyObjectInput{
Bucket: &d.Bucket,
Key: &key,
CopySource: aws.String(url.PathEscape(d.Bucket + "/" + key)),
MetadataDirective: aws.String(s3.MetadataDirectiveCopy),
StorageClass: aws.String(storageClass),
}
copyReq, output := d.client.CopyObjectRequest(input)
copyReq.SetContext(ctx)
if err := copyReq.Send(); err != nil {
return nil, err
}
resp := ArchiveResponse{
Action: "archive",
Object: d.describeObject(args.Obj, key),
StorageClass: storageClass,
RequestID: copyReq.RequestID,
}
if output.VersionId != nil {
resp.VersionID = aws.StringValue(output.VersionId)
}
if result := output.CopyObjectResult; result != nil {
resp.ETag = aws.StringValue(result.ETag)
if result.LastModified != nil {
resp.LastModified = result.LastModified.UTC().Format(time.RFC3339)
}
}
if status, err := d.describeObjectStatus(ctx, key); err == nil {
if status.StorageClass != "" {
resp.StorageClass = status.StorageClass
}
}
return resp, nil
}
func (d *S3) archiveStatus(ctx context.Context, args model.OtherArgs) (interface{}, error) {
	key := getKey(args.Obj.GetPath(), false)
	status, err := d.describeObjectStatus(ctx, key)
	if err != nil {
		return nil, err
	}
	return ArchiveResponse{
		Action:       "archive_status",
		Object:       d.describeObject(args.Obj, key),
		StorageClass: status.StorageClass,
	}, nil
}

func (d *S3) thaw(ctx context.Context, args model.OtherArgs) (interface{}, error) {
	key := getKey(args.Obj.GetPath(), false)
	payload := ThawRequest{Days: 1}
	if err := DecodeOtherArgs(args.Data, &payload); err != nil {
		return nil, fmt.Errorf("parse thaw request: %w", err)
	}
	if payload.Days <= 0 {
		payload.Days = 1
	}
	restoreRequest := &s3.RestoreRequest{
		Days: aws.Int64(payload.Days),
	}
	if tier := NormalizeRestoreTier(payload.Tier); tier != "" {
		restoreRequest.GlacierJobParameters = &s3.GlacierJobParameters{Tier: aws.String(tier)}
	}
	input := &s3.RestoreObjectInput{
		Bucket:         &d.Bucket,
		Key:            &key,
		RestoreRequest: restoreRequest,
	}
	restoreReq, _ := d.client.RestoreObjectRequest(input)
	restoreReq.SetContext(ctx)
	if err := restoreReq.Send(); err != nil {
		return nil, err
	}
	status, _ := d.describeObjectStatus(ctx, key)
	resp := ThawResponse{
		Action:    "thaw",
		Object:    d.describeObject(args.Obj, key),
		RequestID: restoreReq.RequestID,
	}
	if status != nil {
		resp.Status = status.Restore
	}
	return resp, nil
}

func (d *S3) thawStatus(ctx context.Context, args model.OtherArgs) (interface{}, error) {
	key := getKey(args.Obj.GetPath(), false)
	status, err := d.describeObjectStatus(ctx, key)
	if err != nil {
		return nil, err
	}
	return ThawResponse{
		Action: "thaw_status",
		Object: d.describeObject(args.Obj, key),
		Status: status.Restore,
	}, nil
}

func (d *S3) describeObject(obj model.Obj, key string) ObjectDescriptor {
	return ObjectDescriptor{
		Path:   obj.GetPath(),
		Bucket: d.Bucket,
		Key:    key,
	}
}

type objectStatus struct {
	StorageClass string
	Restore      *RestoreStatus
}

func (d *S3) describeObjectStatus(ctx context.Context, key string) (*objectStatus, error) {
	head, err := d.client.HeadObjectWithContext(ctx, &s3.HeadObjectInput{Bucket: &d.Bucket, Key: &key})
	if err != nil {
		return nil, err
	}
	return &objectStatus{
		StorageClass: aws.StringValue(head.StorageClass),
		Restore:      parseRestoreHeader(head.Restore),
	}, nil
}

// parseRestoreHeader parses the x-amz-restore header, e.g.
// `ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"`.
// The RFC1123 expiry date itself contains a comma, so the header cannot
// simply be split on ","; each quoted value is extracted by key instead.
func parseRestoreHeader(header *string) *RestoreStatus {
	if header == nil {
		return nil
	}
	value := strings.TrimSpace(*header)
	if value == "" {
		return nil
	}
	status := &RestoreStatus{Raw: value}
	if v, ok := quotedHeaderValue(value, "ongoing-request="); ok {
		status.Ongoing = v == "true"
	}
	if v, ok := quotedHeaderValue(value, "expiry-date="); ok && v != "" {
		if t, err := time.Parse(time.RFC1123, v); err == nil {
			status.Expiry = t.UTC().Format(time.RFC3339)
		} else {
			status.Expiry = v
		}
	}
	return status
}

// quotedHeaderValue returns the quoted value that follows key, e.g.
// quotedHeaderValue(`ongoing-request="true"`, "ongoing-request=")
// yields ("true", true).
func quotedHeaderValue(header, key string) (string, bool) {
	i := strings.Index(header, key+`"`)
	if i < 0 {
		return "", false
	}
	rest := header[i+len(key)+1:]
	if j := strings.Index(rest, `"`); j >= 0 {
		return rest[:j], true
	}
	return rest, true
}

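The restore-header format can be exercised in isolation. The sketch below is standalone and illustrative (the names `extractQuoted` and the sample header are not part of the driver); it demonstrates why a naive `strings.Split(header, ",")` fails: the RFC1123 expiry date contains a comma of its own.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// extractQuoted pulls the quoted value that follows key in an
// x-amz-restore style header. Splitting the header on "," would cut
// the expiry date in half, so we scan for the quoted span instead.
func extractQuoted(header, key string) string {
	i := strings.Index(header, key+`"`)
	if i < 0 {
		return ""
	}
	rest := header[i+len(key)+1:]
	if j := strings.Index(rest, `"`); j >= 0 {
		return rest[:j]
	}
	return rest
}

func main() {
	h := `ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"`
	fmt.Println(extractQuoted(h, "ongoing-request=")) // false
	expiry := extractQuoted(h, "expiry-date=")
	t, err := time.Parse(time.RFC1123, expiry)
	fmt.Println(err == nil, t.UTC().Format(time.RFC3339)) // true 2012-12-21T00:00:00Z
}
```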
// DecodeOtherArgs re-encodes the generic args.Data payload as JSON and
// decodes it into the typed target struct; a nil payload is a no-op.
func DecodeOtherArgs(data interface{}, target interface{}) error {
	if data == nil {
		return nil
	}
	raw, err := json.Marshal(data)
	if err != nil {
		return err
	}
	return json.Unmarshal(raw, target)
}

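The marshal/unmarshal round-trip means fields absent from the input keep whatever defaults the caller pre-filled on the target (as `thaw` does with `Days: 1`). A minimal standalone sketch (the local `decodeOtherArgs` and `thawRequest` names are illustrative stand-ins):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeOtherArgs mirrors the round-trip above: the generic payload
// (typically map[string]interface{} from a request body) is re-encoded
// as JSON and decoded into a typed struct.
func decodeOtherArgs(data interface{}, target interface{}) error {
	if data == nil {
		return nil
	}
	raw, err := json.Marshal(data)
	if err != nil {
		return err
	}
	return json.Unmarshal(raw, target)
}

type thawRequest struct {
	Days int64  `json:"days"`
	Tier string `json:"tier"`
}

func main() {
	// "days" is absent from the map, so the pre-filled default survives.
	req := thawRequest{Days: 1}
	err := decodeOtherArgs(map[string]interface{}{"tier": "Bulk"}, &req)
	fmt.Println(err, req.Days, req.Tier) // <nil> 1 Bulk
}
```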
// NormalizeStorageClass maps user-supplied storage class names (any
// case, "-" or "_" separators) onto the canonical S3 constants via
// storageClassLookup; unknown values pass through unchanged.
func NormalizeStorageClass(value string) string {
	normalized := strings.ToLower(strings.TrimSpace(strings.ReplaceAll(value, "-", "_")))
	if normalized == "" {
		return value
	}
	if v, ok := storageClassLookup[normalized]; ok {
		return v
	}
	return value
}

// NormalizeRestoreTier maps a user-supplied tier name onto the S3
// restore tiers; empty or "default" means "let S3 pick".
func NormalizeRestoreTier(value string) string {
	switch strings.ToLower(strings.TrimSpace(value)) {
	case "", "default":
		return ""
	case "bulk":
		return s3.TierBulk
	case "standard":
		return s3.TierStandard
	case "expedited":
		return s3.TierExpedited
	default:
		return value
	}
}
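The normalization pattern can be sketched standalone. Note the alias table below is hypothetical, for illustration only; the driver's actual `storageClassLookup` map is defined elsewhere and may cover more classes:

```go
package main

import (
	"fmt"
	"strings"
)

// storageClassAliases is an illustrative lookup table, not the
// driver's real storageClassLookup.
var storageClassAliases = map[string]string{
	"standard":     "STANDARD",
	"standard_ia":  "STANDARD_IA",
	"glacier":      "GLACIER",
	"glacier_ir":   "GLACIER_IR",
	"deep_archive": "DEEP_ARCHIVE",
}

// normalize lower-cases, trims, and canonicalizes "-" to "_" so that
// user input like "Deep-Archive" maps onto the S3 constant; unknown
// values are returned unchanged.
func normalize(value string) string {
	key := strings.ToLower(strings.TrimSpace(strings.ReplaceAll(value, "-", "_")))
	if v, ok := storageClassAliases[key]; ok {
		return v
	}
	return value
}

func main() {
	fmt.Println(normalize(" Deep-Archive ")) // DEEP_ARCHIVE
	fmt.Println(normalize("ONEZONE_IA"))     // ONEZONE_IA
}
```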


@@ -109,13 +109,13 @@ func (d *S3) listV1(prefix string, args model.ListArgs) ([]model.Obj, error) {
 			if !args.S3ShowPlaceholder && (name == getPlaceholderName(d.Placeholder) || name == d.Placeholder) {
 				continue
 			}
-			file := model.Object{
+			file := &model.Object{
 				//Id:       *object.Key,
 				Name:     name,
 				Size:     *object.Size,
 				Modified: *object.LastModified,
 			}
-			files = append(files, &file)
+			files = append(files, model.WrapObjStorageClass(file, aws.StringValue(object.StorageClass)))
 		}
 		if listObjectsResult.IsTruncated == nil {
 			return nil, errors.New("IsTruncated nil")
@@ -164,13 +164,13 @@ func (d *S3) listV2(prefix string, args model.ListArgs) ([]model.Obj, error) {
 			if !args.S3ShowPlaceholder && (name == getPlaceholderName(d.Placeholder) || name == d.Placeholder) {
 				continue
 			}
-			file := model.Object{
+			file := &model.Object{
 				//Id:       *object.Key,
 				Name:     name,
 				Size:     *object.Size,
 				Modified: *object.LastModified,
 			}
-			files = append(files, &file)
+			files = append(files, model.WrapObjStorageClass(file, aws.StringValue(object.StorageClass)))
 		}
 		if !aws.BoolValue(listObjectsResult.IsTruncated) {
 			break
@@ -202,6 +202,9 @@ func (d *S3) copyFile(ctx context.Context, src string, dst string) error {
 		CopySource: aws.String(url.PathEscape(d.Bucket + "/" + srcKey)),
 		Key:        &dstKey,
 	}
+	if storageClass := d.resolveStorageClass(); storageClass != nil {
+		input.StorageClass = storageClass
+	}
 	_, err := d.client.CopyObject(input)
 	return err
 }

go.mod

@@ -3,10 +3,13 @@ module github.com/alist-org/alist/v3
go 1.23.4
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0
github.com/KirCute/ftpserverlib-pasvportmap v1.25.0
github.com/KirCute/sftpd-alist v0.0.12
github.com/ProtonMail/go-crypto v1.0.0
github.com/SheltonZhu/115driver v1.0.34
github.com/ProtonMail/gopenpgp/v2 v2.7.4
github.com/SheltonZhu/115driver v1.1.2
github.com/Xhofe/go-cache v0.0.0-20240804043513-b1a71927bc21
github.com/Xhofe/rateg v0.0.0-20230728072201-251a4e1adad4
github.com/alist-org/gofakes3 v0.0.7
@@ -36,6 +39,8 @@ require (
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3
github.com/hekmon/transmissionrpc/v3 v3.0.0
github.com/henrybear327/Proton-API-Bridge v1.0.0
github.com/henrybear327/go-proton-api v1.0.0
github.com/hirochachacha/go-smb2 v1.1.0
github.com/ipfs/go-ipfs-api v0.7.0
github.com/jlaffaye/ftp v0.2.0
@@ -80,9 +85,19 @@ require (
)
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0 // indirect
github.com/ProtonMail/bcrypt v0.0.0-20211005172633-e235017c1baf // indirect
github.com/ProtonMail/gluon v0.17.1-0.20230724134000-308be39be96e // indirect
github.com/ProtonMail/go-mime v0.0.0-20230322103455-7d82a3887f2f // indirect
github.com/ProtonMail/go-srp v0.0.7 // indirect
github.com/PuerkitoBio/goquery v1.8.1 // indirect
github.com/andybalholm/cascadia v1.3.2 // indirect
github.com/bradenaw/juniper v0.15.2 // indirect
github.com/cronokirby/saferith v0.33.0 // indirect
github.com/emersion/go-message v0.18.0 // indirect
github.com/emersion/go-textwrapper v0.0.0-20200911093747-65d896831594 // indirect
github.com/emersion/go-vcard v0.0.0-20230815062825-8fda7d206ec9 // indirect
github.com/relvacode/iso8601 v1.3.0 // indirect
)
require (
@@ -109,7 +124,6 @@ require (
github.com/ipfs/boxo v0.12.0 // indirect
github.com/jackc/puddle/v2 v2.2.1 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/matoous/go-nanoid/v2 v2.1.0 // indirect
github.com/microcosm-cc/bluemonday v1.0.27
github.com/nwaples/rardecode/v2 v2.0.0-beta.4.0.20241112120701-034e449c6e78
@@ -268,4 +282,8 @@ require (
lukechampine.com/blake3 v1.1.7 // indirect
)
// replace github.com/xhofe/115-sdk-go => ../../xhofe/115-sdk-go
replace github.com/ProtonMail/go-proton-api => github.com/henrybear327/go-proton-api v1.0.0
replace github.com/cronokirby/saferith => github.com/Da3zKi7/saferith v0.33.0-fixed
replace github.com/SheltonZhu/115driver => github.com/okatu-loli/115driver v1.1.2

go.sum

@@ -21,27 +21,50 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0 h1:g0EZJwz7xkXQiZAI5xi9f3WWFYBlX1CPTrR+NDToRkQ=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0/go.mod h1:XCW7KnZet0Opnr7HccfUw1PLc4CjHqpcaxW8DHklNkQ=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0 h1:B/dfvscEQtew9dVuoxqxrUKKv8Ih2f55PydknDamU+g=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0/go.mod h1:fiPSssYvltE08HJchL04dOy+RD4hgrjph0cwGGMntdI=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0 h1:PiSrjRPpkQNjrM8H0WwKMnZUdu1RGMtd/LdGKUrOo+c=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0/go.mod h1:oDrbWx4ewMylP7xHivfgixbfGBT6APAwsSoHRKotnIc=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0 h1:UXT0o77lXQrikd1kgwIPQOUect7EoR/+sbP4wQKdzxM=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0/go.mod h1:cTvi54pg19DoT07ekoeMgE/taAwNtCShVeZqA+Iv2xI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.3.2 h1:kYRSnvJju5gYVyhkij+RTJ/VR6QIUaCfWeaFm2ycsjQ=
github.com/AzureAD/microsoft-authentication-library-for-go v1.3.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/Da3zKi7/saferith v0.33.0-fixed h1:fnIWTk7EP9mZAICf7aQjeoAwpfrlCrkOvqmi6CbWdTk=
github.com/Da3zKi7/saferith v0.33.0-fixed/go.mod h1:QKJhjoqUtBsXCAVEjw38mFqoi7DebT7kthcD7UzbnoA=
github.com/KirCute/ftpserverlib-pasvportmap v1.25.0 h1:ikwCzeqoqN6wvBHOB9OI6dde/jbV7EoTMpUcxtYl5Po=
github.com/KirCute/ftpserverlib-pasvportmap v1.25.0/go.mod h1:v0NgMtKDDi/6CM6r4P+daCljCW3eO9yS+Z+pZDTKo1E=
github.com/KirCute/sftpd-alist v0.0.12 h1:GNVM5QLbQLAfXP4wGUlXFA2IO6fVek0n0IsGnOuISdg=
github.com/KirCute/sftpd-alist v0.0.12/go.mod h1:2wNK7yyW2XfjyJq10OY6xB4COLac64hOwfV6clDJn6s=
github.com/Masterminds/semver/v3 v3.2.0 h1:3MEsd0SM6jqZojhjLWWeBY+Kcjy9i6MQAeY7YgDP83g=
github.com/Masterminds/semver/v3 v3.2.0/go.mod h1:qvl/7zhW3nngYb5+80sSMF+FG2BjYrf8m9wsX0PNOMQ=
github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd h1:nzE1YQBdx1bq9IlZinHa+HVffy+NmVRoKr+wHN8fpLE=
github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd/go.mod h1:C8yoIfvESpM3GD07OCHU7fqI7lhwyZ2Td1rbNbTAhnc=
github.com/ProtonMail/bcrypt v0.0.0-20210511135022-227b4adcab57/go.mod h1:HecWFHognK8GfRDGnFQbW/LiV7A3MX3gZVs45vk5h8I=
github.com/ProtonMail/bcrypt v0.0.0-20211005172633-e235017c1baf h1:yc9daCCYUefEs69zUkSzubzjBbL+cmOXgnmt9Fyd9ug=
github.com/ProtonMail/bcrypt v0.0.0-20211005172633-e235017c1baf/go.mod h1:o0ESU9p83twszAU8LBeJKFAAMX14tISa0yk4Oo5TOqo=
github.com/ProtonMail/gluon v0.17.1-0.20230724134000-308be39be96e h1:lCsqUUACrcMC83lg5rTo9Y0PnPItE61JSfvMyIcANwk=
github.com/ProtonMail/gluon v0.17.1-0.20230724134000-308be39be96e/go.mod h1:Og5/Dz1MiGpCJn51XujZwxiLG7WzvvjE5PRpZBQmAHo=
github.com/ProtonMail/go-crypto v0.0.0-20230321155629-9a39f2531310/go.mod h1:8TI4H3IbrackdNgv+92dI+rhpCaLqM0IfpgCgenFvRE=
github.com/ProtonMail/go-crypto v0.0.0-20230717121422-5aa5874ade95/go.mod h1:EjAoLdwvbIOoOQr3ihjnSoLZRtE8azugULFRteWMNc0=
github.com/ProtonMail/go-crypto v1.0.0 h1:LRuvITjQWX+WIfr930YHG2HNfjR1uOfyf5vE0kC2U78=
github.com/ProtonMail/go-crypto v1.0.0/go.mod h1:EjAoLdwvbIOoOQr3ihjnSoLZRtE8azugULFRteWMNc0=
github.com/ProtonMail/go-mime v0.0.0-20230322103455-7d82a3887f2f h1:tCbYj7/299ekTTXpdwKYF8eBlsYsDVoggDAuAjoK66k=
github.com/ProtonMail/go-mime v0.0.0-20230322103455-7d82a3887f2f/go.mod h1:gcr0kNtGBqin9zDW9GOHcVntrwnjrK+qdJ06mWYBybw=
github.com/ProtonMail/go-srp v0.0.7 h1:Sos3Qk+th4tQR64vsxGIxYpN3rdnG9Wf9K4ZloC1JrI=
github.com/ProtonMail/go-srp v0.0.7/go.mod h1:giCp+7qRnMIcCvI6V6U3S1lDDXDQYx2ewJ6F/9wdlJk=
github.com/ProtonMail/gopenpgp/v2 v2.7.4 h1:Vz/8+HViFFnf2A6XX8JOvZMrA6F5puwNvvF21O1mRlo=
github.com/ProtonMail/gopenpgp/v2 v2.7.4/go.mod h1:IhkNEDaxec6NyzSI0PlxapinnwPVIESk8/76da3Ct3g=
github.com/PuerkitoBio/goquery v1.8.1 h1:uQxhNlArOIdbrH1tr0UXwdVFgDcZDrZVdcpygAcwmWM=
github.com/PuerkitoBio/goquery v1.8.1/go.mod h1:Q8ICL1kNUJ2sXGoAhPGUdYDJvgQgHzJsnnd3H7Ho5jQ=
github.com/RoaringBitmap/roaring v1.9.3 h1:t4EbC5qQwnisr5PrP9nt0IRhRTb9gMUgQF4t4S2OByM=
github.com/RoaringBitmap/roaring v1.9.3/go.mod h1:6AXUsoIEzDTFFQCe1RbGA6uFONMhvejWj5rqITANK90=
github.com/STARRY-S/zip v0.2.1 h1:pWBd4tuSGm3wtpoqRZZ2EAwOmcHK6XFf7bU9qcJXyFg=
github.com/STARRY-S/zip v0.2.1/go.mod h1:xNvshLODWtC4EJ702g7cTYn13G53o1+X9BWnPFpcWV4=
github.com/SheltonZhu/115driver v1.0.34 h1:zhMLp4vgq7GksqvSxQQDOVfK6EOHldQl4b2n8tnZ+EE=
github.com/SheltonZhu/115driver v1.0.34/go.mod h1:rKvNd4Y4OkXv1TMbr/SKjGdcvMQxh6AW5Tw9w0CJb7E=
github.com/Unknwon/goconfig v1.0.0 h1:9IAu/BYbSLQi8puFjUQApZTxIHqSwrj5d8vpP8vTq4A=
github.com/Unknwon/goconfig v1.0.0/go.mod h1:wngxua9XCNjvHjDiTiV26DaKDT+0c63QR6H5hjVUUxw=
github.com/Xhofe/go-cache v0.0.0-20240804043513-b1a71927bc21 h1:h6q5E9aMBhhdqouW81LozVPI1I+Pu6IxL2EKpfm5OjY=
@@ -63,6 +86,9 @@ github.com/andreburgaud/crypt2go v1.8.0/go.mod h1:L5nfShQ91W78hOWhUH2tlGRPO+POAP
github.com/andybalholm/brotli v1.0.4/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
github.com/andybalholm/brotli v1.1.1 h1:PR2pgnyFznKEugtsUo0xLdDop5SKXd5Qf5ysW+7XdTA=
github.com/andybalholm/brotli v1.1.1/go.mod h1:05ib4cKhjx3OQYUY22hTVd34Bc8upXjOLL2rKwwZBoA=
github.com/andybalholm/cascadia v1.3.1/go.mod h1:R4bJ1UQfqADjvDa4P6HZHLh/3OxWWEqc0Sk8XGwHqvA=
github.com/andybalholm/cascadia v1.3.2 h1:3Xi6Dw5lHF15JtdcmAHD3i1+T8plmv7BQ/nsViSLyss=
github.com/andybalholm/cascadia v1.3.2/go.mod h1:7gtRlve5FxPPgIgX36uWBX58OdBsSS6lUvCFb+h7KvU=
github.com/avast/retry-go v3.0.0+incompatible h1:4SOWQ7Qs+oroOTQOYnAHqelpCO0biHSxpiH9JdtuBj0=
github.com/avast/retry-go v3.0.0+incompatible/go.mod h1:XtSnn+n/sHqQIpZ10K1qAevBhOOCWBLXXy3hyiqqBrY=
github.com/aws/aws-sdk-go v1.38.20/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
@@ -128,6 +154,9 @@ github.com/bodgit/windows v1.0.1 h1:tF7K6KOluPYygXa3Z2594zxlkbKPAOvqr97etrGNIz4=
github.com/bodgit/windows v1.0.1/go.mod h1:a6JLwrB4KrTR5hBpp8FI9/9W9jJfeQ2h4XDXU74ZCdM=
github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc h1:biVzkmvwrH8WK8raXaxBx6fRVTlJILwEwQGL1I/ByEI=
github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc/go.mod h1:paBWMcWSl3LHKBqUq+rly7CNSldXjb2rDl3JlRe0mD8=
github.com/bradenaw/juniper v0.15.2 h1:0JdjBGEF2jP1pOxmlNIrPhAoQN7Ng5IMAY5D0PHMW4U=
github.com/bradenaw/juniper v0.15.2/go.mod h1:UX4FX57kVSaDp4TPqvSjkAAewmRFAfXf27BOs5z9dq8=
github.com/bwesterb/go-ristretto v1.2.0/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0=
github.com/bwesterb/go-ristretto v1.2.3/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0=
github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc0=
github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4=
@@ -158,6 +187,7 @@ github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMn
github.com/city404/v6-public-rpc-proto/go v0.0.0-20240817070657-90f8e24b653e h1:GLC8iDDcbt1H8+RkNao2nRGjyNTIo81e1rAJT9/uWYA=
github.com/city404/v6-public-rpc-proto/go v0.0.0-20240817070657-90f8e24b653e/go.mod h1:ln9Whp+wVY/FTbn2SK0ag+SKD2fC0yQCF/Lqowc1LmU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudflare/circl v1.1.0/go.mod h1:prBCrKB9DV4poKZY1l9zBXg2QJY7mvgRvtMxxK7fi4I=
github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUKZrLbUZFA=
github.com/cloudflare/circl v1.3.7 h1:qlCDlTPz2n9fu58M0Nh1J/JzcFpfgkFHHX3O35r5vcU=
github.com/cloudflare/circl v1.3.7/go.mod h1:sRTcRWXGLrKw6yIGJ+l7amYJFfAXbZG0kBSc8r4zxgA=
@@ -172,7 +202,6 @@ github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03V
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3 h1:HVTnpeuvF6Owjd5mniCL8DEXo7uYXdQEmOP4FJbV5tg=
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3/go.mod h1:p1d6YEZWvFzEh4KLyvBcVSnrfNDDvK2zfK/4x2v/4pE=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
@@ -194,6 +223,12 @@ github.com/dsnet/compress v0.0.2-0.20230904184137-39efe44ab707/go.mod h1:qssHWj6
github.com/dsnet/golib v0.0.0-20171103203638-1ea166775780/go.mod h1:Lj+Z9rebOhdfkVLjJ8T6VcRQv3SXugXy999NBtR9aFY=
github.com/dustinxie/ecc v0.0.0-20210511000915-959544187564 h1:I6KUy4CI6hHjqnyJLNCEi7YHVMkwwtfSr2k9splgdSM=
github.com/dustinxie/ecc v0.0.0-20210511000915-959544187564/go.mod h1:yekO+3ZShy19S+bsmnERmznGy9Rfg6dWWWpiGJjNAz8=
github.com/emersion/go-message v0.18.0 h1:7LxAXHRpSeoO/Wom3ZApVZYG7c3d17yCScYce8WiXA8=
github.com/emersion/go-message v0.18.0/go.mod h1:Zi69ACvzaoV/MBnrxfVBPV3xWEuCmC2nEN39oJF4B8A=
github.com/emersion/go-textwrapper v0.0.0-20200911093747-65d896831594 h1:IbFBtwoTQyw0fIM5xv1HF+Y+3ZijDR839WMulgxCcUY=
github.com/emersion/go-textwrapper v0.0.0-20200911093747-65d896831594/go.mod h1:aqO8z8wPrjkscevZJFVE1wXJrLpC5LtJG7fqLOsPb2U=
github.com/emersion/go-vcard v0.0.0-20230815062825-8fda7d206ec9 h1:ATgqloALX6cHCranzkLb8/zjivwQ9DWWDCQRnxTPfaA=
github.com/emersion/go-vcard v0.0.0-20230815062825-8fda7d206ec9/go.mod h1:HMJKR5wlh/ziNp+sHEDV2ltblO4JD2+IdDOWtGcQBTM=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
@@ -334,6 +369,10 @@ github.com/hekmon/cunits/v2 v2.1.0 h1:k6wIjc4PlacNOHwKEMBgWV2/c8jyD4eRMs5mR1BBhI
github.com/hekmon/cunits/v2 v2.1.0/go.mod h1:9r1TycXYXaTmEWlAIfFV8JT+Xo59U96yUJAYHxzii2M=
github.com/hekmon/transmissionrpc/v3 v3.0.0 h1:0Fb11qE0IBh4V4GlOwHNYpqpjcYDp5GouolwrpmcUDQ=
github.com/hekmon/transmissionrpc/v3 v3.0.0/go.mod h1:38SlNhFzinVUuY87wGj3acOmRxeYZAZfrj6Re7UgCDg=
github.com/henrybear327/Proton-API-Bridge v1.0.0 h1:gjKAaWfKu++77WsZTHg6FUyPC5W0LTKWQciUm8PMZb0=
github.com/henrybear327/Proton-API-Bridge v1.0.0/go.mod h1:gunH16hf6U74W2b9CGDaWRadiLICsoJ6KRkSt53zLts=
github.com/henrybear327/go-proton-api v1.0.0 h1:zYi/IbjLwFAW7ltCeqXneUGJey0TN//Xo851a/BgLXw=
github.com/henrybear327/go-proton-api v1.0.0/go.mod h1:w63MZuzufKcIZ93pwRgiOtxMXYafI8H74D77AxytOBc=
github.com/hirochachacha/go-smb2 v1.1.0 h1:b6hs9qKIql9eVXAiN0M2wSFY5xnhbHAQoCwRKbaRTZI=
github.com/hirochachacha/go-smb2 v1.1.0/go.mod h1:8F1A4d5EZzrGu5R7PU163UcMRDJQl4FtcxjBfsY8TZE=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
@@ -398,6 +437,8 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/larksuite/oapi-sdk-go/v3 v3.3.1 h1:DLQQEgHUAGZB6RVlceB1f6A94O206exxW2RIMH+gMUc=
github.com/larksuite/oapi-sdk-go/v3 v3.3.1/go.mod h1:ZEplY+kwuIrj/nqw5uSCINNATcH3KdxSN7y+UxYY5fI=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
@@ -483,6 +524,8 @@ github.com/ncw/swift/v2 v2.0.3 h1:8R9dmgFIWs+RiVlisCEfiQiik1hjuR0JnOkLxaP9ihg=
github.com/ncw/swift/v2 v2.0.3/go.mod h1:cbAO76/ZwcFrFlHdXPjaqWZ9R7Hdar7HpjRXBfbjigk=
github.com/nwaples/rardecode/v2 v2.0.0-beta.4.0.20241112120701-034e449c6e78 h1:MYzLheyVx1tJVDqfu3YnN4jtnyALNzLvwl+f58TcvQY=
github.com/nwaples/rardecode/v2 v2.0.0-beta.4.0.20241112120701-034e449c6e78/go.mod h1:yntwv/HfMc/Hbvtq9I19D1n58te3h6KsqCf3GxyfBGY=
github.com/okatu-loli/115driver v1.1.2 h1:XZT3r/51SZRQGzre2IeA+0/k4T1FneqArdhE4Wd600Q=
github.com/okatu-loli/115driver v1.1.2/go.mod h1:rKvNd4Y4OkXv1TMbr/SKjGdcvMQxh6AW5Tw9w0CJb7E=
github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU=
github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w=
github.com/otiai10/mint v1.5.1 h1:XaPLeE+9vGbuyEHem1JNk3bYc7KKqyI/na0/mLd/Kks=
@@ -492,6 +535,8 @@ github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@@ -520,6 +565,8 @@ github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k
github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo=
github.com/rclone/rclone v1.67.0 h1:yLRNgHEG2vQ60HCuzFqd0hYwKCRuWuvPUhvhMJ2jI5E=
github.com/rclone/rclone v1.67.0/go.mod h1:Cb3Ar47M/SvwfhAjZTbVXdtrP/JLtPFCq2tkdtBVC6w=
github.com/relvacode/iso8601 v1.3.0 h1:HguUjsGpIMh/zsTczGN3DVJFxTU/GX+MMmzcKoMO7ko=
github.com/relvacode/iso8601 v1.3.0/go.mod h1:FlNp+jz+TXpyRqgmM7tnzHHzBnz776kmAH2h3sZCn0I=
github.com/rfjakob/eme v1.1.2 h1:SxziR8msSOElPayZNFfQw4Tjx/Sbaeeh3eRvrHVMUs4=
github.com/rfjakob/eme v1.1.2/go.mod h1:cVvpasglm/G3ngEfcfT/Wt0GwhkuO32pf/poW6Nyk1k=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -649,6 +696,8 @@ go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGX
go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco=
go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI=
go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU=
go.uber.org/goleak v1.2.1 h1:NBol2c7O1ZokfZ0LEU9K6Whx/KnwvepVetCUhtKja4A=
go.uber.org/goleak v1.2.1/go.mod h1:qlT2yGI9QafXHhZZLxlSuNsMw3FFLxBr+tBRlmO1xH4=
go4.org v0.0.0-20230225012048-214862532bf5 h1:nifaUDeh+rPaBCMPMQHZmvJf+QdpLFnuQPwx+LxVmtc=
go4.org v0.0.0-20230225012048-214862532bf5/go.mod h1:F57wTi5Lrj6WLyswp5EYV1ncrEbFGHD4hhz6S1ZYeaU=
gocv.io/x/gocv v0.25.0/go.mod h1:Rar2PS6DV+T4FL+PM535EImD/h13hGVaHhnCu1xarBs=
@@ -726,6 +775,7 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210916014120-12bc252f5db8/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
@@ -734,13 +784,12 @@ golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/net v0.37.0 h1:1zLorHbz+LYj7MQlSf1+2tPIIgibq2eL5xkrGk6f+2c=
golang.org/x/net v0.37.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@@ -784,6 +833,7 @@ golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -796,6 +846,7 @@ golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -812,6 +863,7 @@ golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=


@@ -91,6 +91,7 @@ func InitialSettings() []model.SettingItem {
 	} else {
 		token = random.Token()
 	}
+	defaultRoleID := strconv.Itoa(model.GUEST)
 	initialSettingItems = []model.SettingItem{
 		// site settings
 		{Key: conf.VERSION, Value: conf.Version, Type: conf.TypeString, Group: model.SITE, Flag: model.READONLY},
@@ -103,6 +104,10 @@ func InitialSettings() []model.SettingItem {
 		{Key: conf.AllowIndexed, Value: "false", Type: conf.TypeBool, Group: model.SITE},
 		{Key: conf.AllowMounted, Value: "true", Type: conf.TypeBool, Group: model.SITE},
 		{Key: conf.RobotsTxt, Value: "User-agent: *\nAllow: /", Type: conf.TypeText, Group: model.SITE},
+		{Key: conf.AllowRegister, Value: "false", Type: conf.TypeBool, Group: model.SITE},
+		{Key: conf.DefaultRole, Value: defaultRoleID, Type: conf.TypeSelect, Group: model.SITE},
+		// newui settings
+		{Key: conf.UseNewui, Value: "false", Type: conf.TypeBool, Group: model.SITE},
 		// style settings
 		{Key: conf.Logo, Value: "https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg", Type: conf.TypeText, Group: model.STYLE},
 		{Key: conf.Favicon, Value: "https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg", Type: conf.TypeString, Group: model.STYLE},
@@ -160,6 +165,9 @@ func InitialSettings() []model.SettingItem {
 		{Key: conf.ForwardDirectLinkParams, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL},
 		{Key: conf.IgnoreDirectLinkParams, Value: "sign,alist_ts", Type: conf.TypeString, Group: model.GLOBAL},
 		{Key: conf.WebauthnLoginEnabled, Value: "false", Type: conf.TypeBool, Group: model.GLOBAL, Flag: model.PUBLIC},
+		{Key: conf.MaxDevices, Value: "0", Type: conf.TypeNumber, Group: model.GLOBAL},
+		{Key: conf.DeviceEvictPolicy, Value: "deny", Type: conf.TypeSelect, Options: "deny,evict_oldest", Group: model.GLOBAL},
+		{Key: conf.DeviceSessionTTL, Value: "86400", Type: conf.TypeNumber, Group: model.GLOBAL},
 		// single settings
 		{Key: conf.Token, Value: token, Type: conf.TypeString, Group: model.SINGLE, Flag: model.PRIVATE},


@@ -14,10 +14,14 @@ import (
 func init() {
 	formatter := logrus.TextFormatter{
-		ForceColors:               true,
-		EnvironmentOverrideColors: true,
-		TimestampFormat:           "2006-01-02 15:04:05",
-		FullTimestamp:             true,
+		TimestampFormat: "2006-01-02 15:04:05",
+		FullTimestamp:   true,
 	}
+	if os.Getenv("NO_COLOR") != "" || os.Getenv("ALIST_NO_COLOR") == "1" {
+		formatter.DisableColors = true
+	} else {
+		formatter.ForceColors = true
+		formatter.EnvironmentOverrideColors = true
+	}
 	logrus.SetFormatter(&formatter)
 	utils.Log.SetFormatter(&formatter)
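The hunk above switches from always forcing colors to an opt-out: any non-empty `NO_COLOR` (the no-color.org convention) or `ALIST_NO_COLOR=1` disables colored output. The decision logic can be sketched as a pure predicate (the `colorsDisabled` name is illustrative):

```go
package main

import "fmt"

// colorsDisabled mirrors the env check in the hunk above: a non-empty
// NO_COLOR, or ALIST_NO_COLOR set to exactly "1", turns colors off.
func colorsDisabled(noColor, alistNoColor string) bool {
	return noColor != "" || alistNoColor == "1"
}

func main() {
	fmt.Println(colorsDisabled("", ""))  // false
	fmt.Println(colorsDisabled("1", "")) // true
	fmt.Println(colorsDisabled("", "1")) // true
}
```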


@@ -37,6 +37,18 @@ func InitTaskManager() {
 	if len(tool.TransferTaskManager.GetAll()) == 0 { //prevent offline downloaded files from being deleted
 		CleanTempDir()
 	}
+	workers := conf.Conf.Tasks.S3Transition.Workers
+	if workers < 0 {
+		workers = 0
+	}
+	fs.S3TransitionTaskManager = tache.NewManager[*fs.S3TransitionTask](
+		tache.WithWorks(workers),
+		tache.WithPersistFunction(
+			db.GetTaskDataFunc("s3_transition", conf.Conf.Tasks.S3Transition.TaskPersistant),
+			db.UpdateTaskDataFunc("s3_transition", conf.Conf.Tasks.S3Transition.TaskPersistant),
+		),
+		tache.WithMaxRetry(conf.Conf.Tasks.S3Transition.MaxRetry),
+	)
 	fs.ArchiveDownloadTaskManager = tache.NewManager[*fs.ArchiveDownloadTask](tache.WithWorks(setting.GetInt(conf.TaskDecompressDownloadThreadsNum, conf.Conf.Tasks.Decompress.Workers)), tache.WithPersistFunction(db.GetTaskDataFunc("decompress", conf.Conf.Tasks.Decompress.TaskPersistant), db.UpdateTaskDataFunc("decompress", conf.Conf.Tasks.Decompress.TaskPersistant)), tache.WithMaxRetry(conf.Conf.Tasks.Decompress.MaxRetry))
 	op.RegisterSettingChangingCallback(func() {
 		fs.ArchiveDownloadTaskManager.SetWorkersNumActive(taskFilterNegative(setting.GetInt(conf.TaskDecompressDownloadThreadsNum, conf.Conf.Tasks.Decompress.Workers)))


@@ -60,6 +60,7 @@ type TasksConfig struct {
 	Copy               TaskConfig `json:"copy" envPrefix:"COPY_"`
 	Decompress         TaskConfig `json:"decompress" envPrefix:"DECOMPRESS_"`
 	DecompressUpload   TaskConfig `json:"decompress_upload" envPrefix:"DECOMPRESS_UPLOAD_"`
+	S3Transition       TaskConfig `json:"s3_transition" envPrefix:"S3_TRANSITION_"`
 	AllowRetryCanceled bool       `json:"allow_retry_canceled" env:"ALLOW_RETRY_CANCELED"`
 }
@@ -184,6 +185,11 @@ func DefaultConfig() *Config {
 				Workers:  5,
 				MaxRetry: 2,
 			},
+			S3Transition: TaskConfig{
+				Workers:  5,
+				MaxRetry: 2,
+				// TaskPersistant: true,
+			},
 			AllowRetryCanceled: false,
 		},
 		Cors: Cors{


@@ -10,12 +10,15 @@ const (
 const (
 	// site
-	VERSION      = "version"
-	SiteTitle    = "site_title"
-	Announcement = "announcement"
-	AllowIndexed = "allow_indexed"
-	AllowMounted = "allow_mounted"
-	RobotsTxt    = "robots_txt"
+	VERSION       = "version"
+	SiteTitle     = "site_title"
+	Announcement  = "announcement"
+	AllowIndexed  = "allow_indexed"
+	AllowMounted  = "allow_mounted"
+	RobotsTxt     = "robots_txt"
+	AllowRegister = "allow_register"
+	DefaultRole   = "default_role"
+	UseNewui      = "use_newui"
 
 	Logo    = "logo"
 	Favicon = "favicon"
@@ -45,6 +48,9 @@ const (
 	ForwardDirectLinkParams = "forward_direct_link_params"
 	IgnoreDirectLinkParams  = "ignore_direct_link_params"
 	WebauthnLoginEnabled    = "webauthn_login_enabled"
+	MaxDevices              = "max_devices"
+	DeviceEvictPolicy       = "device_evict_policy"
+	DeviceSessionTTL        = "device_session_ttl"
 	// index
 	SearchIndex = "search_index"
View File

@@ -12,7 +12,7 @@ var db *gorm.DB
func Init(d *gorm.DB) {
db = d
err := AutoMigrate(new(model.Storage), new(model.User), new(model.Meta), new(model.SettingItem), new(model.SearchNode), new(model.TaskItem), new(model.SSHPublicKey), new(model.Role), new(model.Label), new(model.LabelFileBinDing), new(model.ObjFile))
err := AutoMigrate(new(model.Storage), new(model.User), new(model.Meta), new(model.SettingItem), new(model.SearchNode), new(model.TaskItem), new(model.SSHPublicKey), new(model.Role), new(model.Label), new(model.LabelFileBinding), new(model.ObjFile), new(model.Session))
if err != nil {
log.Fatalf("failed migrate database: %s", err.Error())
}

View File

@@ -1,15 +1,18 @@
package db
import (
"fmt"
"github.com/alist-org/alist/v3/internal/model"
"github.com/pkg/errors"
"gorm.io/gorm"
"gorm.io/gorm/clause"
"time"
)
// GetLabelIds returns all label_ids for the given user and file name
func GetLabelIds(userId uint, fileName string) ([]uint, error) {
labelFileBinDingDB := db.Model(&model.LabelFileBinDing{})
//fmt.Printf(">>> [GetLabelIds] userId: %d, fileName: %s\n", userId, fileName)
labelFileBinDingDB := db.Model(&model.LabelFileBinding{})
var labelIds []uint
if err := labelFileBinDingDB.Where("file_name = ?", fileName).Where("user_id = ?", userId).Pluck("label_id", &labelIds).Error; err != nil {
return nil, errors.WithStack(err)
@@ -18,7 +21,7 @@ func GetLabelIds(userId uint, fileName string) ([]uint, error) {
}
func CreateLabelFileBinDing(fileName string, labelId, userId uint) error {
var labelFileBinDing model.LabelFileBinDing
var labelFileBinDing model.LabelFileBinding
labelFileBinDing.UserId = userId
labelFileBinDing.LabelId = labelId
labelFileBinDing.FileName = fileName
@@ -32,7 +35,7 @@ func CreateLabelFileBinDing(fileName string, labelId, userId uint) error {
// GetLabelFileBinDingByLabelIdExists reports whether any binding exists for the given label_id; typically checked before deleting a label
func GetLabelFileBinDingByLabelIdExists(labelId, userId uint) bool {
var labelFileBinDing model.LabelFileBinDing
var labelFileBinDing model.LabelFileBinding
result := db.Where("label_id = ?", labelId).Where("user_id = ?", userId).First(&labelFileBinDing)
exists := !errors.Is(result.Error, gorm.ErrRecordNotFound)
return exists
@@ -40,17 +43,150 @@ func GetLabelFileBinDingByLabelIdExists(labelId, userId uint) bool {
// DelLabelFileBinDingByFileName deletes all label bindings for the given user and file name
func DelLabelFileBinDingByFileName(userId uint, fileName string) error {
return errors.WithStack(db.Where("file_name = ?", fileName).Where("user_id = ?", userId).Delete(model.LabelFileBinDing{}).Error)
return errors.WithStack(db.Where("file_name = ?", fileName).Where("user_id = ?", userId).Delete(model.LabelFileBinding{}).Error)
}
// DelLabelFileBinDingById deletes the binding for the given label, file name, and user
func DelLabelFileBinDingById(labelId, userId uint, fileName string) error {
return errors.WithStack(db.Where("label_id = ?", labelId).Where("file_name = ?", fileName).Where("user_id = ?", userId).Delete(model.LabelFileBinDing{}).Error)
return errors.WithStack(db.Where("label_id = ?", labelId).Where("file_name = ?", fileName).Where("user_id = ?", userId).Delete(model.LabelFileBinding{}).Error)
}
func GetLabelFileBinDingByLabelId(labelIds []uint, userId uint) (result []model.LabelFileBinDing, err error) {
func GetLabelFileBinDingByLabelId(labelIds []uint, userId uint) (result []model.LabelFileBinding, err error) {
if err := db.Where("label_id in (?)", labelIds).Where("user_id = ?", userId).Find(&result).Error; err != nil {
return nil, errors.WithStack(err)
}
return result, nil
}
func GetLabelBindingsByFileNamesPublic(fileNames []string) (map[string][]uint, error) {
var binds []model.LabelFileBinding
if err := db.Where("file_name IN ?", fileNames).Find(&binds).Error; err != nil {
return nil, errors.WithStack(err)
}
out := make(map[string][]uint, len(fileNames))
seen := make(map[string]struct{}, len(binds))
for _, b := range binds {
key := fmt.Sprintf("%s-%d", b.FileName, b.LabelId)
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
out[b.FileName] = append(out[b.FileName], b.LabelId)
}
return out, nil
}
func GetLabelsByFileNamesPublic(fileNames []string) (map[string][]model.Label, error) {
bindMap, err := GetLabelBindingsByFileNamesPublic(fileNames)
if err != nil {
return nil, err
}
idSet := make(map[uint]struct{})
for _, ids := range bindMap {
for _, id := range ids {
idSet[id] = struct{}{}
}
}
if len(idSet) == 0 {
return make(map[string][]model.Label, 0), nil
}
allIDs := make([]uint, 0, len(idSet))
for id := range idSet {
allIDs = append(allIDs, id)
}
labels, err := GetLabelByIds(allIDs) // existing helper
if err != nil {
return nil, err
}
labelByID := make(map[uint]model.Label, len(labels))
for _, l := range labels {
labelByID[l.ID] = l
}
out := make(map[string][]model.Label, len(bindMap))
for fname, ids := range bindMap {
for _, id := range ids {
if lab, ok := labelByID[id]; ok {
out[fname] = append(out[fname], lab)
}
}
}
return out, nil
}
func ListLabelFileBinDing(userId uint, labelIDs []uint, fileName string, page, pageSize int) ([]model.LabelFileBinding, int64, error) {
q := db.Model(&model.LabelFileBinding{}).Where("user_id = ?", userId)
if len(labelIDs) > 0 {
q = q.Where("label_id IN ?", labelIDs)
}
if fileName != "" {
q = q.Where("file_name LIKE ?", "%"+fileName+"%")
}
var total int64
if err := q.Count(&total).Error; err != nil {
return nil, 0, errors.WithStack(err)
}
var rows []model.LabelFileBinding
if err := q.
Order("id DESC").
Offset((page - 1) * pageSize).
Limit(pageSize).
Find(&rows).Error; err != nil {
return nil, 0, errors.WithStack(err)
}
return rows, total, nil
}
func RestoreLabelFileBindings(bindings []model.LabelFileBinding, keepIDs bool, override bool) error {
if len(bindings) == 0 {
return nil
}
tx := db.Begin()
if override {
type key struct {
uid uint
name string
}
toDel := make(map[key]struct{}, len(bindings))
for i := range bindings {
k := key{uid: bindings[i].UserId, name: bindings[i].FileName}
toDel[k] = struct{}{}
}
for k := range toDel {
if err := tx.Where("user_id = ? AND file_name = ?", k.uid, k.name).
Delete(&model.LabelFileBinding{}).Error; err != nil {
tx.Rollback()
return errors.WithStack(err)
}
}
}
for i := range bindings {
b := bindings[i]
if !keepIDs {
b.ID = 0
}
if b.CreateTime.IsZero() {
b.CreateTime = time.Now()
}
if override {
if err := tx.Create(&b).Error; err != nil {
tx.Rollback()
return errors.WithStack(err)
}
} else {
if err := tx.Clauses(clause.OnConflict{DoNothing: true}).Create(&b).Error; err != nil {
tx.Rollback()
return errors.WithStack(err)
}
}
}
return errors.WithStack(tx.Commit().Error)
}
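The composite-key de-duplication used by `GetLabelBindingsByFileNamesPublic` above can be sketched in isolation. This is a minimal standalone sketch; `binding` is an illustrative stand-in for `model.LabelFileBinding`:

```go
package main

import "fmt"

type binding struct {
	FileName string
	LabelId  uint
}

// dedupBindings collapses duplicate (file_name, label_id) rows using a
// composite "<file>-<id>" key, preserving first-seen order per file.
func dedupBindings(binds []binding) map[string][]uint {
	out := make(map[string][]uint)
	seen := make(map[string]struct{}, len(binds))
	for _, b := range binds {
		key := fmt.Sprintf("%s-%d", b.FileName, b.LabelId)
		if _, ok := seen[key]; ok {
			continue
		}
		seen[key] = struct{}{}
		out[b.FileName] = append(out[b.FileName], b.LabelId)
	}
	return out
}

func main() {
	m := dedupBindings([]binding{{"a.txt", 1}, {"a.txt", 1}, {"a.txt", 2}})
	fmt.Println(m["a.txt"]) // [1 2]
}
```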

View File

@@ -34,12 +34,36 @@ func GetRoles(pageIndex, pageSize int) (roles []model.Role, count int64, err err
return roles, count, nil
}
func GetAllRoles() ([]model.Role, error) {
var roles []model.Role
if err := db.Find(&roles).Error; err != nil {
return nil, errors.WithStack(err)
}
return roles, nil
}
func CreateRole(r *model.Role) error {
return errors.WithStack(db.Create(r).Error)
if err := db.Create(r).Error; err != nil {
return errors.WithStack(err)
}
if r.Default {
if err := db.Model(&model.Role{}).Where("id <> ?", r.ID).Update("default", false).Error; err != nil {
return errors.WithStack(err)
}
}
return nil
}
func UpdateRole(r *model.Role) error {
return errors.WithStack(db.Save(r).Error)
if err := db.Save(r).Error; err != nil {
return errors.WithStack(err)
}
if r.Default {
if err := db.Model(&model.Role{}).Where("id <> ?", r.ID).Update("default", false).Error; err != nil {
return errors.WithStack(err)
}
}
return nil
}
func DeleteRole(id uint) error {

internal/db/session.go Normal file
View File

@@ -0,0 +1,69 @@
package db
import (
"github.com/alist-org/alist/v3/internal/model"
"github.com/pkg/errors"
"gorm.io/gorm/clause"
)
func GetSession(userID uint, deviceKey string) (*model.Session, error) {
s := model.Session{UserID: userID, DeviceKey: deviceKey}
if err := db.Select("user_id, device_key, last_active, status, user_agent, ip").Where(&s).First(&s).Error; err != nil {
return nil, errors.Wrap(err, "failed find session")
}
return &s, nil
}
func CreateSession(s *model.Session) error {
return errors.WithStack(db.Create(s).Error)
}
func UpsertSession(s *model.Session) error {
return errors.WithStack(db.Clauses(clause.OnConflict{UpdateAll: true}).Create(s).Error)
}
func DeleteSession(userID uint, deviceKey string) error {
return errors.WithStack(db.Where("user_id = ? AND device_key = ?", userID, deviceKey).Delete(&model.Session{}).Error)
}
func CountActiveSessionsByUser(userID uint) (int64, error) {
var count int64
err := db.Model(&model.Session{}).
Where("user_id = ? AND status = ?", userID, model.SessionActive).
Count(&count).Error
return count, errors.WithStack(err)
}
func DeleteSessionsBefore(ts int64) error {
return errors.WithStack(db.Where("last_active < ?", ts).Delete(&model.Session{}).Error)
}
// GetOldestActiveSession returns the oldest active session for the specified user.
func GetOldestActiveSession(userID uint) (*model.Session, error) {
var s model.Session
if err := db.Where("user_id = ? AND status = ?", userID, model.SessionActive).
Order("last_active ASC").First(&s).Error; err != nil {
return nil, errors.Wrap(err, "failed get oldest active session")
}
return &s, nil
}
func UpdateSessionLastActive(userID uint, deviceKey string, lastActive int64) error {
return errors.WithStack(db.Model(&model.Session{}).Where("user_id = ? AND device_key = ?", userID, deviceKey).Update("last_active", lastActive).Error)
}
func ListSessionsByUser(userID uint) ([]model.Session, error) {
var sessions []model.Session
err := db.Select("user_id, device_key, last_active, status, user_agent, ip").Where("user_id = ? AND status = ?", userID, model.SessionActive).Find(&sessions).Error
return sessions, errors.WithStack(err)
}
func ListSessions() ([]model.Session, error) {
var sessions []model.Session
err := db.Select("user_id, device_key, last_active, status, user_agent, ip").Where("status = ?", model.SessionActive).Find(&sessions).Error
return sessions, errors.WithStack(err)
}
func MarkInactive(sessionID string) error {
return errors.WithStack(db.Model(&model.Session{}).Where("device_key = ?", sessionID).Update("status", model.SessionInactive).Error)
}

View File

@@ -2,12 +2,14 @@ package db
import (
"encoding/base64"
"fmt"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-webauthn/webauthn/webauthn"
"github.com/pkg/errors"
"gorm.io/gorm"
"path"
"slices"
"strings"
)
@@ -24,6 +26,20 @@ func GetUserByRole(role int) (*model.User, error) {
return nil, gorm.ErrRecordNotFound
}
func GetUsersByRole(roleID int) ([]model.User, error) {
var users []model.User
if err := db.Find(&users).Error; err != nil {
return nil, err
}
var result []model.User
for _, u := range users {
if slices.Contains(u.Role, roleID) {
result = append(result, u)
}
}
return result, nil
}
func GetUserByName(username string) (*model.User, error) {
user := model.User{Username: username}
if err := db.Where(user).First(&user).Error; err != nil {
@@ -67,6 +83,14 @@ func GetUsers(pageIndex, pageSize int) (users []model.User, count int64, err err
return users, count, nil
}
func GetAllUsers() ([]model.User, error) {
var users []model.User
if err := db.Find(&users).Error; err != nil {
return nil, errors.WithStack(err)
}
return users, nil
}
func DeleteUserById(id uint) error {
return errors.WithStack(db.Delete(&model.User{}, id).Error)
}
@@ -108,25 +132,29 @@ func RemoveAuthn(u *model.User, id string) error {
return UpdateAuthn(u.ID, string(res))
}
func UpdateUserBasePathPrefix(oldPath, newPath string) ([]string, error) {
func UpdateUserBasePathPrefix(oldPath, newPath string, usersOpt ...[]model.User) ([]string, error) {
var users []model.User
var modifiedUsernames []string
if err := db.Find(&users).Error; err != nil {
return nil, errors.WithMessage(err, "failed to load users")
}
oldPathClean := path.Clean(oldPath)
if len(usersOpt) > 0 {
users = usersOpt[0]
} else {
if err := db.Find(&users).Error; err != nil {
return nil, errors.WithMessage(err, "failed to load users")
}
}
for _, user := range users {
basePath := path.Clean(user.BasePath)
updated := false
if basePath == oldPathClean {
user.BasePath = newPath
user.BasePath = path.Clean(newPath)
updated = true
} else if strings.HasPrefix(basePath, oldPathClean+"/") {
user.BasePath = newPath + basePath[len(oldPathClean):]
user.BasePath = path.Clean(newPath + basePath[len(oldPathClean):])
updated = true
}
@@ -140,3 +168,13 @@ func UpdateUserBasePathPrefix(oldPath, newPath string) ([]string, error) {
return modifiedUsernames, nil
}
func CountUsersByRoleAndEnabledExclude(roleID uint, excludeUserID uint) (int64, error) {
var count int64
jsonValue := fmt.Sprintf("[%d]", roleID)
err := db.Model(&model.User{}).
Where("disabled = ? AND id != ?", false, excludeUserID).
Where("JSON_CONTAINS(role, ?)", jsonValue).
Count(&count).Error
return count, err
}
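The per-user prefix rewrite that `UpdateUserBasePathPrefix` applies above (exact match replaced outright, descendants keep their suffix, everything cleaned) can be sketched as a pure function. Names here are illustrative, not the repository's API:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// rewriteBasePath returns the updated base path and whether it changed.
func rewriteBasePath(basePath, oldPath, newPath string) (string, bool) {
	base := path.Clean(basePath)
	old := path.Clean(oldPath)
	if base == old {
		// Exact match: replace outright.
		return path.Clean(newPath), true
	}
	if strings.HasPrefix(base, old+"/") {
		// Descendant: keep the suffix after the old prefix.
		return path.Clean(newPath + base[len(old):]), true
	}
	return basePath, false
}

func main() {
	p, ok := rewriteBasePath("/data/team/a", "/data/team", "/srv/team")
	fmt.Println(p, ok) // /srv/team/a true
}
```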

internal/device/session.go Normal file
View File

@@ -0,0 +1,138 @@
package device
import (
"time"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/db"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/pkg/errors"
"gorm.io/gorm"
)
// Handle verifies device sessions for a user and upserts current session.
func Handle(userID uint, deviceKey, ua, ip string) error {
ttl := setting.GetInt(conf.DeviceSessionTTL, 86400)
if ttl > 0 {
_ = db.DeleteSessionsBefore(time.Now().Unix() - int64(ttl))
}
ip = utils.MaskIP(ip)
now := time.Now().Unix()
sess, err := db.GetSession(userID, deviceKey)
if err == nil {
if sess.Status == model.SessionInactive {
return errors.WithStack(errs.SessionInactive)
}
sess.Status = model.SessionActive
sess.LastActive = now
sess.UserAgent = ua
sess.IP = ip
return db.UpsertSession(sess)
}
if err != nil && !errors.Is(err, gorm.ErrRecordNotFound) {
return err
}
max := setting.GetInt(conf.MaxDevices, 0)
if max > 0 {
count, err := db.CountActiveSessionsByUser(userID)
if err != nil {
return err
}
if count >= int64(max) {
policy := setting.GetStr(conf.DeviceEvictPolicy, "deny")
if policy == "evict_oldest" {
if oldest, err := db.GetOldestActiveSession(userID); err == nil {
if err := db.MarkInactive(oldest.DeviceKey); err != nil {
return err
}
}
} else {
return errors.WithStack(errs.TooManyDevices)
}
}
}
s := &model.Session{UserID: userID, DeviceKey: deviceKey, UserAgent: ua, IP: ip, LastActive: now, Status: model.SessionActive}
return db.CreateSession(s)
}
// EnsureActiveOnLogin is used only in login flow:
// - If session exists (even Inactive): reactivate and refresh fields.
// - If not exists: apply max-devices policy, then create Active session.
func EnsureActiveOnLogin(userID uint, deviceKey, ua, ip string) error {
ip = utils.MaskIP(ip)
now := time.Now().Unix()
sess, err := db.GetSession(userID, deviceKey)
if err == nil {
if sess.Status == model.SessionInactive {
max := setting.GetInt(conf.MaxDevices, 0)
if max > 0 {
count, err := db.CountActiveSessionsByUser(userID)
if err != nil {
return err
}
if count >= int64(max) {
policy := setting.GetStr(conf.DeviceEvictPolicy, "deny")
if policy == "evict_oldest" {
if oldest, gerr := db.GetOldestActiveSession(userID); gerr == nil {
if err := db.MarkInactive(oldest.DeviceKey); err != nil {
return err
}
}
} else {
return errors.WithStack(errs.TooManyDevices)
}
}
}
}
sess.Status = model.SessionActive
sess.LastActive = now
sess.UserAgent = ua
sess.IP = ip
return db.UpsertSession(sess)
}
if err != nil && !errors.Is(err, gorm.ErrRecordNotFound) {
return err
}
max := setting.GetInt(conf.MaxDevices, 0)
if max > 0 {
count, err := db.CountActiveSessionsByUser(userID)
if err != nil {
return err
}
if count >= int64(max) {
policy := setting.GetStr(conf.DeviceEvictPolicy, "deny")
if policy == "evict_oldest" {
if oldest, gerr := db.GetOldestActiveSession(userID); gerr == nil {
if err := db.MarkInactive(oldest.DeviceKey); err != nil {
return err
}
}
} else {
return errors.WithStack(errs.TooManyDevices)
}
}
}
return db.CreateSession(&model.Session{
UserID: userID,
DeviceKey: deviceKey,
UserAgent: ua,
IP: ip,
LastActive: now,
Status: model.SessionActive,
})
}
// Refresh updates last_active for the session.
func Refresh(userID uint, deviceKey string) {
_ = db.UpdateSessionLastActive(userID, deviceKey, time.Now().Unix())
}

internal/errs/device.go Normal file
View File

@@ -0,0 +1,8 @@
package errs
import "errors"
var (
TooManyDevices = errors.New("too many active devices")
SessionInactive = errors.New("session inactive")
)

View File

@@ -4,4 +4,5 @@ import "errors"
var (
EmptyToken = errors.New("empty token")
LinkIsDir = errors.New("link is dir")
)

View File

@@ -3,5 +3,5 @@ package errs
import "errors"
var (
ErrChangeDefaultRole = errors.New("cannot modify admin or guest role")
ErrChangeDefaultRole = errors.New("cannot modify admin role")
)

View File

@@ -2,10 +2,15 @@ package fs
import (
"context"
"encoding/json"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/drivers/s3"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/task"
"github.com/pkg/errors"
)
@@ -53,6 +58,38 @@ func other(ctx context.Context, args model.FsOtherArgs) (interface{}, error) {
if err != nil {
return nil, errors.WithMessage(err, "failed get storage")
}
originalPath := args.Path
if _, ok := storage.(*s3.S3); ok {
method := strings.ToLower(strings.TrimSpace(args.Method))
if method == s3.OtherMethodArchive || method == s3.OtherMethodThaw {
if S3TransitionTaskManager == nil {
return nil, errors.New("s3 transition task manager is not initialized")
}
var payload json.RawMessage
if args.Data != nil {
raw, err := json.Marshal(args.Data)
if err != nil {
return nil, errors.WithMessage(err, "failed to encode request payload")
}
payload = raw
}
taskCreator, _ := ctx.Value("user").(*model.User)
tsk := &S3TransitionTask{
TaskExtension: task.TaskExtension{Creator: taskCreator},
status: "queued",
StorageMountPath: storage.GetStorage().MountPath,
ObjectPath: actualPath,
DisplayPath: originalPath,
ObjectName: stdpath.Base(actualPath),
Transition: method,
Payload: payload,
}
S3TransitionTaskManager.Add(tsk)
return map[string]string{"task_id": tsk.GetID()}, nil
}
}
args.Path = actualPath
return op.Other(ctx, storage, args)
}

View File

@@ -0,0 +1,310 @@
package fs
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/s3"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/task"
"github.com/pkg/errors"
"github.com/xhofe/tache"
)
const s3TransitionPollInterval = 15 * time.Second
// S3TransitionTask represents an asynchronous S3 archive/thaw request that is
// tracked via the task manager so that clients can monitor the progress of the
// operation.
type S3TransitionTask struct {
task.TaskExtension
status string
StorageMountPath string `json:"storage_mount_path"`
ObjectPath string `json:"object_path"`
DisplayPath string `json:"display_path"`
ObjectName string `json:"object_name"`
Transition string `json:"transition"`
Payload json.RawMessage `json:"payload,omitempty"`
TargetStorageClass string `json:"target_storage_class,omitempty"`
RequestID string `json:"request_id,omitempty"`
VersionID string `json:"version_id,omitempty"`
storage driver.Driver `json:"-"`
}
// S3TransitionTaskManager holds asynchronous S3 archive/thaw tasks.
var S3TransitionTaskManager *tache.Manager[*S3TransitionTask]
var _ task.TaskExtensionInfo = (*S3TransitionTask)(nil)
func (t *S3TransitionTask) GetName() string {
action := strings.ToLower(t.Transition)
if action == "" {
action = "transition"
}
display := t.DisplayPath
if display == "" {
display = t.ObjectPath
}
if display == "" {
display = t.ObjectName
}
return fmt.Sprintf("s3 %s %s", action, display)
}
func (t *S3TransitionTask) GetStatus() string {
return t.status
}
func (t *S3TransitionTask) Run() error {
t.ReinitCtx()
t.ClearEndTime()
start := time.Now()
t.SetStartTime(start)
defer func() { t.SetEndTime(time.Now()) }()
if err := t.ensureStorage(); err != nil {
t.status = fmt.Sprintf("locate storage failed: %v", err)
return err
}
payload, err := t.decodePayload()
if err != nil {
t.status = fmt.Sprintf("decode payload failed: %v", err)
return err
}
method := strings.ToLower(strings.TrimSpace(t.Transition))
switch method {
case s3.OtherMethodArchive:
t.status = "submitting archive request"
t.SetProgress(0)
resp, err := op.Other(t.Ctx(), t.storage, model.FsOtherArgs{
Path: t.ObjectPath,
Method: s3.OtherMethodArchive,
Data: payload,
})
if err != nil {
t.status = fmt.Sprintf("archive request failed: %v", err)
return err
}
archiveResp, ok := toArchiveResponse(resp)
if ok {
if t.TargetStorageClass == "" {
t.TargetStorageClass = archiveResp.StorageClass
}
t.RequestID = archiveResp.RequestID
t.VersionID = archiveResp.VersionID
if archiveResp.StorageClass != "" {
t.status = fmt.Sprintf("archive requested, waiting for %s", archiveResp.StorageClass)
} else {
t.status = "archive requested"
}
} else if sc := t.extractTargetStorageClass(); sc != "" {
t.TargetStorageClass = sc
t.status = fmt.Sprintf("archive requested, waiting for %s", sc)
} else {
t.status = "archive requested"
}
if t.TargetStorageClass != "" {
t.TargetStorageClass = s3.NormalizeStorageClass(t.TargetStorageClass)
}
t.SetProgress(25)
return t.waitForArchive()
case s3.OtherMethodThaw:
t.status = "submitting thaw request"
t.SetProgress(0)
resp, err := op.Other(t.Ctx(), t.storage, model.FsOtherArgs{
Path: t.ObjectPath,
Method: s3.OtherMethodThaw,
Data: payload,
})
if err != nil {
t.status = fmt.Sprintf("thaw request failed: %v", err)
return err
}
thawResp, ok := toThawResponse(resp)
if ok {
t.RequestID = thawResp.RequestID
if thawResp.Status != nil && !thawResp.Status.Ongoing {
t.SetProgress(100)
t.status = thawCompletionMessage(thawResp.Status)
return nil
}
}
t.status = "thaw requested"
t.SetProgress(25)
return t.waitForThaw()
default:
return errors.Errorf("unsupported transition method: %s", t.Transition)
}
}
func (t *S3TransitionTask) ensureStorage() error {
if t.storage != nil {
return nil
}
storage, err := op.GetStorageByMountPath(t.StorageMountPath)
if err != nil {
return err
}
t.storage = storage
return nil
}
func (t *S3TransitionTask) decodePayload() (interface{}, error) {
if len(t.Payload) == 0 {
return nil, nil
}
var payload interface{}
if err := json.Unmarshal(t.Payload, &payload); err != nil {
return nil, err
}
return payload, nil
}
func (t *S3TransitionTask) extractTargetStorageClass() string {
if len(t.Payload) == 0 {
return ""
}
var req s3.ArchiveRequest
if err := json.Unmarshal(t.Payload, &req); err != nil {
return ""
}
return s3.NormalizeStorageClass(req.StorageClass)
}
func (t *S3TransitionTask) waitForArchive() error {
ticker := time.NewTicker(s3TransitionPollInterval)
defer ticker.Stop()
ctx := t.Ctx()
for {
select {
case <-ctx.Done():
t.status = "archive canceled"
return ctx.Err()
case <-ticker.C:
resp, err := op.Other(ctx, t.storage, model.FsOtherArgs{
Path: t.ObjectPath,
Method: s3.OtherMethodArchiveStatus,
})
if err != nil {
t.status = fmt.Sprintf("archive status error: %v", err)
return err
}
archiveResp, ok := toArchiveResponse(resp)
if !ok {
t.status = fmt.Sprintf("unexpected archive status response: %T", resp)
return errors.Errorf("unexpected archive status response: %T", resp)
}
currentClass := strings.TrimSpace(archiveResp.StorageClass)
target := strings.TrimSpace(t.TargetStorageClass)
if target == "" {
target = currentClass
t.TargetStorageClass = currentClass
}
if currentClass == "" {
t.status = "waiting for storage class update"
t.SetProgress(50)
continue
}
if strings.EqualFold(currentClass, target) {
t.SetProgress(100)
t.status = fmt.Sprintf("archive complete (%s)", currentClass)
return nil
}
t.status = fmt.Sprintf("storage class %s (target %s)", currentClass, target)
t.SetProgress(75)
}
}
}
func (t *S3TransitionTask) waitForThaw() error {
ticker := time.NewTicker(s3TransitionPollInterval)
defer ticker.Stop()
ctx := t.Ctx()
for {
select {
case <-ctx.Done():
t.status = "thaw canceled"
return ctx.Err()
case <-ticker.C:
resp, err := op.Other(ctx, t.storage, model.FsOtherArgs{
Path: t.ObjectPath,
Method: s3.OtherMethodThawStatus,
})
if err != nil {
t.status = fmt.Sprintf("thaw status error: %v", err)
return err
}
thawResp, ok := toThawResponse(resp)
if !ok {
t.status = fmt.Sprintf("unexpected thaw status response: %T", resp)
return errors.Errorf("unexpected thaw status response: %T", resp)
}
status := thawResp.Status
if status == nil {
t.status = "waiting for thaw status"
t.SetProgress(50)
continue
}
if status.Ongoing {
t.status = fmt.Sprintf("thaw in progress (%s)", status.Raw)
t.SetProgress(75)
continue
}
t.SetProgress(100)
t.status = thawCompletionMessage(status)
return nil
}
}
}
func thawCompletionMessage(status *s3.RestoreStatus) string {
if status == nil {
return "thaw complete"
}
if status.Expiry != "" {
return fmt.Sprintf("thaw complete, expires %s", status.Expiry)
}
return "thaw complete"
}
func toArchiveResponse(v interface{}) (s3.ArchiveResponse, bool) {
switch resp := v.(type) {
case s3.ArchiveResponse:
return resp, true
case *s3.ArchiveResponse:
if resp != nil {
return *resp, true
}
}
return s3.ArchiveResponse{}, false
}
func toThawResponse(v interface{}) (s3.ThawResponse, bool) {
switch resp := v.(type) {
case s3.ThawResponse:
return resp, true
case *s3.ThawResponse:
if resp != nil {
return *resp, true
}
}
return s3.ThawResponse{}, false
}
// Ensure compatibility with persistence when tasks are restored.
func (t *S3TransitionTask) OnRestore() {
// The storage handle is not persisted intentionally; it will be lazily
// re-fetched on the next Run invocation.
t.storage = nil
}
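The value-or-pointer normalization that `toArchiveResponse` and `toThawResponse` above perform (since `op.Other` may hand back either `T` or `*T`) can be shown in a minimal sketch; `archiveResponse` is an illustrative stand-in for `s3.ArchiveResponse`:

```go
package main

import "fmt"

type archiveResponse struct{ StorageClass string }

// toArchive folds both the value and non-nil pointer cases into a value;
// nil pointers and unrelated types report false.
func toArchive(v interface{}) (archiveResponse, bool) {
	switch resp := v.(type) {
	case archiveResponse:
		return resp, true
	case *archiveResponse:
		if resp != nil {
			return *resp, true
		}
	}
	return archiveResponse{}, false
}

func main() {
	r, ok := toArchive(&archiveResponse{StorageClass: "GLACIER"})
	fmt.Println(r.StorageClass, ok) // GLACIER true
	_, ok = toArchive((*archiveResponse)(nil))
	fmt.Println(ok) // false
	_, ok = toArchive(42)
	fmt.Println(ok) // false
}
```

Note the explicit nil check: a typed nil pointer still matches the `*archiveResponse` case, so dereferencing without the guard would panic.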

View File

@@ -2,7 +2,7 @@ package model
import "time"
type LabelFileBinDing struct {
type LabelFileBinding struct {
ID uint `json:"id" gorm:"primaryKey"` // unique key
UserId uint `json:"user_id"` // use to user_id
LabelId uint `json:"label_id"` // use to label_id

View File

@@ -20,6 +20,10 @@ type ObjUnwrap interface {
Unwrap() Obj
}
type StorageClassProvider interface {
StorageClass() string
}
type Obj interface {
GetSize() int64
GetName() string
@@ -55,6 +59,21 @@ type FileStreamer interface {
type UpdateProgress func(percentage float64)
// Reference implementation from OpenListTeam:
// https://github.com/OpenListTeam/OpenList/blob/a703b736c9346c483bae56905a39bc07bf781cff/internal/model/obj.go#L58
func UpdateProgressWithRange(inner UpdateProgress, start, end float64) UpdateProgress {
return func(p float64) {
if p < 0 {
p = 0
}
if p > 100 {
p = 100
}
scaled := start + (end-start)*(p/100.0)
inner(scaled)
}
}
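`UpdateProgressWithRange` above clamps a sub-task's 0–100% and rescales it into a window of the parent task's progress. A self-contained sketch of the same logic, with a usage example:

```go
package main

import "fmt"

type UpdateProgress func(percentage float64)

// UpdateProgressWithRange clamps p to [0, 100] and linearly maps it into
// [start, end] before forwarding to the inner callback.
func UpdateProgressWithRange(inner UpdateProgress, start, end float64) UpdateProgress {
	return func(p float64) {
		if p < 0 {
			p = 0
		}
		if p > 100 {
			p = 100
		}
		inner(start + (end-start)*(p/100.0))
	}
}

func main() {
	var last float64
	record := func(p float64) { last = p }
	// Map a sub-task's 0–100% into the 25–75% window of the parent task.
	sub := UpdateProgressWithRange(record, 25, 75)
	sub(50)
	fmt.Println(last) // 50
}
```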
type URL interface {
URL() string
}
@@ -126,6 +145,13 @@ func WrapObjsName(objs []Obj) {
}
}
func WrapObjStorageClass(obj Obj, storageClass string) Obj {
if storageClass == "" {
return obj
}
return &ObjWrapStorageClass{Obj: obj, storageClass: storageClass}
}
func UnwrapObj(obj Obj) Obj {
if unwrap, ok := obj.(ObjUnwrap); ok {
obj = unwrap.Unwrap()
@@ -153,6 +179,20 @@ func GetUrl(obj Obj) (url string, ok bool) {
return url, false
}
func GetStorageClass(obj Obj) (string, bool) {
if provider, ok := obj.(StorageClassProvider); ok {
value := provider.StorageClass()
if value == "" {
return "", false
}
return value, true
}
if unwrap, ok := obj.(ObjUnwrap); ok {
return GetStorageClass(unwrap.Unwrap())
}
return "", false
}
func GetRawObject(obj Obj) *Object {
switch v := obj.(type) {
case *ObjThumbURL:

View File

@@ -11,6 +11,11 @@ type ObjWrapName struct {
Obj
}
type ObjWrapStorageClass struct {
storageClass string
Obj
}
func (o *ObjWrapName) Unwrap() Obj {
return o.Obj
}
@@ -19,6 +24,20 @@ func (o *ObjWrapName) GetName() string {
return o.Name
}
func (o *ObjWrapStorageClass) Unwrap() Obj {
return o.Obj
}
func (o *ObjWrapStorageClass) StorageClass() string {
return o.storageClass
}
func (o *ObjWrapStorageClass) SetPath(path string) {
if setter, ok := o.Obj.(SetPath); ok {
setter.SetPath(path)
}
}
type Object struct {
ID string
Path string

View File

@@ -17,6 +17,7 @@ type Role struct {
ID uint `json:"id" gorm:"primaryKey"`
Name string `json:"name" gorm:"unique" binding:"required"`
Description string `json:"description"`
Default bool `json:"default" gorm:"default:false"`
// PermissionScopes stores structured permission list and is ignored by gorm.
PermissionScopes []PermissionEntry `json:"permission_scopes" gorm:"-"`
// RawPermission is the JSON representation of PermissionScopes stored in DB.

internal/model/session.go Normal file
View File

@@ -0,0 +1,16 @@
package model
// Session represents a device session of a user.
type Session struct {
UserID uint `json:"user_id" gorm:"index"`
DeviceKey string `json:"device_key" gorm:"primaryKey;size:64"`
UserAgent string `json:"user_agent" gorm:"size:255"`
IP string `json:"ip" gorm:"size:64"`
LastActive int64 `json:"last_active"`
Status int `json:"status"`
}
const (
SessionActive = iota
SessionInactive
)

View File

@@ -145,13 +145,28 @@ func (u *User) CheckPathLimit() bool {
}
func (u *User) JoinPath(reqPath string) (string, error) {
if reqPath == "/" {
return utils.FixAndCleanPath(u.BasePath), nil
}
path, err := utils.JoinBasePath(u.BasePath, reqPath)
if err != nil {
return "", err
}
if u.CheckPathLimit() && !utils.IsSubPath(u.BasePath, path) {
return "", errs.PermissionDenied
if path != "/" && u.CheckPathLimit() {
basePaths := GetAllBasePathsFromRoles(u)
match := false
for _, base := range basePaths {
if utils.IsSubPath(base, path) {
match = true
break
}
}
if !match {
return "", errs.PermissionDenied
}
}
return path, nil
}
@@ -193,3 +208,33 @@ func (u *User) WebAuthnCredentials() []webauthn.Credential {
func (u *User) WebAuthnIcon() string {
return "https://alistgo.com/logo.svg"
}
// FetchRole is used to load role details by id. It should be set by the op package
// to avoid an import cycle between model and op.
var FetchRole func(uint) (*Role, error)
// GetAllBasePathsFromRoles returns all permission paths from user's roles
func GetAllBasePathsFromRoles(u *User) []string {
basePaths := make([]string, 0)
seen := make(map[string]struct{})
for _, rid := range u.Role {
if FetchRole == nil {
continue
}
role, err := FetchRole(uint(rid))
if err != nil || role == nil {
continue
}
for _, entry := range role.PermissionScopes {
if entry.Path == "" {
continue
}
if _, ok := seen[entry.Path]; !ok {
basePaths = append(basePaths, entry.Path)
seen[entry.Path] = struct{}{}
}
}
}
return basePaths
}
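The multi-base-path check that the new `JoinPath` performs above — a request path is allowed if it falls under any role-derived base path — can be sketched as follows. `isSubPath` is an assumed stand-in for `utils.IsSubPath` with plain prefix semantics:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// isSubPath reports whether p equals base or lies beneath it.
func isSubPath(base, p string) bool {
	base, p = path.Clean(base), path.Clean(p)
	return base == "/" || p == base || strings.HasPrefix(p, base+"/")
}

// allowed mirrors the loop in JoinPath: any matching base path grants access.
func allowed(basePaths []string, p string) bool {
	for _, base := range basePaths {
		if isSubPath(base, p) {
			return true
		}
	}
	return false
}

func main() {
	bases := []string{"/team", "/public/docs"}
	fmt.Println(allowed(bases, "/team/a.txt"))   // true
	fmt.Println(allowed(bases, "/public/other")) // false
}
```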

View File

@@ -2,6 +2,7 @@ package op
import (
"regexp"
"strconv"
"strings"
"github.com/alist-org/alist/v3/internal/conf"
@@ -82,6 +83,18 @@ var settingItemHooks = map[string]SettingItemHook{
conf.SlicesMap[conf.IgnoreDirectLinkParams] = strings.Split(item.Value, ",")
return nil
},
conf.DefaultRole: func(item *model.SettingItem) error {
v := strings.TrimSpace(item.Value)
if v == "" {
return nil
}
id, err := strconv.Atoi(v)
if err != nil {
return errors.WithStack(err)
}
_, err = GetRole(uint(id))
return err
},
}
func RegisterSettingItemHook(key string, hook SettingItemHook) {

View File

@@ -23,6 +23,7 @@ type CreateLabelFileBinDingReq struct {
Type int `json:"type"`
HashInfoStr string `json:"hashinfo"`
LabelIds string `json:"label_ids"`
LabelIDs []uint64 `json:"labelIdList"`
}
type ObjLabelResp struct {
@@ -54,23 +55,29 @@ func GetLabelByFileName(userId uint, fileName string) ([]model.Label, error) {
return labels, nil
}
func GetLabelsByFileNamesPublic(fileNames []string) (map[string][]model.Label, error) {
return db.GetLabelsByFileNamesPublic(fileNames)
}
func CreateLabelFileBinDing(req CreateLabelFileBinDingReq, userId uint) error {
if err := db.DelLabelFileBinDingByFileName(userId, req.Name); err != nil {
return errors.WithMessage(err, "failed del label_file_bin_ding in database")
}
if req.LabelIds == "" {
ids, err := collectLabelIDs(req)
if err != nil {
return err
}
if len(ids) == 0 {
return nil
}
labelMap := strings.Split(req.LabelIds, ",")
for _, value := range labelMap {
labelId, err := strconv.ParseUint(value, 10, 64)
if err != nil {
return fmt.Errorf("invalid label ID '%s': %v", value, err)
}
if err = db.CreateLabelFileBinDing(req.Name, uint(labelId), userId); err != nil {
for _, id := range ids {
if err = db.CreateLabelFileBinDing(req.Name, uint(id), userId); err != nil {
return errors.WithMessage(err, "failed to create label binding in database")
}
}
if !db.GetFileByNameExists(req.Name) {
objFile := model.ObjFile{
Id: req.Id,
@@ -86,8 +93,7 @@ func CreateLabelFileBinDing(req CreateLabelFileBinDingReq, userId uint) error {
Type: req.Type,
HashInfoStr: req.HashInfoStr,
}
err := db.CreateObjFile(objFile)
if err != nil {
if err := db.CreateObjFile(objFile); err != nil {
return errors.WithMessage(err, "failed to create file in database")
}
}
@@ -97,7 +103,7 @@ func CreateLabelFileBinDing(req CreateLabelFileBinDingReq, userId uint) error {
func GetFileByLabel(userId uint, labelId string) (result []ObjLabelResp, err error) {
labelMap := strings.Split(labelId, ",")
var labelIds []uint
var labelsFile []model.LabelFileBinDing
var labelsFile []model.LabelFileBinding
var labels []model.Label
var labelsFileMap = make(map[string][]model.Label)
var labelsMap = make(map[uint]model.Label)
@@ -157,3 +163,33 @@ func StringSliceToUintSlice(strSlice []string) ([]uint, error) {
}
return uintSlice, nil
}
func RestoreLabelFileBindings(bindings []model.LabelFileBinding, keepIDs bool, override bool) error {
return db.RestoreLabelFileBindings(bindings, keepIDs, override)
}
func collectLabelIDs(req CreateLabelFileBinDingReq) ([]uint64, error) {
if len(req.LabelIDs) > 0 {
return req.LabelIDs, nil
}
s := strings.TrimSpace(req.LabelIds)
if s == "" {
return nil, nil
}
replacer := strings.NewReplacer("，", ",", "、", ",", "；", ",", ";", ",")
s = replacer.Replace(s)
parts := strings.Split(s, ",")
ids := make([]uint64, 0, len(parts))
for _, p := range parts {
p = strings.TrimSpace(p)
if p == "" {
continue
}
id, err := strconv.ParseUint(p, 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid label ID '%s': %v", p, err)
}
ids = append(ids, id)
}
return ids, nil
}
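The separator normalization in `collectLabelIDs` can be exercised in isolation. This sketch assumes the replacer maps the full-width separators "，", "、", and "；" to ASCII commas before parsing:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLabelIDs normalizes CJK list separators to commas, then parses
// each non-empty piece as an unsigned integer (mirrors collectLabelIDs).
func parseLabelIDs(s string) ([]uint64, error) {
	s = strings.TrimSpace(s)
	if s == "" {
		return nil, nil
	}
	replacer := strings.NewReplacer("，", ",", "、", ",", "；", ",", ";", ",")
	parts := strings.Split(replacer.Replace(s), ",")
	ids := make([]uint64, 0, len(parts))
	for _, p := range parts {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		id, err := strconv.ParseUint(p, 10, 64)
		if err != nil {
			return nil, fmt.Errorf("invalid label ID '%s': %v", p, err)
		}
		ids = append(ids, id)
	}
	return ids, nil
}

func main() {
	// Mixed separators and stray whitespace are tolerated.
	ids, _ := parseLabelIDs("1，2、 3; 4")
	fmt.Println(ids) // [1 2 3 4]
}
```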


@@ -2,9 +2,11 @@ package op
import (
"fmt"
"strconv"
"time"
"github.com/Xhofe/go-cache"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/db"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
@@ -15,6 +17,10 @@ import (
var roleCache = cache.NewMemCache[*model.Role](cache.WithShards[*model.Role](2))
var roleG singleflight.Group[*model.Role]
func init() {
model.FetchRole = GetRole
}
func GetRole(id uint) (*model.Role, error) {
key := fmt.Sprint(id)
if r, ok := roleCache.Get(key); ok {
@@ -46,6 +52,23 @@ func GetRoleByName(name string) (*model.Role, error) {
return r, err
}
func GetDefaultRoleID() int {
item, err := GetSettingItemByKey(conf.DefaultRole)
if err == nil && item != nil && item.Value != "" {
if id, err := strconv.Atoi(item.Value); err == nil && id != 0 {
return id
}
if r, err := db.GetRoleByName(item.Value); err == nil {
return int(r.ID)
}
}
var r model.Role
if err := db.GetDb().Where("`default` = ?", true).First(&r).Error; err == nil {
return int(r.ID)
}
return int(model.GUEST)
}
func GetRolesByUserID(userID uint) ([]model.Role, error) {
user, err := GetUserById(userID)
if err != nil {
@@ -88,7 +111,21 @@ func CreateRole(r *model.Role) error {
}
roleCache.Del(fmt.Sprint(r.ID))
roleCache.Del(r.Name)
return db.CreateRole(r)
if err := db.CreateRole(r); err != nil {
return err
}
if r.Default {
roleCache.Clear()
item, err := GetSettingItemByKey(conf.DefaultRole)
if err != nil {
return err
}
item.Value = strconv.Itoa(int(r.ID))
if err := SaveSettingItem(item); err != nil {
return err
}
}
return nil
}
func UpdateRole(r *model.Role) error {
@@ -96,15 +133,52 @@ func UpdateRole(r *model.Role) error {
if err != nil {
return err
}
if old.Name == "admin" || old.Name == "guest" {
switch old.Name {
case "admin":
return errs.ErrChangeDefaultRole
case "guest":
r.Name = "guest"
}
for i := range r.PermissionScopes {
r.PermissionScopes[i].Path = utils.FixAndCleanPath(r.PermissionScopes[i].Path)
}
//if len(old.PermissionScopes) > 0 && len(r.PermissionScopes) > 0 &&
// old.PermissionScopes[0].Path != r.PermissionScopes[0].Path {
//
// oldPath := old.PermissionScopes[0].Path
// newPath := r.PermissionScopes[0].Path
//
// users, err := db.GetUsersByRole(int(r.ID))
// if err != nil {
// return errors.WithMessage(err, "failed to get users by role")
// }
//
// modifiedUsernames, err := db.UpdateUserBasePathPrefix(oldPath, newPath, users)
// if err != nil {
// return errors.WithMessage(err, "failed to update user base path when role updated")
// }
//
// for _, name := range modifiedUsernames {
// userCache.Del(name)
// }
//}
roleCache.Del(fmt.Sprint(r.ID))
roleCache.Del(r.Name)
return db.UpdateRole(r)
if err := db.UpdateRole(r); err != nil {
return err
}
if r.Default {
roleCache.Clear()
item, err := GetSettingItemByKey(conf.DefaultRole)
if err != nil {
return err
}
item.Value = strconv.Itoa(int(r.ID))
if err := SaveSettingItem(item); err != nil {
return err
}
}
return nil
}
func DeleteRole(id uint) error {


@@ -41,11 +41,28 @@ func GetStorageByMountPath(mountPath string) (driver.Driver, error) {
return storageDriver, nil
}
func firstPathSegment(p string) string {
p = utils.FixAndCleanPath(p)
p = strings.TrimPrefix(p, "/")
if p == "" {
return ""
}
if i := strings.Index(p, "/"); i >= 0 {
return p[:i]
}
return p
}
// CreateStorage saves the storage to the database so it can get an id,
// then instantiates the corresponding driver and keeps it in memory
func CreateStorage(ctx context.Context, storage model.Storage) (uint, error) {
storage.Modified = time.Now()
storage.MountPath = utils.FixAndCleanPath(storage.MountPath)
//if storage.MountPath == "/" {
// return 0, errors.New("Mount path cannot be '/'")
//}
var err error
// check driver first
driverName := storage.Driver
@@ -205,6 +222,9 @@ func UpdateStorage(ctx context.Context, storage model.Storage) error {
}
storage.Modified = time.Now()
storage.MountPath = utils.FixAndCleanPath(storage.MountPath)
//if storage.MountPath == "/" {
// return errors.New("Mount path cannot be '/'")
//}
err = db.UpdateStorage(&storage)
if err != nil {
return errors.WithMessage(err, "failed update storage in database")
@@ -224,12 +244,20 @@ func UpdateStorage(ctx context.Context, storage model.Storage) error {
roleCache.Del(fmt.Sprint(id))
}
modifiedUsernames, err := db.UpdateUserBasePathPrefix(oldStorage.MountPath, storage.MountPath)
if err != nil {
return errors.WithMessage(err, "failed to update user base path")
}
for _, name := range modifiedUsernames {
userCache.Del(name)
//modifiedUsernames, err := db.UpdateUserBasePathPrefix(oldStorage.MountPath, storage.MountPath)
//if err != nil {
// return errors.WithMessage(err, "failed to update user base path")
//}
for _, id := range modifiedRoleIDs {
roleCache.Del(fmt.Sprint(id))
users, err := db.GetUsersByRole(int(id))
if err != nil {
return errors.WithMessage(err, "failed to get users by role")
}
for _, user := range users {
userCache.Del(user.Username)
}
}
}
if err != nil {
@@ -251,6 +279,34 @@ func DeleteStorageById(ctx context.Context, id uint) error {
if err != nil {
return errors.WithMessage(err, "failed get storage")
}
firstMount := firstPathSegment(storage.MountPath)
if firstMount != "" {
roles, err := db.GetAllRoles()
if err != nil {
return errors.WithMessage(err, "failed to load roles")
}
users, err := db.GetAllUsers()
if err != nil {
return errors.WithMessage(err, "failed to load users")
}
var usedBy []string
for _, r := range roles {
for _, entry := range r.PermissionScopes {
if firstPathSegment(entry.Path) == firstMount {
usedBy = append(usedBy, "role:"+r.Name)
break
}
}
}
for _, u := range users {
if firstPathSegment(u.BasePath) == firstMount {
usedBy = append(usedBy, "user:"+u.Username)
}
}
if len(usedBy) > 0 {
return errors.Errorf("storage is in use by %s, please remove these references first", strings.Join(usedBy, ", "))
}
}
if !storage.Disabled {
storageDriver, err := GetStorageByMountPath(storage.MountPath)
if err != nil {

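The in-use check above keys on `firstPathSegment`. A standalone sketch, substituting `path.Clean` for `utils.FixAndCleanPath` (an assumption about that helper's behavior):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// firstPathSegment returns the first component of a cleaned path,
// e.g. "/a/b/c" -> "a", and "" for the root path.
// path.Clean("/"+p) stands in for utils.FixAndCleanPath here.
func firstPathSegment(p string) string {
	p = path.Clean("/" + strings.TrimPrefix(p, "/"))
	p = strings.TrimPrefix(p, "/")
	if p == "" {
		return ""
	}
	if i := strings.Index(p, "/"); i >= 0 {
		return p[:i]
	}
	return p
}

func main() {
	fmt.Println(firstPathSegment("/data/photos/2024")) // data
	fmt.Println(firstPathSegment("/") == "")           // true
}
```

Comparing first segments (rather than full prefixes) means a role scoped to `/data/photos` blocks deletion of any storage mounted under `/data`.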

@@ -50,6 +50,10 @@ func GetUserByRole(role int) (*model.User, error) {
return db.GetUserByRole(role)
}
func GetUsersByRole(role int) ([]model.User, error) {
return db.GetUsersByRole(role)
}
func GetUserByName(username string) (*model.User, error) {
if username == "" {
return nil, errs.EmptyUsername
@@ -78,7 +82,25 @@ func GetUsers(pageIndex, pageSize int) (users []model.User, count int64, err err
func CreateUser(u *model.User) error {
u.BasePath = utils.FixAndCleanPath(u.BasePath)
return db.CreateUser(u)
err := db.CreateUser(u)
if err != nil {
return err
}
roles, err := GetRolesByUserID(u.ID)
if err == nil {
for _, role := range roles {
if len(role.PermissionScopes) > 0 {
u.BasePath = utils.FixAndCleanPath(role.PermissionScopes[0].Path)
break
}
}
_ = db.UpdateUser(u)
userCache.Del(u.Username)
}
return nil
}
func DeleteUserById(id uint) error {
@@ -106,6 +128,17 @@ func UpdateUser(u *model.User) error {
}
userCache.Del(old.Username)
u.BasePath = utils.FixAndCleanPath(u.BasePath)
//if len(u.Role) > 0 {
// roles, err := GetRolesByUserID(u.ID)
// if err == nil {
// for _, role := range roles {
// if len(role.PermissionScopes) > 0 {
// u.BasePath = utils.FixAndCleanPath(role.PermissionScopes[0].Path)
// break
// }
// }
// }
//}
return db.UpdateUser(u)
}
@@ -136,3 +169,11 @@ func DelUserCache(username string) error {
userCache.Del(username)
return nil
}
func CountEnabledAdminsExcluding(userID uint) (int64, error) {
adminRole, err := GetRoleByName("admin")
if err != nil {
return 0, err
}
return db.CountUsersByRoleAndEnabledExclude(adminRole.ID, userID)
}


@@ -0,0 +1,8 @@
package session
import "github.com/alist-org/alist/v3/internal/db"
// MarkInactive marks the session with the given ID as inactive.
func MarkInactive(sessionID string) error {
return db.MarkInactive(sessionID)
}

pkg/utils/mask.go (new file)

@@ -0,0 +1,30 @@
package utils
import "strings"
// MaskIP anonymizes middle segments of an IP address.
func MaskIP(ip string) string {
if ip == "" {
return ""
}
if strings.Contains(ip, ":") {
parts := strings.Split(ip, ":")
if len(parts) > 2 {
for i := 1; i < len(parts)-1; i++ {
if parts[i] != "" {
parts[i] = "*"
}
}
return strings.Join(parts, ":")
}
return ip
}
parts := strings.Split(ip, ".")
if len(parts) == 4 {
for i := 1; i < len(parts)-1; i++ {
parts[i] = "*"
}
return strings.Join(parts, ".")
}
return ip
}
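`MaskIP` depends only on `strings`, so its behavior is easy to demo. A runnable copy (renamed `maskIP` to keep this sketch self-contained):

```go
package main

import (
	"fmt"
	"strings"
)

// maskIP mirrors utils.MaskIP: keep the first and last segment of an
// IPv4/IPv6 address and replace the middle segments with "*".
func maskIP(ip string) string {
	if ip == "" {
		return ""
	}
	if strings.Contains(ip, ":") {
		parts := strings.Split(ip, ":")
		if len(parts) > 2 {
			for i := 1; i < len(parts)-1; i++ {
				if parts[i] != "" {
					parts[i] = "*"
				}
			}
			return strings.Join(parts, ":")
		}
		return ip
	}
	parts := strings.Split(ip, ".")
	if len(parts) == 4 {
		for i := 1; i < len(parts)-1; i++ {
			parts[i] = "*"
		}
		return strings.Join(parts, ".")
	}
	return ip
}

func main() {
	fmt.Println(maskIP("192.168.1.10"))     // 192.*.*.10
	fmt.Println(maskIP("2001:db8:85a3::1")) // 2001:*:*::1 ("::" is preserved)
}
```

Anything that is not a dotted quad or a multi-segment IPv6 address (e.g. a hostname) is returned unchanged.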


@@ -88,6 +88,13 @@ func JoinBasePath(basePath, reqPath string) (string, error) {
strings.Contains(reqPath, "/../") {
return "", errs.RelativePath
}
reqPath = FixAndCleanPath(reqPath)
if strings.HasPrefix(reqPath, "/") {
return reqPath, nil
}
return stdpath.Join(FixAndCleanPath(basePath), FixAndCleanPath(reqPath)), nil
}


@@ -43,17 +43,23 @@ func MergeRolePermissions(u *model.User, reqPath string) int32 {
if err != nil {
continue
}
for _, entry := range role.PermissionScopes {
if utils.IsSubPath(entry.Path, reqPath) {
if reqPath == "/" || utils.PathEqual(reqPath, u.BasePath) {
for _, entry := range role.PermissionScopes {
perm |= entry.Permission
}
} else {
for _, entry := range role.PermissionScopes {
if utils.IsSubPath(entry.Path, reqPath) {
perm |= entry.Permission
}
}
}
}
return perm
}
func CanAccessWithRoles(u *model.User, meta *model.Meta, reqPath, password string) bool {
if !canReadPathByRole(u, reqPath) {
if !CanReadPathByRole(u, reqPath) {
return false
}
perm := MergeRolePermissions(u, reqPath)
@@ -78,7 +84,30 @@ func CanAccessWithRoles(u *model.User, meta *model.Meta, reqPath, password strin
return meta.Password == password
}
func canReadPathByRole(u *model.User, reqPath string) bool {
func CanReadPathByRole(u *model.User, reqPath string) bool {
if u == nil {
return false
}
if reqPath == "/" || utils.PathEqual(reqPath, u.BasePath) {
return len(u.Role) > 0
}
for _, rid := range u.Role {
role, err := op.GetRole(uint(rid))
if err != nil {
continue
}
for _, entry := range role.PermissionScopes {
if utils.PathEqual(entry.Path, reqPath) || utils.IsSubPath(entry.Path, reqPath) || utils.IsSubPath(reqPath, entry.Path) {
return true
}
}
}
return false
}
// HasChildPermission checks whether any child path under reqPath grants the
// specified permission bit.
func HasChildPermission(u *model.User, reqPath string, bit uint) bool {
if u == nil {
return false
}
@@ -88,7 +117,7 @@ func canReadPathByRole(u *model.User, reqPath string) bool {
continue
}
for _, entry := range role.PermissionScopes {
if utils.IsSubPath(entry.Path, reqPath) {
if utils.IsSubPath(reqPath, entry.Path) && HasPermission(entry.Permission, bit) {
return true
}
}
@@ -102,7 +131,7 @@ func canReadPathByRole(u *model.User, reqPath string) bool {
func CheckPathLimitWithRoles(u *model.User, reqPath string) bool {
perm := MergeRolePermissions(u, reqPath)
if HasPermission(perm, PermPathLimit) {
return canReadPathByRole(u, reqPath)
return CanReadPathByRole(u, reqPath)
}
return true
}
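The permission merge above is a bitwise OR over every scope that covers the request path. A reduced sketch for a single role (the scope data, bit values, and simplified `isSubPath` are illustrative, not the real `utils.IsSubPath`):

```go
package main

import (
	"fmt"
	"strings"
)

type PermissionEntry struct {
	Path       string
	Permission int32
}

// isSubPath reports whether sub equals parent or lies under it
// (a simplified stand-in for utils.IsSubPath).
func isSubPath(parent, sub string) bool {
	return sub == parent || strings.HasPrefix(sub, strings.TrimSuffix(parent, "/")+"/")
}

// mergePermissions ORs the permission bits of every scope whose path
// covers reqPath, mirroring MergeRolePermissions for one role.
func mergePermissions(scopes []PermissionEntry, reqPath string) int32 {
	var perm int32
	for _, e := range scopes {
		if isSubPath(e.Path, reqPath) {
			perm |= e.Permission
		}
	}
	return perm
}

func main() {
	scopes := []PermissionEntry{
		{Path: "/docs", Permission: 0b0001},     // e.g. read
		{Path: "/docs/pub", Permission: 0b0010}, // e.g. write
		{Path: "/media", Permission: 0b0100},
	}
	// Both /docs and /docs/pub scopes apply, so the bits combine.
	fmt.Printf("%04b\n", mergePermissions(scopes, "/docs/pub/a.txt")) // 0011
}
```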


@@ -44,17 +44,19 @@ type ArchiveContentResp struct {
}
func toObjsRespWithoutSignAndThumb(obj model.Obj) ObjResp {
storageClass, _ := model.GetStorageClass(obj)
return ObjResp{
Name: obj.GetName(),
Size: obj.GetSize(),
IsDir: obj.IsDir(),
Modified: obj.ModTime(),
Created: obj.CreateTime(),
HashInfoStr: obj.GetHash().String(),
HashInfo: obj.GetHash().Export(),
Sign: "",
Thumb: "",
Type: utils.GetObjType(obj.GetName(), obj.IsDir()),
Name: obj.GetName(),
Size: obj.GetSize(),
IsDir: obj.IsDir(),
Modified: obj.ModTime(),
Created: obj.CreateTime(),
HashInfoStr: obj.GetHash().String(),
HashInfo: obj.GetHash().Export(),
Sign: "",
Thumb: "",
Type: utils.GetObjType(obj.GetName(), obj.IsDir()),
StorageClass: storageClass,
}
}


@@ -3,14 +3,22 @@ package handles
import (
"bytes"
"encoding/base64"
"errors"
"fmt"
"image/png"
"path"
"strings"
"time"
"github.com/Xhofe/go-cache"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/device"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/session"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
"github.com/gin-gonic/gin"
"github.com/pquerna/otp/totp"
@@ -79,16 +87,62 @@ func loginHash(c *gin.Context, req *LoginReq) {
return
}
}
clientID := c.GetHeader("Client-Id")
if clientID == "" {
clientID = c.Query("client_id")
}
key := utils.GetMD5EncodeStr(fmt.Sprintf("%d-%s", user.ID, clientID))
if err := device.EnsureActiveOnLogin(user.ID, key, c.Request.UserAgent(), c.ClientIP()); err != nil {
if errors.Is(err, errs.TooManyDevices) {
common.ErrorResp(c, err, 403)
} else {
common.ErrorResp(c, err, 400, true)
}
return
}
// generate token
token, err := common.GenerateToken(user)
if err != nil {
common.ErrorResp(c, err, 400, true)
return
}
common.SuccessResp(c, gin.H{"token": token})
common.SuccessResp(c, gin.H{"token": token, "device_key": key})
loginCache.Del(ip)
}
type RegisterReq struct {
Username string `json:"username" binding:"required"`
Password string `json:"password" binding:"required"`
}
// Register a new user
func Register(c *gin.Context) {
if !setting.GetBool(conf.AllowRegister) {
common.ErrorStrResp(c, "registration is disabled", 403)
return
}
var req RegisterReq
if err := c.ShouldBind(&req); err != nil {
common.ErrorResp(c, err, 400)
return
}
user := &model.User{
Username: req.Username,
Role: model.Roles{op.GetDefaultRoleID()},
Authn: "[]",
}
user.SetPassword(req.Password)
if err := op.CreateUser(user); err != nil {
common.ErrorResp(c, err, 500, true)
return
}
common.SuccessResp(c)
}
type UserResp struct {
model.User
Otp bool `json:"otp"`
@@ -111,25 +165,25 @@ func CurrentUser(c *gin.Context) {
var roleNames []string
permMap := map[string]int32{}
addedPaths := map[string]bool{}
paths := make([]string, 0)
for _, role := range user.RolesDetail {
roleNames = append(roleNames, role.Name)
for _, entry := range role.PermissionScopes {
cleanPath := path.Clean("/" + strings.TrimPrefix(entry.Path, "/"))
if _, ok := permMap[cleanPath]; !ok {
paths = append(paths, cleanPath)
}
permMap[cleanPath] |= entry.Permission
}
}
userResp.RoleNames = roleNames
for fullPath, perm := range permMap {
if !addedPaths[fullPath] {
userResp.Permissions = append(userResp.Permissions, model.PermissionEntry{
Path: fullPath,
Permission: perm,
})
addedPaths[fullPath] = true
}
for _, fullPath := range paths {
userResp.Permissions = append(userResp.Permissions, model.PermissionEntry{
Path: fullPath,
Permission: permMap[fullPath],
})
}
common.SuccessResp(c, userResp)
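The aggregation in `CurrentUser` above ORs permissions per cleaned path while preserving first-seen order. In isolation (the `entry` type is a stand-in for `model.PermissionEntry`):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

type entry struct {
	Path       string
	Permission int32
}

// mergeOrdered collapses duplicate paths, OR-ing their permission bits,
// and preserves the order in which paths were first seen.
func mergeOrdered(entries []entry) []entry {
	permMap := map[string]int32{}
	paths := make([]string, 0)
	for _, e := range entries {
		clean := path.Clean("/" + strings.TrimPrefix(e.Path, "/"))
		if _, ok := permMap[clean]; !ok {
			paths = append(paths, clean)
		}
		permMap[clean] |= e.Permission
	}
	out := make([]entry, 0, len(paths))
	for _, p := range paths {
		out = append(out, entry{Path: p, Permission: permMap[p]})
	}
	return out
}

func main() {
	got := mergeOrdered([]entry{
		{"/docs", 1}, {"media", 4}, {"/docs/", 2}, // "/docs/" cleans to "/docs"
	})
	fmt.Println(got) // [{/docs 3} {/media 4}]
}
```

Iterating the ordered `paths` slice instead of ranging over the map keeps the response deterministic, since Go map iteration order is randomized.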
@@ -216,6 +270,13 @@ func Verify2FA(c *gin.Context) {
}
func LogOut(c *gin.Context) {
if keyVal, ok := c.Get("device_key"); ok {
if err := session.MarkInactive(keyVal.(string)); err != nil {
common.ErrorResp(c, err, 500)
return
}
c.Set("session_inactive", true)
}
err := common.InvalidateToken(c.GetHeader("Authorization"))
if err != nil {
common.ErrorResp(c, err, 500)


@@ -33,18 +33,19 @@ type DirReq struct {
}
type ObjResp struct {
Id string `json:"id"`
Path string `json:"path"`
Name string `json:"name"`
Size int64 `json:"size"`
IsDir bool `json:"is_dir"`
Modified time.Time `json:"modified"`
Created time.Time `json:"created"`
Sign string `json:"sign"`
Thumb string `json:"thumb"`
Type int `json:"type"`
HashInfoStr string `json:"hashinfo"`
HashInfo map[*utils.HashType]string `json:"hash_info"`
Id string `json:"id"`
Path string `json:"path"`
Name string `json:"name"`
Size int64 `json:"size"`
IsDir bool `json:"is_dir"`
Modified time.Time `json:"modified"`
Created time.Time `json:"created"`
Sign string `json:"sign"`
Thumb string `json:"thumb"`
Type int `json:"type"`
HashInfoStr string `json:"hashinfo"`
HashInfo map[*utils.HashType]string `json:"hash_info"`
StorageClass string `json:"storage_class,omitempty"`
}
type FsListResp struct {
@@ -57,19 +58,20 @@ type FsListResp struct {
}
type ObjLabelResp struct {
Id string `json:"id"`
Path string `json:"path"`
Name string `json:"name"`
Size int64 `json:"size"`
IsDir bool `json:"is_dir"`
Modified time.Time `json:"modified"`
Created time.Time `json:"created"`
Sign string `json:"sign"`
Thumb string `json:"thumb"`
Type int `json:"type"`
HashInfoStr string `json:"hashinfo"`
HashInfo map[*utils.HashType]string `json:"hash_info"`
LabelList []model.Label `json:"label_list"`
Id string `json:"id"`
Path string `json:"path"`
Name string `json:"name"`
Size int64 `json:"size"`
IsDir bool `json:"is_dir"`
Modified time.Time `json:"modified"`
Created time.Time `json:"created"`
Sign string `json:"sign"`
Thumb string `json:"thumb"`
Type int `json:"type"`
HashInfoStr string `json:"hashinfo"`
HashInfo map[*utils.HashType]string `json:"hash_info"`
LabelList []model.Label `json:"label_list"`
StorageClass string `json:"storage_class,omitempty"`
}
func FsList(c *gin.Context) {
@@ -107,14 +109,21 @@ func FsList(c *gin.Context) {
common.ErrorResp(c, err, 500)
return
}
total, objs := pagination(objs, &req.PageReq)
filtered := make([]model.Obj, 0, len(objs))
for _, obj := range objs {
childPath := stdpath.Join(reqPath, obj.GetName())
if common.CanReadPathByRole(user, childPath) {
filtered = append(filtered, obj)
}
}
total, objs := pagination(filtered, &req.PageReq)
provider := "unknown"
storage, err := fs.GetStorage(reqPath, &fs.GetStoragesArgs{})
if err == nil {
provider = storage.GetStorage().Driver
}
common.SuccessResp(c, FsListResp{
Content: toObjsResp(objs, reqPath, isEncrypt(meta, reqPath), user.ID),
Content: toObjsResp(objs, reqPath, isEncrypt(meta, reqPath)),
Total: int64(total),
Readme: getReadme(meta, reqPath),
Header: getHeader(meta, reqPath),
@@ -161,7 +170,14 @@ func FsDirs(c *gin.Context) {
common.ErrorResp(c, err, 500)
return
}
dirs := filterDirs(objs)
visible := make([]model.Obj, 0, len(objs))
for _, obj := range objs {
childPath := stdpath.Join(reqPath, obj.GetName())
if common.CanReadPathByRole(user, childPath) {
visible = append(visible, obj)
}
}
dirs := filterDirs(visible)
common.SuccessResp(c, dirs)
}
@@ -224,28 +240,40 @@ func pagination(objs []model.Obj, req *model.PageReq) (int, []model.Obj) {
return total, objs[start:end]
}
func toObjsResp(objs []model.Obj, parent string, encrypt bool, userId uint) []ObjLabelResp {
func toObjsResp(objs []model.Obj, parent string, encrypt bool) []ObjLabelResp {
var resp []ObjLabelResp
names := make([]string, 0, len(objs))
for _, obj := range objs {
if !obj.IsDir() {
names = append(names, obj.GetName())
}
}
labelsByName, _ := op.GetLabelsByFileNamesPublic(names)
for _, obj := range objs {
var labels []model.Label
if obj.IsDir() == false {
labels, _ = op.GetLabelByFileName(userId, obj.GetName())
if !obj.IsDir() {
labels = labelsByName[obj.GetName()]
}
thumb, _ := model.GetThumb(obj)
storageClass, _ := model.GetStorageClass(obj)
resp = append(resp, ObjLabelResp{
Id: obj.GetID(),
Path: obj.GetPath(),
Name: obj.GetName(),
Size: obj.GetSize(),
IsDir: obj.IsDir(),
Modified: obj.ModTime(),
Created: obj.CreateTime(),
HashInfoStr: obj.GetHash().String(),
HashInfo: obj.GetHash().Export(),
Sign: common.Sign(obj, parent, encrypt),
Thumb: thumb,
Type: utils.GetObjType(obj.GetName(), obj.IsDir()),
LabelList: labels,
Id: obj.GetID(),
Path: obj.GetPath(),
Name: obj.GetName(),
Size: obj.GetSize(),
IsDir: obj.IsDir(),
Modified: obj.ModTime(),
Created: obj.CreateTime(),
HashInfoStr: obj.GetHash().String(),
HashInfo: obj.GetHash().Export(),
Sign: common.Sign(obj, parent, encrypt),
Thumb: thumb,
Type: utils.GetObjType(obj.GetName(), obj.IsDir()),
LabelList: labels,
StorageClass: storageClass,
})
}
return resp
@@ -350,26 +378,28 @@ func FsGet(c *gin.Context) {
}
parentMeta, _ := op.GetNearestMeta(parentPath)
thumb, _ := model.GetThumb(obj)
storageClass, _ := model.GetStorageClass(obj)
common.SuccessResp(c, FsGetResp{
ObjResp: ObjResp{
Id: obj.GetID(),
Path: obj.GetPath(),
Name: obj.GetName(),
Size: obj.GetSize(),
IsDir: obj.IsDir(),
Modified: obj.ModTime(),
Created: obj.CreateTime(),
HashInfoStr: obj.GetHash().String(),
HashInfo: obj.GetHash().Export(),
Sign: common.Sign(obj, parentPath, isEncrypt(meta, reqPath)),
Type: utils.GetFileType(obj.GetName()),
Thumb: thumb,
Id: obj.GetID(),
Path: obj.GetPath(),
Name: obj.GetName(),
Size: obj.GetSize(),
IsDir: obj.IsDir(),
Modified: obj.ModTime(),
Created: obj.CreateTime(),
HashInfoStr: obj.GetHash().String(),
HashInfo: obj.GetHash().Export(),
Sign: common.Sign(obj, parentPath, isEncrypt(meta, reqPath)),
Type: utils.GetFileType(obj.GetName()),
Thumb: thumb,
StorageClass: storageClass,
},
RawURL: rawURL,
Readme: getReadme(meta, reqPath),
Header: getHeader(meta, reqPath),
Provider: provider,
Related: toObjsResp(related, parentPath, isEncrypt(parentMeta, parentPath), user.ID),
Related: toObjsResp(related, parentPath, isEncrypt(parentMeta, parentPath)),
})
}


@@ -8,7 +8,9 @@ import (
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/server/common"
"github.com/gin-gonic/gin"
"net/url"
"strconv"
"strings"
)
type DelLabelFileBinDingReq struct {
@@ -16,18 +18,36 @@ type DelLabelFileBinDingReq struct {
LabelId string `json:"label_id"`
}
type pageResp[T any] struct {
Content []T `json:"content"`
Total int64 `json:"total"`
}
type restoreLabelBindingsReq struct {
KeepIDs bool `json:"keep_ids"`
Override bool `json:"override"`
Bindings []model.LabelFileBinding `json:"bindings"`
}
func GetLabelByFileName(c *gin.Context) {
fileName := c.Query("file_name")
if fileName == "" {
common.ErrorResp(c, errors.New("file_name must not be empty"), 400)
return
}
decodedFileName, err := url.QueryUnescape(fileName)
if err != nil {
common.ErrorResp(c, errors.New("invalid file_name"), 400)
return
}
userObj, ok := c.Value("user").(*model.User)
if !ok {
common.ErrorStrResp(c, "user invalid", 401)
return
}
labels, err := op.GetLabelByFileName(userObj.ID, fileName)
labels, err := op.GetLabelByFileName(userObj.ID, decodedFileName)
if err != nil {
common.ErrorResp(c, err, 500, true)
return
@@ -101,3 +121,130 @@ func GetFileByLabel(c *gin.Context) {
}
common.SuccessResp(c, fileList)
}
func ListLabelFileBinding(c *gin.Context) {
userObj, ok := c.Value("user").(*model.User)
if !ok {
common.ErrorStrResp(c, "user invalid", 401)
return
}
pageStr := c.DefaultQuery("page", "1")
sizeStr := c.DefaultQuery("page_size", "50")
page, err := strconv.Atoi(pageStr)
if err != nil || page <= 0 {
page = 1
}
pageSize, err := strconv.Atoi(sizeStr)
if err != nil || pageSize <= 0 || pageSize > 200 {
pageSize = 50
}
fileName := c.Query("file_name")
labelIDStr := c.Query("label_id")
var labelIDs []uint
if labelIDStr != "" {
parts := strings.Split(labelIDStr, ",")
for _, p := range parts {
if p == "" {
continue
}
id64, err := strconv.ParseUint(strings.TrimSpace(p), 10, 64)
if err != nil {
common.ErrorResp(c, fmt.Errorf("invalid label_id '%s': %v", p, err), 400)
return
}
labelIDs = append(labelIDs, uint(id64))
}
}
list, total, err := db.ListLabelFileBinDing(userObj.ID, labelIDs, fileName, page, pageSize)
if err != nil {
common.ErrorResp(c, err, 500, true)
return
}
common.SuccessResp(c, pageResp[model.LabelFileBinding]{
Content: list,
Total: total,
})
}
func RestoreLabelFileBinding(c *gin.Context) {
var req restoreLabelBindingsReq
if err := c.ShouldBindJSON(&req); err != nil {
common.ErrorResp(c, err, 400)
return
}
if len(req.Bindings) == 0 {
common.ErrorStrResp(c, "empty bindings", 400)
return
}
if u, ok := c.Value("user").(*model.User); ok {
for i := range req.Bindings {
if req.Bindings[i].UserId == 0 {
req.Bindings[i].UserId = u.ID
}
}
}
for i := range req.Bindings {
b := req.Bindings[i]
if b.UserId == 0 || b.LabelId == 0 || strings.TrimSpace(b.FileName) == "" {
common.ErrorStrResp(c, "invalid binding: user_id/label_id/file_name required", 400)
return
}
}
if err := op.RestoreLabelFileBindings(req.Bindings, req.KeepIDs, req.Override); err != nil {
common.ErrorResp(c, err, 500, true)
return
}
common.SuccessResp(c, gin.H{
"msg": fmt.Sprintf("restored %d rows", len(req.Bindings)),
})
}
func CreateLabelFileBinDingBatch(c *gin.Context) {
var req struct {
Items []op.CreateLabelFileBinDingReq `json:"items" binding:"required"`
}
if err := c.ShouldBindJSON(&req); err != nil || len(req.Items) == 0 {
common.ErrorResp(c, err, 400)
return
}
userObj, ok := c.Value("user").(*model.User)
if !ok {
common.ErrorStrResp(c, "user invalid", 401)
return
}
type perResult struct {
Name string `json:"name"`
Ok bool `json:"ok"`
ErrMsg string `json:"errMsg,omitempty"`
}
results := make([]perResult, 0, len(req.Items))
succeed := 0
for _, item := range req.Items {
if item.IsDir {
results = append(results, perResult{Name: item.Name, Ok: false, ErrMsg: "Unable to bind folder"})
continue
}
if err := op.CreateLabelFileBinDing(item, userObj.ID); err != nil {
results = append(results, perResult{Name: item.Name, Ok: false, ErrMsg: err.Error()})
continue
}
succeed++
results = append(results, perResult{Name: item.Name, Ok: true})
}
common.SuccessResp(c, gin.H{
"total": len(req.Items),
"succeed": succeed,
"failed": len(req.Items) - succeed,
"results": results,
})
}


@@ -44,7 +44,7 @@ func GetRole(c *gin.Context) {
func CreateRole(c *gin.Context) {
var req model.Role
if err := c.ShouldBind(&req); err != nil {
if err := c.ShouldBindJSON(&req); err != nil {
common.ErrorResp(c, err, 400)
return
}
@@ -56,8 +56,14 @@ func CreateRole(c *gin.Context) {
}
func UpdateRole(c *gin.Context) {
var req model.Role
if err := c.ShouldBind(&req); err != nil {
var req struct {
ID uint `json:"id"`
Name string `json:"name" binding:"required"`
Description string `json:"description"`
PermissionScopes []model.PermissionEntry `json:"permission_scopes"`
Default *bool `json:"default"`
}
if err := c.ShouldBindJSON(&req); err != nil {
common.ErrorResp(c, err, 400)
return
}
@@ -66,11 +72,21 @@ func UpdateRole(c *gin.Context) {
common.ErrorResp(c, err, 500, true)
return
}
if role.Name == "admin" || role.Name == "guest" {
switch role.Name {
case "admin":
common.ErrorResp(c, errs.ErrChangeDefaultRole, 403)
return
case "guest":
req.Name = "guest"
}
if err := op.UpdateRole(&req); err != nil {
role.Name = req.Name
role.Description = req.Description
role.PermissionScopes = req.PermissionScopes
if req.Default != nil {
role.Default = *req.Default
}
if err := op.UpdateRole(role); err != nil {
common.ErrorResp(c, err, 500, true)
} else {
common.SuccessResp(c)


@@ -43,28 +43,39 @@ func Search(c *gin.Context) {
common.ErrorResp(c, err, 400)
return
}
nodes, total, err := search.Search(c, req.SearchReq)
if err != nil {
common.ErrorResp(c, err, 500)
return
}
var filteredNodes []model.SearchNode
for _, node := range nodes {
if !strings.HasPrefix(node.Parent, user.BasePath) {
continue
var (
filteredNodes []model.SearchNode
)
for len(filteredNodes) < req.PerPage {
nodes, _, err := search.Search(c, req.SearchReq)
if err != nil {
common.ErrorResp(c, err, 500)
return
}
meta, err := op.GetNearestMeta(node.Parent)
if err != nil && !errors.Is(errors.Cause(err), errs.MetaNotFound) {
continue
if len(nodes) == 0 {
break
}
if !common.CanAccessWithRoles(user, meta, path.Join(node.Parent, node.Name), req.Password) {
continue
for _, node := range nodes {
if !strings.HasPrefix(node.Parent, user.BasePath) {
continue
}
meta, err := op.GetNearestMeta(node.Parent)
if err != nil && !errors.Is(errors.Cause(err), errs.MetaNotFound) {
continue
}
if !common.CanAccessWithRoles(user, meta, path.Join(node.Parent, node.Name), req.Password) {
continue
}
filteredNodes = append(filteredNodes, node)
if len(filteredNodes) >= req.PerPage {
break
}
}
filteredNodes = append(filteredNodes, node)
req.Page++
}
common.SuccessResp(c, common.PageResp{
Content: utils.MustSliceConvert(filteredNodes, nodeToSearchResp),
Total: total,
Total: int64(len(filteredNodes)),
})
}
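The rewritten `Search` keeps fetching backend pages until a full filtered page is collected or the source runs dry. The fetch/filter skeleton, with the search backend stubbed over a fixed corpus (the stub data and the `visible` predicate are hypothetical):

```go
package main

import "fmt"

// fetchPage stands in for search.Search: it returns up to perPage items
// for a 1-based page over a fixed corpus.
func fetchPage(corpus []int, page, perPage int) []int {
	start := (page - 1) * perPage
	if start >= len(corpus) {
		return nil
	}
	end := start + perPage
	if end > len(corpus) {
		end = len(corpus)
	}
	return corpus[start:end]
}

// collectVisible mirrors the handler's loop: pull pages and keep only
// items passing the visibility predicate until perPage results are
// collected or the source is exhausted.
func collectVisible(corpus []int, perPage int, visible func(int) bool) []int {
	var filtered []int
	page := 1
	for len(filtered) < perPage {
		nodes := fetchPage(corpus, page, perPage)
		if len(nodes) == 0 {
			break
		}
		for _, n := range nodes {
			if !visible(n) {
				continue
			}
			filtered = append(filtered, n)
			if len(filtered) >= perPage {
				break
			}
		}
		page++
	}
	return filtered
}

func main() {
	corpus := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	even := func(n int) bool { return n%2 == 0 }
	fmt.Println(collectVisible(corpus, 3, even)) // [2 4 6]
}
```

Note the trade-off visible in the diff: `Total` is now reported as the length of the filtered slice for the collected pages, not the backend's overall match count.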

Some files were not shown because too many files have changed in this diff.