Compare commits

...

211 Commits

Author SHA1 Message Date
千石
b4d9beb49c fix(Mediatrack): Add support for X-Device-Fingerprint header (#9354)
Introduce a `DeviceFingerprint` field to the request metadata.
This field is used to conditionally set the `X-Device-Fingerprint`
HTTP header in outgoing requests if its value is not empty.
2025-10-24 00:31:15 +08:00
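A minimal sketch of the conditional header logic described above (the request-metadata shape and function name are assumptions, not the actual Mediatrack code):

```go
package sketch

import "net/http"

// ReqMeta stands in for the driver's request metadata carrying the new field.
type ReqMeta struct {
	DeviceFingerprint string
}

// applyFingerprint sets X-Device-Fingerprint only when the value is non-empty,
// matching the behavior the commit describes.
func applyFingerprint(req *http.Request, meta ReqMeta) {
	if meta.DeviceFingerprint != "" {
		req.Header.Set("X-Device-Fingerprint", meta.DeviceFingerprint)
	}
}
```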
千石
4c8401855c feat: Add new driver bitqiu support (#9355)
* feat(bitqiu): Add Bitqiu cloud drive support

- Implement the new Bitqiu cloud drive.
- Add core driver logic, metadata handling, and utility functions.
- Register the Bitqiu driver for use.

* feat(driver): Implement GetLink, CreateDir, and Move operations

- Implement `GetLink` method to retrieve download links for files.
- Implement `CreateDir` method to create new directories.
- Implement `Move` method to relocate files and directories.
- Add new API endpoints and data structures for download and directory creation responses.
- Integrate retry logic with re-authentication for API calls in implemented methods.
- Update HTTP request headers to include `x-requested-with`.

* feat(bitqiu): Add rename, copy, and delete operations

- Implement `Rename` operation with retry logic and API calls.
- Implement `Copy` operation, including asynchronous handling, polling for completion, and status checks.
- Implement `Remove` operation with retry logic and API calls.
- Add new API endpoint URLs for rename, copy, and delete, and a new copy success code.
- Introduce `AsyncManagerData`, `AsyncTask`, and `AsyncTaskInfo` types to support async copy status monitoring.
- Add utility functions `updateObjectName` and `parentPathOf` for object manipulation.
- Integrate login retry mechanism for all file operations.

* feat(bitqiu-upload): Implement chunked file upload support

- Implement multi-part chunked upload logic for the BitQiu service.
- Introduce `UploadInitData` and `ChunkUploadResponse` structs for structured API communication.
- Refactor the `Save` method to orchestrate initial upload, chunked data transfer, and finalization.
- Add `uploadFileInChunks` function to handle sequential uploading of file parts.
- Add `completeChunkUpload` function to finalize the chunked upload process on the server.
- Ensure proper temporary file cleanup using `defer tmpFile.Close()`.

* feat(driver): Implement automatic root folder ID retrieval

- Add `userInfoURL` constant for fetching user information.
- Implement `ensureRootFolderID` function to retrieve and set the driver's root folder ID if not already present.
- Integrate `ensureRootFolderID` into the driver's `Init` process.
- Define `UserInfoData` struct to parse the `rootDirId` from user information responses.

* feat(client): Implement configurable user agent

- Introduce a configurable `UserAgent` field in the client's settings.
- Add a `userAgent()` method to retrieve the user agent, prioritizing the custom setting or using a predefined default.
- Apply the determined user agent to all outbound HTTP requests made by the `BitQiu` client.
2025-10-24 00:29:33 +08:00
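The "retry logic with re-authentication" mentioned for the BitQiu file operations can be pictured as a small wrapper. A hedged sketch with assumed error and callback shapes, not the driver's actual code:

```go
package sketch

import (
	"errors"
	"fmt"
)

// errUnauthorized stands in for whatever error the BitQiu API surfaces when a
// session has expired; the real sentinel used by the driver is an assumption.
var errUnauthorized = errors.New("unauthorized")

// withLoginRetry runs an API call once and, if it fails with an auth error,
// re-authenticates and retries a single time.
func withLoginRetry(login, call func() error) error {
	err := call()
	if !errors.Is(err, errUnauthorized) {
		return err
	}
	if lerr := login(); lerr != nil {
		return fmt.Errorf("re-login failed: %w", lerr)
	}
	return call()
}
```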
千石
e2016dd031 refactor(webdav): Use ResolvePath instead of JoinPath (#9344)
- Changed the concatenation of `reqPath` with `src` and `dst` to use `ResolvePath`
- Updated the implementation of path handling in multiple functions
- Improved the consistency and reliability of path resolution
2025-10-16 17:23:11 +08:00
千石
a6bd90a9b2 feat(driver/s3): Add OSS Archive Support (#9350)
* feat(s3): Add support for S3 object storage classes

Introduces a new 'storage_class' configuration option for S3 providers. Users can now specify the desired storage class (e.g., Standard, GLACIER, DEEP_ARCHIVE) for objects uploaded to S3-compatible services like AWS S3 and Tencent COS.

The input storage class string is normalized to match AWS SDK constants, supporting various common aliases. If an unknown storage class is provided, it will be used as a raw value with a warning. This enhancement provides greater control over storage costs and data access patterns.

* feat(storage): Support for displaying file storage classes

Adds storage class information to file metadata and API responses.

This change introduces the ability to store file storage classes in file metadata and display them in API responses. This allows users to view a file's storage tier (e.g., S3 Standard, Glacier), enhancing data management capabilities.

Implementation details include:
- Introducing the StorageClassProvider interface and the ObjWrapStorageClass structure to uniformly handle and communicate object storage class information.
- Updated file metadata structures (e.g., ArchiveObj, FileInfo, RespFile) to include a StorageClass field.
- Modified relevant API response functions (e.g., GetFileInfo, GetFileList) to populate and return storage classes.
- Integrated functionality for retrieving object storage classes from underlying storage systems (e.g., S3) and wrapping them in lists.

* feat(driver/s3): Added the "Other" interface and implemented it in the S3 driver.

A new `driver.Other` interface has been added and defined in the `other.go` file.
The S3 driver has been updated to implement this new interface, extending its functionality.

* feat(s3): Add S3 object archive and thaw task management

This commit introduces comprehensive support for S3 object archive and thaw operations, managed asynchronously through a new task system.

- **S3 Transition Task System**:
  - Adds a new `S3Transition` task configuration, including workers, max retries, and persistence options.
  - Initializes `S3TransitionTaskManager` to handle asynchronous S3 archive/thaw requests.
  - Registers dedicated API routes for monitoring S3 transition tasks.

- **Integrate S3 Archive/Thaw with Other API**:
  - Modifies the `Other` API handler to intercept `archive` and `thaw` methods for S3 storage drivers.
  - Dispatches these operations as `S3TransitionTask` instances to the task manager for background processing.
  - Returns a task ID to the client for tracking the status of the dispatched operation.

- **Refactor `other` package for improved API consistency**:
  - Exports previously internal structs such as `archiveRequest`, `thawRequest`, `objectDescriptor`, `archiveResponse`, `thawResponse`, and `restoreStatus` by making their names public.
  - Makes helper functions like `decodeOtherArgs`, `normalizeStorageClass`, and `normalizeRestoreTier` public.
  - Introduces new constants for various S3 `Other` API methods.
2025-10-16 17:22:54 +08:00
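The alias normalization described in the first part of this commit can be sketched as a simple mapping onto the AWS storage-class strings; the exact alias set accepted by the driver is an assumption:

```go
package sketch

import "strings"

// normalizeStorageClass maps common aliases onto AWS S3 storage-class values
// (STANDARD, STANDARD_IA, GLACIER, DEEP_ARCHIVE, ...). Unknown inputs are
// passed through raw; the driver logs a warning in that case.
func normalizeStorageClass(in string) string {
	switch strings.ToUpper(strings.ReplaceAll(strings.TrimSpace(in), "-", "_")) {
	case "", "STANDARD":
		return "STANDARD"
	case "IA", "STANDARD_IA":
		return "STANDARD_IA"
	case "GLACIER":
		return "GLACIER"
	case "DEEP_ARCHIVE", "DEEPARCHIVE":
		return "DEEP_ARCHIVE"
	default:
		return in
	}
}
```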
千石
35d322443b feat(driver): Add URL signing support (#9347)
Introduces the ability to sign generated URLs for enhanced security and access control.

This feature is activated by configuring a `PrivateKey`, `UID`, and `ValidDuration` in the driver settings. If a private key is provided, the driver will sign the output URLs, making them time-limited based on the `ValidDuration`. The `ValidDuration` defaults to 30 minutes if not specified.

The core signing logic is encapsulated in the new `sign.go` file. The `driver.go` file integrates this signing process before returning the final URL.
2025-10-11 19:14:13 +08:00
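How such signing typically works, as a hedged sketch: an HMAC over the URL, UID, and expiry, appended as query parameters. The actual parameter names and MAC layout in `sign.go` are assumptions:

```go
package sketch

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// signURL appends uid/expires/sign query parameters; ValidDuration falls back
// to the 30-minute default the commit mentions.
func signURL(rawURL, privateKey, uid string, valid time.Duration) string {
	if valid <= 0 {
		valid = 30 * time.Minute
	}
	expires := time.Now().Add(valid).Unix()
	mac := hmac.New(sha256.New, []byte(privateKey))
	fmt.Fprintf(mac, "%s|%s|%d", rawURL, uid, expires)
	sig := hex.EncodeToString(mac.Sum(nil))
	return fmt.Sprintf("%s?uid=%s&expires=%d&sign=%s", rawURL, uid, expires, sig)
}
```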
D@' 3z K!7
81a7f28ba2 feat(drivers): add ProtonDrive driver (#9331)
- Implement complete ProtonDrive storage driver with end-to-end encryption support
- Add authentication via username/password with credential caching and reusable login
- Support all core operations: List, Link, Put, Copy, Move, Remove, Rename, MakeDir
- Include encrypted file operations with PGP key management and node passphrase handling
- Add temporary HTTP server for secure file downloads with range request support
- Support media streaming using temp server range requests
- Implement progress tracking for uploads and downloads
- Support directory operations with circular move detection
- Add proper error handling and panic recovery for external library integration

Closes #9312
2025-09-30 14:18:58 +08:00
textrix
fe564c42da feat: add pCloud driver support (#9339)
- Implement OAuth2 authentication with US/EU region support
- Add file operations (list, upload, download, delete, rename, move, copy)
- Add folder operations (create, rename, move, delete)
- Enhance error handling with pCloud-specific retry logic
- Use correct API methods: GET for reads, POST for writes
- Implement direct upload approach for better performance
- Add exponential backoff for failed requests with 4xxx/5xxx classification
2025-09-30 14:17:54 +08:00
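The backoff-with-classification policy above can be sketched as follows, assuming pCloud-style numeric result codes where 0 means success; attempt count and base delay are assumptions:

```go
package sketch

import (
	"fmt"
	"time"
)

// retryWithBackoff retries transport errors and 5xxx (server) result codes
// with exponential backoff, while 4xxx (caller) result codes fail fast.
func retryWithBackoff(call func() (int, error)) error {
	delay := 200 * time.Millisecond // base delay is an assumption
	var lastErr error
	for attempt := 0; attempt < 4; attempt++ {
		code, err := call()
		if err == nil && code == 0 {
			return nil
		}
		if code >= 4000 && code < 5000 {
			return fmt.Errorf("pcloud client error %d", code) // no retry
		}
		lastErr = fmt.Errorf("attempt %d: code=%d err=%v", attempt+1, code, err)
		time.Sleep(delay)
		delay *= 2
	}
	return lastErr
}
```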
Chesyre
d17889bf8e feat(gofile): add configurable link expiration handling (#9329)
* feat(driver): add Gofile storage driver

Add support for Gofile.io cloud storage service with full CRUD operations.
Features:
- File and folder listing
- Upload and download functionality
- Create, move, rename, copy, and delete operations
- Direct link generation for file access
- API token authentication
The driver implements all required driver interfaces and follows
the existing driver patterns in the codebase.

* feat(gofile): add configurable link expiration handling

- Adjusts driver addition metadata to accept LinkExpiry and DirectLinkExpiry options for caching and API expiry control (drivers/gofile/meta.go:10).

- Applies the new options when building file links, setting optional local cache expiration (drivers/gofile/driver.go:101) and sending an expireTime to the direct-link API (drivers/gofile/util.go:202).

- Logs Gofile API error payloads and validates the structured error response before returning it (drivers/gofile/util.go:141).

- Adds the required imports and returns the configured model.Link instance (drivers/gofile/driver.go:6).
2025-09-30 14:16:28 +08:00
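A sketch of feeding a `LinkExpiry`-style option into a link's local cache expiration, with a trimmed stand-in for alist's `model.Link` (field names and the hours unit are assumptions):

```go
package sketch

import "time"

// Link is a trimmed stand-in for alist's model.Link; only the expiration
// wiring matters for this sketch.
type Link struct {
	URL        string
	Expiration *time.Duration
}

// buildLink applies a LinkExpiry-style option as the cache expiration of the
// generated link; zero leaves the link uncached.
func buildLink(url string, linkExpiryHours int) *Link {
	l := &Link{URL: url}
	if linkExpiryHours > 0 {
		d := time.Duration(linkExpiryHours) * time.Hour
		l.Expiration = &d
	}
	return l
}
```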
千石
4f8bc478d5 refactor(driver): Refactored directory link check logic (#9324)
- Use `filePath` variable to simplify path handling
- Replace `isSymlinkDir` with `isLinkedDir` in `isFolder` check
- Use simplified path variables in `times.Stat` function calls

refactor(util): Optimized directory link check functions

- Renamed `isSymlinkDir` to `isLinkedDir` to expand Windows platform support
- Corrected path resolution logic to ensure link paths are absolute
- Added error handling to prevent path resolution failures
2025-09-14 21:03:58 +08:00
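The renamed helper can be pictured like this: a sketch that covers plain symlinks only (the commit's Windows junction support is glossed over):

```go
package sketch

import (
	"os"
	"path/filepath"
)

// isLinkedDir resolves a link's target to an absolute path and reports
// whether it is a directory, mirroring the commit's "ensure link paths are
// absolute" correction.
func isLinkedDir(dir, name string) bool {
	p := filepath.Join(dir, name)
	fi, err := os.Lstat(p)
	if err != nil || fi.Mode()&os.ModeSymlink == 0 {
		return false
	}
	target, err := os.Readlink(p)
	if err != nil {
		return false
	}
	if !filepath.IsAbs(target) {
		target = filepath.Join(dir, target) // make the link path absolute
	}
	tfi, err := os.Stat(target)
	return err == nil && tfi.IsDir()
}
```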
千石
e1800f18e4 feat: Check usage before deleting storage (#9322)
* feat(storage): Added role and user path checking functionality

- Added `GetAllRoles` function to retrieve all roles
- Added `GetAllUsers` function to retrieve all users
- Added `firstPathSegment` function to extract the first segment of a path
- Checks whether a storage object is used by a role or user, and returns the relevant information so the reference can be removed

* fix(storage): Fixed a potential null value issue caused by not checking `firstMount`.

- Added a check to see if `firstMount` is null to prevent logic errors.
- Adjusted the loading logic of `GetAllRoles` and `GetAllUsers` to only execute when `firstMount` is non-null.
- Fixed the `usedBy` check logic to ensure that an error message is returned under the correct conditions.
- Optimized code structure to reduce unnecessary execution paths.
2025-09-12 17:56:23 +08:00
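The `firstPathSegment` helper added here is small enough to sketch directly; edge-case behavior is assumed:

```go
package sketch

import "strings"

// firstPathSegment yields the first component of a mount-style path, so
// "/movies/2024/a.mkv" maps to "movies" and can be compared against the
// first segment of a storage's mount path.
func firstPathSegment(p string) string {
	p = strings.Trim(p, "/")
	if i := strings.IndexByte(p, '/'); i >= 0 {
		return p[:i]
	}
	return p
}
```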
D@' 3z K!7
16cce37947 fix(drivers): add session renewal cron for MediaFire driver (#9321)
- Implement automatic session token renewal every 6-9 minutes
- Add validation for required SessionToken and Cookie fields in Init
- Handle session expiration by calling renewToken on validation failure
- Prevent storage failures due to MediaFire session timeouts

Fixes session closure issues that occur after server restarts or extended periods.

Co-authored-by: Da3zKi7 <da3zki7@duck.com>
2025-09-12 17:53:47 +08:00
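A sketch of the 6-9 minute renewal cadence (the renew callback and stop-channel shapes are assumptions, not the MediaFire driver's API):

```go
package sketch

import (
	"math/rand"
	"time"
)

// startRenewalLoop sleeps a randomized interval between 6 and 9 minutes,
// renews the session token, and repeats until stopped.
func startRenewalLoop(renew func() error, stop <-chan struct{}) {
	go func() {
		for {
			wait := 6*time.Minute + time.Duration(rand.Int63n(int64(3*time.Minute)))
			select {
			case <-stop:
				return
			case <-time.After(wait):
				_ = renew() // the driver also renews when validation fails
			}
		}
	}()
}
```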
千石
6e7c7d1dd0 refactor(auth): Optimize permission path processing logic (#9320)
- Changed permission path collection from map to slice to improve code readability
- Removed redundant path checks to improve path addition efficiency
- Restructured the loop logic for path processing to simplify the path permission assignment process
2025-09-11 21:16:33 +08:00
Chesyre
28a8428559 feat(driver): add Gofile storage driver (#9318)
Add support for Gofile.io cloud storage service with full CRUD operations.
Features:
- File and folder listing
- Upload and download functionality
- Create, move, rename, copy, and delete operations
- Direct link generation for file access
- API token authentication
The driver implements all required driver interfaces and follows
the existing driver patterns in the codebase.
2025-09-11 11:46:31 +08:00
D@' 3z K!7
d0026030cb feat(drivers): add MediaFire driver support (#9319)
- Implement complete MediaFire storage driver
- Add authentication via session_token and cookie
- Support all core operations: List, Get, Link, Put, Copy, Move, Remove, Rename, MakeDir
- Include thumbnail generation for media files
- Handle MediaFire's resumable upload API with multi-unit transfers
- Add proper error handling and progress reporting

Closes "Request support for Mediafire" #7869

Co-authored-by: Da3zKi7 <da3zki7@duck.com>
2025-09-11 11:46:09 +08:00
千石
fcbc79cb24 feat: Support 123pan safebox (#9311)
* feat(meta): Added a SafePassword field

- Added the SafePassword field to meta.go
- Revised the field format to align with the code style
- The SafePassword field is used to supplement the extended functionality

* feat(driver): Added support for safe unlocking logic

- Added safe file unlocking logic in `driver.go`, returning an error if unlocking fails.
- Introduced the `safeBoxUnlocked` variable of type `sync.Map` to record the IDs of unlocked files.
- Enhanced error handling logic to automatically attempt to unlock safe files and re-retrieve the file list.
- Added the `IsLock` field to file types in `types.go` to identify whether they are safe files.
- Added a constant definition for the `SafeBoxUnlock` interface address in `util.go`.
- Added the `unlockSafeBox` method to unlock a safe with a specified file ID via the API.
- Optimized the file retrieval logic to automatically call the unlock method when the safe is locked.

* refactor(driver): Optimize lock field type

- Changed the `IsLock` field type from `int` to `bool` for better semantics.
- Updated the check logic to use direct Boolean comparisons to improve code readability and accuracy.
2025-09-05 19:58:27 +08:00
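A sketch of the `sync.Map` bookkeeping the commit describes (the unlock callback is a placeholder for the `SafeBoxUnlock` API call):

```go
package sketch

import "sync"

// safeBoxUnlocked mirrors the sync.Map the commit introduces for remembering
// already-unlocked safe-box IDs.
var safeBoxUnlocked sync.Map

// ensureUnlocked unlocks a safe-box file once per session and records the ID,
// so later listings skip the API call.
func ensureUnlocked(fileID string, unlock func(id string) error) error {
	if _, ok := safeBoxUnlocked.Load(fileID); ok {
		return nil // unlocked earlier in this session
	}
	if err := unlock(fileID); err != nil {
		return err
	}
	safeBoxUnlocked.Store(fileID, true)
	return nil
}
```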
Sakkyoi Cheng
930f9f6096 fix(ssologin): missing role in SSO auto-registration and minor callback issue (#9305)
* fix(ssologin): return after error response

* fix(ssologin): set default role for SSO user creation
2025-09-04 22:15:39 +08:00
千石
23107483a1 refactor(storage): Comment out the path validation logic (#9308)
- Comment out the error return logic for paths with "/"
- Remove storage path restrictions to allow for flexible handling of root paths
2025-09-04 22:14:33 +08:00
千石
4b288a08ef fix: session invalid issue (#9301)
* feat(auth): Enhanced device login session management

- Upon login, obtain and verify `Client-Id` to ensure unique device sessions.
- If there are too many device sessions, clean up old ones according to the configured policy or return an error.
- If a device session is invalid, deregister the old token and return a 401 error.
- Added `EnsureActiveOnLogin` function to handle the creation and refresh of device sessions during login.

* feat(session): Modified session deletion logic to mark sessions as inactive.

- Changed session deletion logic to mark sessions as inactive using the `MarkInactive` method.
- Adjusted error handling to ensure an error is returned if marking fails.

* feat(session): Added device limits and eviction policies

- Added a device limit, controlling the maximum number of devices using the `MaxDevices` configuration option.
- If the number of devices exceeds the limit, the configured eviction policy is used.
- If the policy is `evict_oldest`, the oldest device is evicted.
- Otherwise, an error message indicating too many devices is returned.

* refactor(session): Filter for the user's oldest active session

- Renamed `GetOldestSession` to `GetOldestActiveSession` to more accurately reflect its functionality
- Updated the SQL query to add the `status = SessionActive` condition to retrieve only active sessions
- Replaced all callpoints and unified the new function name to ensure logical consistency
2025-08-29 21:20:29 +08:00
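The `MaxDevices` / `evict_oldest` policy can be sketched as follows, with assumed types (the real code pulls the victim via `GetOldestActiveSession`):

```go
package sketch

import "errors"

// Session is a stand-in for the model type; only ordering matters here.
type Session struct{ DeviceKey string }

// enforceDeviceLimit does nothing under the MaxDevices cap; over it, the
// oldest active session is evicted under "evict_oldest", otherwise the login
// is rejected with a too-many-devices error.
func enforceDeviceLimit(oldestFirst []Session, maxDevices int, policy string,
	evict func(Session) error) error {
	if maxDevices <= 0 || len(oldestFirst) < maxDevices {
		return nil
	}
	if policy == "evict_oldest" {
		return evict(oldestFirst[0])
	}
	return errors.New("too many devices")
}
```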
Sky_slience
63391a2091 fix(readme): remove outdated sponsor links from README files (#9300)
Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-08-29 14:56:54 +08:00
JoaHuang
a11e4cfb31 Merge pull request #9299 from okatu-loli/session-manage-2
fix: session login error
2025-08-29 13:45:10 +08:00
okatu-loli
9a7c82a71e feat(auth): Optimized device session handling logic
- Introduced middleware to handle device sessions
- Changed `handleSession` to `HandleSession` in multiple places in `auth.go` to maintain consistent naming
- Updated response structure to return `device_key` and `token`
2025-08-29 13:31:44 +08:00
okatu-loli
8623da5361 feat(session): Added user session limit and device eviction logic
- Renamed `CountSessionsByUser` to `CountActiveSessionsByUser` and added session status filtering
- Added user and device session limit, with policy handling when exceeding the limit
- Introduced device eviction policy: If the maximum number of devices is exceeded, the oldest session will be evicted using the "evict_oldest" policy
- Modified `LastActive` update logic to ensure accurate session activity time
2025-08-29 11:53:55 +08:00
千石
84adba3acc feat(user): Enhanced role assignment logic (#9297)
- Imported the `utils` package
- Modified the role assignment logic to prevent assigning administrator or guest roles to users
2025-08-28 09:57:34 +08:00
千石
3bf0af1e68 fix(session): Fixed the session status update logic. (#9296)
- Removed the error returned when the session status is `SessionInactive`.
- Updated the `LastActive` field of the session to always record the current time.
2025-08-28 09:57:13 +08:00
千石
de09ba08b6 chore(deps): Update 115driver dependency to v1.1.2 (#9294)
- Upgrade `github.com/SheltonZhu/115driver` to v1.1.2 in `go.mod`
- Modify `replace` to point to `github.com/okatu-loli/115driver v1.1.2`
- Remove old version checksum from `go.sum` and add new version checksum
2025-08-27 17:46:34 +08:00
千石
c64f899a63 feat: implement session management (#9286)
* feat(auth): Added device session management

- Added the `handleSession` function to manage user device sessions and verify client identity
- Updated `auth.go` to call `handleSession` for device handling when a user logs in
- Added the `Session` model to database migrations
- Added `device.go` and `session.go` files to handle device session logic
- Updated `settings.go` to add device-related configuration items, such as the maximum number of devices, device eviction policy, and session TTL

* feat(session): Adds session management features

- Added `SessionInactive` error type in `device.go`
- Added session-related APIs in `router.go` to support listing and evicting sessions
- Added `ListSessionsByUser`, `ListSessions`, and `MarkInactive` methods in `session.go`
- Returns an appropriate error when the session state is `SessionInactive`

* feat(auth): Marks the device session as invalid.

- Import the `session` package into the `auth` module to handle device session status.
- Add a check in the login logic. If `device_key` is obtained, call `session.MarkInactive` to mark the device session as invalid.
- Store the invalid status in the context variable `session_inactive` for subsequent middleware checks.
- Add a check in the session refresh logic to abort the process if the current session has been marked invalid.

* feat(auth, session): Added device information processing and session management changes

- Updated device handling logic in `auth.go` to pass user agent and IP information
- Adjusted database queries in `session.go` to optimize session query fields and add `user_agent` and `ip` fields
- Modified the `Handle` method to add `ua` and `ip` parameters to store the user agent and IP address
- Added the `SessionResp` structure to return a session response containing `user_agent` and `ip`
- Updated the `/admin/user/create` and `/webdav` endpoints to pass the user agent and IP address to the device handler
2025-08-25 19:46:38 +08:00
千石
3319f6ea6a feat(search): Optimized search result filtering and paging logic (#9287)
- Introduced the `filteredNodes` list to optimize the node filtering process
- Filtered results based on the page limit during paging
- Modified search logic to ensure nodes are within the user's base path
- Added access permission checks for node metadata
- Adjusted paging logic to avoid redundant node retrieval
2025-08-25 19:46:24 +08:00
千石
d7723c378f chore(deps): Upgrade 115driver to v1.1.1 (#9283)
- Upgraded `github.com/SheltonZhu/115driver` from v1.0.34 to v1.1.1
- Updated the corresponding version verification information in `go.sum`
2025-08-25 19:46:10 +08:00
千石
a9fcd51bc4 fix: ensure DefaultRole stores role ID while exposing role name in APIs (#9279)
* fix(setting): ensure DefaultRole stores role ID while exposing role name in APIs

- Simplified initial settings to use `model.GUEST` as the default role ID instead of querying roles at startup.
- Updated `GetSetting`, `ListSettings` handlers to:
  - Convert stored role ID into the corresponding role name when returning data.
  - Preserve dynamic role options for selection.
- Removed unused `strings` import and role preloading logic from `InitialSettings`.
- This change avoids DB dependency during initialization while keeping consistent role display for frontend clients.

* fix(setting): ensure DefaultRole stores role ID while exposing role
name in APIs (fix/settings-get-role)

- Simplify initial settings to use `model.GUEST` as the default role ID
  instead of querying roles at startup.
- Update `GetSetting`, `ListSettings` handlers to:
  - Convert stored role ID into the corresponding role name when
    returning data.
  - Preserve dynamic role options for selection.
- Remove unused `strings` import and role preloading logic from
  `InitialSettings`.
- Avoid DB dependency during initialization while keeping consistent
  role display for frontend clients.
2025-08-19 15:01:32 +08:00
千石
74e384175b fix(lanzou): correct comment parsing logic in lanzou driver (#9278)
- Adjusted logic to skip incrementing index when exiting comments.
- Added checks to continue loop if inside a single-line or block comment.
- Prevents erroneous parsing and retains intended comment exclusion.
2025-08-19 00:53:52 +08:00
千石
eca500861a feat: add user registration endpoint and role-based default settings (#9277)
* feat(setting): add role-based default and registration settings (closed #feat/register-and-statistics)

- Added `AllowRegister` and `DefaultRole` settings to site configuration.
- Integrated dynamic role options for `DefaultRole` using `op.GetRoles`.
- Updated `setting.go` handlers to manage `DefaultRole` options dynamically.
- Modified `const.go` to include new site settings constants.
- Updated dependencies in `go.mod` and `go.sum` to support new functionality.

* feat(register-and-statistics): add user registration endpoint

- Added `POST /auth/register` endpoint to support user registration.
- Implemented registration logic in `auth.go` with dynamic role assignment.
- Integrated settings `AllowRegister` and `DefaultRole` for registration flow.
- Updated imports to include new modules: `conf`, `setting`.
- Adjusted user creation logic to use `DefaultRole` setting dynamically.

* feat(register-and-statistics): add user registration endpoint (#register-and-statistics)

- Added `POST /auth/register` endpoint to support user registration.
- Implemented registration logic in `auth.go` with dynamic role assignment.
- Integrated `AllowRegister` and `DefaultRole` settings for registration flow.
- Updated imports to include new modules: `conf`, `setting`.
- Adjusted user creation logic to use `DefaultRole` dynamically.

* feat(register-and-statistics): enhance role management logic (#register-and-statistics)

- Refactored CreateRole and UpdateRole functions to handle default role.
- Added dynamic role assignment logic in 'role.go' using conf settings.
- Improved request handling in 'handles/role.go' with structured data.
- Implemented default role logic in 'db/role.go' to update non-default roles.
- Modified 'model/role.go' to include a 'Default' field for role management.

* feat(register-and-statistics): enhance role management logic

- Refactor CreateRole and UpdateRole to handle default roles.
- Add dynamic role assignment using conf settings in 'role.go'.
- Improve request handling with structured data in 'handles/role.go'.
- Implement default role logic in 'db/role.go' for non-default roles.
- Modify 'model/role.go' to include 'Default' field for role management.

* feat(register-and-statistics): improve role handling logic

- Switch from role names to role IDs for better consistency.
- Update logic to prioritize "guest" for default role ID.
- Adjust `DefaultRole` setting to use role IDs.
- Refactor `getRoleOptions` to return role IDs as a comma-separated string.

* feat(register-and-statistics): improve role handling logic
2025-08-18 16:38:21 +08:00
千石
97d4f79b96 fix: resolve webdav decode issue (#9268)
* fix: resolve webdav handshake error in permission checks

- Updated role permission logic to handle bidirectional subpaths,
  fixing handshake termination by remote host due to path mismatch.
- Refactored function naming for consistency and clarity.
- Enhanced filtering of objects based on user permissions.
- Modified `makePropstatResponse` to preserve encoded href paths.
- Added test for `makePropstatResponse` to ensure href encoding.

* Delete server/webdav/makepropstatresponse_test.go

* ci(workflow): set GOPROXY for Go builds on GitHub Actions

- Use `GOPROXY=https://proxy.golang.org,direct` to speed up module downloads
- Mitigates network flakiness (e.g., checksum DB timeouts/rate limits)
- `,direct` provides fallback for private/unproxyable modules
- No build logic changes; only affects dependency resolution across all matrix targets

---------

Co-authored-by: AlistGo <opsgit88@gmail.com>
2025-08-16 20:55:17 +08:00
千石
fcfb3369d1 fix: webdav error location (#9266)
* feat: improve WebDAV permission handling and user role fetching

- Added logic to handle root permissions in WebDAV requests.
- Improved the user role fetching mechanism.
- Enhanced path checks and permission scopes in role_perm.go.
- Set FetchRole function to avoid import cycles between modules.

* fix(webdav): resolve connection reset issue by encoding paths

- Adjust path encoding in webdav.go to prevent connection reset.
- Utilize utils.EncodePath for correct path formatting.
- Ensure proper handling of directory paths with trailing slash.

* fix(webdav): resolve connection reset issue by encoding paths

- Adjust path encoding in webdav.go to prevent connection reset.
- Utilize utils.FixAndCleanPath for correct path formatting.
- Ensure proper handling of directory paths with trailing slash.

* fix: resolve webdav handshake error in permission checks

- Updated role permission logic to handle bidirectional subpaths.
- This adjustment fixes the issue where remote host terminates the
  handshake due to improper path matching.

* fix: resolve webdav handshake error in permission checks (fix/fix-webdav-error)

- Updated role permission logic to handle bidirectional subpaths,
  fixing handshake termination by remote host due to path mismatch.
- Refactored function naming for consistency and clarity.
- Enhanced filtering of objects based on user permissions.

* fix: resolve webdav handshake error in permission checks

- Updated role permission logic to handle bidirectional subpaths,
  fixing handshake termination by remote host due to path mismatch.
- Refactored function naming for consistency and clarity.
- Enhanced filtering of objects based on user permissions.
2025-08-15 23:10:55 +08:00
千石
aea3ba1499 feat: add tag backup and fix bugs (#9265)
* feat(label): enhance label file binding and router setup (feat/add-tag-backup)

- Add `GetLabelsByFileNamesPublic` to retrieve labels using file names.
- Refactor router setup for label and file binding routes.
- Improve `toObjsResp` for efficient label retrieval by file names.
- Comment out unnecessary user ID parameter in `toObjsResp`.

* feat(label): enhance label file binding and router setup

- Add `GetLabelsByFileNamesPublic` for label retrieval by file names.
- Refactor router setup for label and file binding routes.
- Improve `toObjsResp` for efficient label retrieval by file names.
- Comment out unnecessary user ID parameter in `toObjsResp`.

* refactor(db): comment out debug print in GetLabelIds (#feat/add-tag-backup)

- Comment out debug print statement in GetLabelIds to clean up logs.
- Enhance code readability by removing unnecessary debug output.

* feat(label-file-binding): add batch creation and improve label ID handling

- Introduced `CreateLabelFileBinDingBatch` API for batch label binding.
- Added `collectLabelIDs` helper function to handle label ID parsing.
- Enhanced label ID handling to support varied delimiters and input formats.
- Refactored `CreateLabelFileBinDing` logic for improved code readability.
- Updated router to include `POST /label_file_binding/create_batch`.
2025-08-15 23:09:00 +08:00
千石
6b2d81eede feat(user): enhance path management and role handling (#9249)
- Add `GetUsersByRole` function for fetching users by role.
- Introduce `GetAllBasePathsFromRoles` to aggregate paths from roles.
- Refine path handling in `pkg/utils/path.go` for normalization.
- Comment out base path prefix updates to simplify role operations.
2025-08-06 16:31:36 +08:00
千石
85fe4e5bb3 feat(alist_v3): add IntSlice type for JSON unmarshalling (#9247)
- Add `IntSlice` type to handle both single int and array in JSON.
- Modify `MeResp` struct to use `IntSlice` for `Role` field.
- Import `encoding/json` for JSON operations.
2025-08-04 12:02:45 +08:00
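The idea behind `IntSlice` is a custom `UnmarshalJSON` that tries both shapes; a sketch close to what the commit describes, though not the exact alist code:

```go
package sketch

import "encoding/json"

// IntSlice accepts either a bare int or an array of ints in JSON, so both
// `"role": 2` and `"role": [2, 3]` decode into the same Go type.
type IntSlice []int

func (s *IntSlice) UnmarshalJSON(data []byte) error {
	var single int
	if err := json.Unmarshal(data, &single); err == nil {
		*s = IntSlice{single}
		return nil
	}
	var many []int
	if err := json.Unmarshal(data, &many); err != nil {
		return err
	}
	*s = IntSlice(many)
	return nil
}
```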
千石
52da07e8a7 feat(123_open): add new driver support for 123 Open (#9246)
- Implement new driver for 123 Open service, enabling file operations
  such as listing, uploading, moving, and removing files.
- Introduce token management for authentication and authorization.
- Add API integration for various file operations and actions.
- Include utility functions for handling API requests and responses.
- Register the new driver in the existing drivers' list.
2025-08-04 11:56:57 +08:00
Sky_slience
46de9e9ebb fix(driver): 123 download and modify request headers on the frontend (#9236)
Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-08-03 20:00:09 +08:00
千石
ae90fb579b feat(log): enhance log formatter to respect NO_COLOR env variable (#9239)
- Adjust log formatter to disable colors when NO_COLOR or ALIST_NO_COLOR
  environment variables are set.
- Reorganize formatter settings for better readability.
2025-08-03 09:26:23 +08:00
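A sketch of the environment check wired into a logrus `TextFormatter` (alist logs via logrus; the exact formatter options used are assumptions):

```go
package sketch

import (
	"os"

	"github.com/sirupsen/logrus"
)

// noColor reports whether colored log output should be disabled, following
// the NO_COLOR / ALIST_NO_COLOR convention the commit describes.
func noColor() bool {
	return os.Getenv("NO_COLOR") != "" || os.Getenv("ALIST_NO_COLOR") != ""
}

// newFormatter builds a text formatter that respects the env check.
func newFormatter() *logrus.TextFormatter {
	nc := noColor()
	return &logrus.TextFormatter{
		DisableColors: nc,
		ForceColors:   !nc,
	}
}
```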
Sky_slience
394a18cbd9 Fix 123 download (#9235)
* fix(driver): handle additional HTTP status code 210 for URL redirection

* fix(driver): 123 download url error

---------

Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-07-30 16:55:32 +08:00
千石
280960ce3e feat(user-db): enhance user management with role-based queries (allow-edit-role-guest) (#9234)
- Add `GetUsersByRole` function to fetch users based on their roles.
- Extend `UpdateUserBasePathPrefix` to accept optional user lists.
- Ensure path cleaning in `UpdateUserBasePathPrefix` for consistency.
- Integrate guest role fetching in `auth.go` middleware.
- Utilize `GetUsersByRole` in `role.go` for base path modifications.
- Remove redundant line in `role.go` role modification logic.
2025-07-30 13:15:35 +08:00
Sky_slience
74332e91fb feat(ui): add new UI configuration option to settings (#9233)
* feat(ui): add new UI configuration option to settings

* fix(ui): disable new UI feature by default

---------

Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-07-30 12:22:02 +08:00
Sky_slience
540d6c7064 fix(meta): update OAuth token URL and improve default client credentials (#9231) 2025-07-30 10:48:33 +08:00
千石
55b2bb6b80 feat(user-management): Enhance admin management and role handling 2025-07-29 19:45:28 +08:00
qianshi
d5df6fa4cf Merge branch 'main' into feat/allow-edit-role-guest 2025-07-29 19:13:01 +08:00
千石
3353055482 Update Dockerfile.ci (#9230)
chore(docker): Update base image from alpine:edge to alpine:3.20.7 in Dockerfile.ci
2025-07-29 18:35:47 +08:00
千石
4d7c2a09ce docs(README): Add API documentation links across multiple languages (#9225)
- Add API documentation section to `README.md` with link to Apifox
- Add API documentation section to `README_ja.md` with Japanese translation and link to Apifox
- Add API documentation section to `README_cn.md` with Chinese translation and link to Apifox
2025-07-29 09:42:34 +08:00
qianshi
5b8c26510b feat(user-management): Enhance admin management and role handling
- Add `CountEnabledAdminsExcluding` function to count enabled admins excluding a specific user.
- Implement `CountUsersByRoleAndEnabledExclude` in `internal/db/user.go` to support exclusion logic.
- Refactor role handling with switch-case for better readability in `server/handles/role.go`.
- Ensure at least one enabled admin remains when disabling an admin in `server/handles/user.go`.
- Maintain guest role name consistency when updating roles in `internal/op/role.go`.
2025-07-28 23:07:07 +08:00
千石
91cc7529a0 feat(user/role/storage): enhance user and storage operations with additional validations (#9223)
- Update `CreateUser` to adjust `BasePath` based on user roles and clean paths.
- Modify `UpdateUser` to incorporate role-based path changes.
- Add validation in `CreateStorage` and `UpdateStorage` to prevent root mount path.
- Prevent changes to admin user's role and username in user handler.
- Update `UpdateRole` to modify user base paths when role paths change, and clear user cache accordingly.
- Import `errors` package to handle error messages.
2025-07-27 22:25:45 +08:00
千石
f61d13d433 refactor(convert_role): Improve role conversion logic for legacy formats (#9219)
- Add new imports: `database/sql`, `encoding/json`, and `conf` package in `convert_role.go`.
- Simplify permission entry initialization by removing redundant struct formatting.
- Update error logging messages for better clarity.
- Replace `op.GetUsers` with direct database access for fetching user roles.
- Implement role update logic using `rawDb` and handle legacy int role conversion.
- Count the number of users whose roles are updated and log completion.
- Introduce `IsLegacyRoleDetected` function to check for legacy role formats.
- Modify `cmd/common.go` to invoke role conversion if legacy format is detected.
2025-07-26 15:20:08 +08:00
千石
00120cba27 feat: enhance permission control and label management (#9215)
* Label management

* PR check optimizations

* feat(role): Implement role management functionality

- Add role management routes in `server/router.go` for listing, getting, creating, updating, and deleting roles
- Introduce `initRoles()` in `internal/bootstrap/data/data.go` for initializing roles during bootstrap
- Create `internal/op/role.go` to handle role operations including caching and singleflight
- Implement role handler functions in `server/handles/role.go` for API responses
- Define database operations for roles in `internal/db/role.go`
- Extend `internal/db/db.go` for role model auto-migration
- Design `internal/model/role.go` to represent role structure with ID, name, description, base path, and permissions
- Initialize default roles (`admin` and `guest`) in `internal/bootstrap/data/role.go` during startup

* refactor(user roles): Support multiple roles for users

- Change the `Role` field type from `int` to `[]int` in `drivers/alist_v3/types.go` and `drivers/quqi/types.go`.
- Update the `Role` field in `internal/model/user.go` to use a new `Roles` type with JSON and database support.
- Modify `IsGuest` and `IsAdmin` methods to check for roles using `Contains` method.
- Update `GetUserByRole` method in `internal/db/user.go` to handle multiple roles.
- Add `roles.go` to define a new `Roles` type with JSON marshalling and scanning capabilities.
- Adjust code in `server/handles/user.go` to compare roles with `utils.SliceEqual`.
- Change role initialization for users in `internal/bootstrap/data/dev.go` and `internal/bootstrap/data/user.go`.
- Update `Role` handling in `server/handles/task.go`, `server/handles/ssologin.go`, and `server/handles/ldap_login.go`.

* feat(user/role): Add path limit check for user and role permissions

- Add new permission bit for checking path limits in `user.go`
- Implement `CheckPathLimit` method in `User` struct to validate path access
- Modify `JoinPath` method in `User` to enforce path limit checks
- Update `role.go` to include path limit logic in `Role` struct
- Document new permission bit in `Role` and `User` comments for clarity

* feat(permission): Add role-based permission handling

- Introduce `role_perm.go` for managing user permissions based on roles.
- Implement `HasPermission` and `MergeRolePermissions` functions.
- Update `webdav.go` to utilize role-based permissions instead of direct user checks.
- Modify `fsup.go` to integrate `CanAccessWithRoles` function.
- Refactor `fsread.go` to use `common.HasPermission` for permission validation.
- Adjust `fsmanage.go` for role-based access control checks.
- Enhance `ftp.go` and `sftp.go` to manage FTP access via roles.
- Update `fsbatch.go` to employ `MergeRolePermissions` for batch operations.
- Replace direct user permission checks with role-based permission handling across various modules.

* refactor(user): Replace integer role values with role IDs

- Change `GetAdmin()` and `GetGuest()` functions to retrieve role by name and use role ID.
- Add patch for version `v3.45.2` to convert legacy integer roles to role IDs.
- Update `dev.go` and `user.go` to use role IDs instead of integer values for roles.
- Remove redundant code in `role.go` related to guest role creation.
- Modify `ssologin.go` and `ldap_login.go` to set user roles to nil instead of using integer roles.
- Introduce `convert_roles.go` to handle conversion of legacy roles and ensure role existence in the database.

* feat(role_perm): implement support for multiple base paths for roles

- Modify role permission checks to support multiple base paths
- Update role creation and update functions to handle multiple base paths
- Add migration script to convert old base_path to base_paths
- Define new Paths type for handling multiple paths in the model
- Adjust role model to replace BasePath with BasePaths
- Update existing patches to handle roles with multiple base paths
- Update bootstrap data to reflect the new base_paths field

* feat(role): Restrict modifications to default roles (admin and guest)

- Add validation to prevent changes to "admin" and "guest" roles in `UpdateRole` and `DeleteRole` functions.
- Introduce `ErrChangeDefaultRole` error in `internal/errs/role.go` to standardize error messaging.
- Update role-related API handlers in `server/handles/role.go` to enforce the new restriction.
- Enhance comments in `internal/bootstrap/data/role.go` to clarify the significance of default roles.
- Ensure consistent error responses for unauthorized role modifications across the application.

* 🔄 **refactor(role): Enhance role permission handling**

- Replaced `BasePaths` with `PermissionPaths` in `Role` struct for better permission granularity.
- Introduced JSON serialization for `PermissionPaths` using `RawPermission` field in `Role` struct.
- Implemented `BeforeSave` and `AfterFind` GORM hooks for handling `PermissionPaths` serialization.
- Refactored permission calculation logic in `role_perm.go` to work with `PermissionPaths`.
- Updated role creation logic to initialize `PermissionPaths` for `admin` and `guest` roles.
- Removed deprecated `CheckPathLimit` method from `Role` struct.

* fix(model/user/role): update permission settings for admin and role

- Change `RawPermission` field in `role.go` to hide JSON representation
- Update `Permission` field in `user.go` to `0xFFFF` for full access
- Modify `PermissionScopes` in `role.go` to `0xFFFF` for enhanced permissions

* 🔒 feat(role-permissions): Enhance role-based access control

- Introduce `canReadPathByRole` function in `role_perm.go` to verify path access based on user roles
- Modify `CanAccessWithRoles` to include role-based path read check
- Add `RoleNames` and `Permissions` to `UserResp` struct in `auth.go` for enhanced user role and permission details
- Implement role details aggregation in `auth.go` to populate `RoleNames` and `Permissions`
- Update `User` struct in `user.go` to include `RolesDetail` for more detailed role information
- Enhance middleware in `auth.go` to load and verify detailed role information for users
- Move `guest` user initialization logic in `user.go` to improve code organization and avoid repetition

* 🔒 fix(permissions): Add permission checks for archive operations

- Add `MergeRolePermissions` and `HasPermission` checks to validate user access for reading archives
- Ensure users have `PermReadArchives` before proceeding with `GetNearestMeta` in specific archive paths
- Implement permission checks for decompress operations, requiring `PermDecompress` for source paths
- Return `PermissionDenied` errors with 403 status if user lacks necessary permissions

* 🔒 fix(server): Add permission check for offline download

- Add permission merging logic for user roles
- Check user has permission for offline download addition
- Return error response with "permission denied" if check fails

*  feat(role-permission): Implement path-based role permission checks

- Add `CheckPathLimitWithRoles` function to validate access based on `PermPathLimit` permission.
- Integrate `CheckPathLimitWithRoles` in `offline_download` to enforce path-based access control.
- Apply `CheckPathLimitWithRoles` across file system management operations (e.g., creation, movement, deletion).
- Ensure `CheckPathLimitWithRoles` is invoked for batch operations and archive-related actions.
- Update error handling to return `PermissionDenied` if the path validation fails.
- Import `errs` package in `offline_download` for consistent error responses.

*  feat(role-permission): Implement path-based role permission checks

- Add `CheckPathLimitWithRoles` function to validate access based on `PermPathLimit` permission.
- Integrate `CheckPathLimitWithRoles` in `offline_download` to enforce path-based access control.
- Apply `CheckPathLimitWithRoles` across file system management operations (e.g., creation, movement, deletion).
- Ensure `CheckPathLimitWithRoles` is invoked for batch operations and archive-related actions.
- Update error handling to return `PermissionDenied` if the path validation fails.
- Import `errs` package in `offline_download` for consistent error responses.

* ♻️ refactor(access-control): Update access control logic to use role-based checks

- Remove deprecated logic from `CanAccess` function in `check.go`, replacing it with `CanAccessWithRoles` for improved role-based access control.
- Modify calls in `search.go` to use `CanAccessWithRoles` for more precise handling of permissions.
- Update `fsread.go` to utilize `CanAccessWithRoles`, ensuring accurate access validation based on user roles.
- Simplify import statements in `check.go` by removing unused packages to clean up the codebase.

*  feat(fs): Improve visibility logic for hidden files

- Import `server/common` package to handle permissions more robustly
- Update `whetherHide` function to use `MergeRolePermissions` for user-specific path permissions
- Replace direct user checks with `HasPermission` for `PermSeeHides`
- Enhance logic to ensure `nil` user cases are handled explicitly

* Label management

* feat(db/auth/user): Enhance role handling and clean permission paths

- Comment out role modification checks in `server/handles/user.go` to allow flexible role changes.
- Improve permission path handling in `server/handles/auth.go` by normalizing and deduplicating paths.
- Introduce `addedPaths` map in `CurrentUser` to prevent duplicate permissions.

* feat(storage/db): Implement role permissions path prefix update

- Add `UpdateRolePermissionsPathPrefix` function in `role.go` to update role permissions paths.
- Modify `storage.go` to call the new function when the mount path is renamed.
- Introduce path cleaning and prefix matching logic for accurate path updates.
- Ensure roles are updated only if their permission scopes are modified.
- Handle potential errors with informative messages during database operations.

* feat(role-migration): Implement role conversion and introduce NEWGENERAL role

- Add `NEWGENERAL` to the roles enumeration in `user.go`
- Create new file `convert_role.go` for migrating legacy roles to new model
- Implement `ConvertLegacyRoles` function to handle role conversion with permission scopes
- Add `convert_role.go` patch to `all.go` under version `v3.46.0`

* feat(role/auth): Add role retrieval by user ID and update path prefixes

- Add `GetRolesByUserID` function for efficient role retrieval by user ID
- Implement `UpdateUserBasePathPrefix` to update user base paths
- Modify `UpdateRolePermissionsPathPrefix` to return modified role IDs
- Update `auth.go` middleware to use the new role retrieval function
- Refresh role and user caches upon path prefix updates to maintain consistency

---------

Co-authored-by: Leslie-Xy <540049476@qq.com>
2025-07-26 09:51:59 +08:00
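The `MergeRolePermissions`/`HasPermission` pair described above can be pictured as OR-ing permission bits across every role scope whose path covers the target; types and the bit layout here are assumptions:

```go
package sketch

import "strings"

// PermissionScope pairs a path with a permission bitmask, loosely modeling
// the PermissionPaths the commit introduces.
type PermissionScope struct {
	Path       string
	Permission int32
}

// covers does a segment-aware prefix check so "/a" covers "/a/b" but not "/ab".
func covers(scope, path string) bool {
	scope = strings.TrimSuffix(scope, "/")
	return scope == "" || path == scope || strings.HasPrefix(path, scope+"/")
}

// mergeRolePermissions ORs the bits of every scope covering the path.
func mergeRolePermissions(scopes []PermissionScope, path string) int32 {
	var perm int32
	for _, s := range scopes {
		if covers(s.Path, path) {
			perm |= s.Permission
		}
	}
	return perm
}

// hasPermission tests one permission bit out of the merged mask.
func hasPermission(perm, bit int32) bool { return perm&bit != 0 }
```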
Sakana
5e15a360b7 feat(github_releases): concurrently request the GitHub API (#9211) 2025-07-24 15:30:12 +08:00
alist666
2bdc5bef9e Merge pull request #9207 from AlistGo/fix-aliyundirve
fix: update DriveId assignment to use DeviceID from Addition struct
2025-07-17 13:21:32 +08:00
AlistDev
13ea1c1405 fix: restore user-agent header in HTTP requests 2025-07-16 20:39:05 +08:00
AlistDev
fd41186679 fix: update DriveId assignment to use DeviceID from Addition struct 2025-07-14 23:04:40 +08:00
alist666
9da56bab4d Merge pull request #9171 from AlistGo/fix-189pc-login
fix: update documentation links to point to the new domain and fix 189pc getToken failure
2025-06-28 00:20:50 +08:00
alistgo
51eeb22465 fix: dead link 2025-06-27 23:58:52 +08:00
Alone
b1586612ca feat: add ghcr docker image (#8524) 2025-06-27 23:39:23 +08:00
AlistDev
7aeb0ab078 fix: update documentation links to point to the new domain and fix 189pc getToken failure 2025-06-27 16:28:09 +08:00
MadDogOwner
ffa03bfda1 feat(cloudreve_v4): add Cloudreve V4 driver (#8470 closes #8328 #8467)
* feat(cloudreve_v4): add Cloudreve V4 driver implementation

* fix(cloudreve_v4): update request handling to prevent token refresh loop

* feat(onedrive): implement retry logic for upload failures

* feat(cloudreve): implement retry logic for upload failures

* feat(cloudreve_v4): support cloud sorting

* fix(cloudreve_v4): improve token handling in Init method

* feat(cloudreve_v4): support share

* feat(cloudreve): support reference

* feat(cloudreve_v4): support version upload

* fix(cloudreve_v4): add SetBody in upLocal

* fix(cloudreve_v4): update URL structure in Link and FileUrlResp
2025-05-24 13:38:43 +08:00
Andy Hsu
630cf30af5 feat(115_open): implement rate limiting for API requests 2025-05-11 13:39:32 +08:00
Andy Hsu
bc5117fa4f fix(115_open): add delay in MakeDir function to handle rate limiting 2025-05-02 16:53:39 +08:00
yoclo
11e7284824 fix: prevent guest user from updating profile (#8447) 2025-04-29 23:14:16 +08:00
MadDogOwner
b2b91a9281 feat(doubao): add get_download_info API and download_api option (#8428) 2025-04-27 20:00:25 +08:00
MadDogOwner
f541489d7d fix(netease_music): change ListResp size fields from string to int64 (#8417) 2025-04-27 19:59:30 +08:00
bigQY
6d9c554f6f feat: add UseLargeThumbnail for 139 (#8424) 2025-04-27 19:58:45 +08:00
Mmx
e532ab31ef fix: remove auth middleware for authn login (#8407) 2025-04-27 19:58:09 +08:00
Mmx
bf0705ec17 fix: shebang of entrypoint.sh (#8408) 2025-04-27 19:56:34 +08:00
gdm257
17b42b9fa4 fix(mega): use newest file for same filename (#8422 close #8344)
Mega supports duplicate names, but alist does not. In the `List()` method
the driver would return multiple files with the same name, which made alist
use the oldest version of a file for listing/downloading. So it is necessary
to filter out older same-name files within a folder. After this fix, all
CRUD operations work normally.

Refs #8344
2025-04-27 19:56:04 +08:00
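The filtering fix amounts to keeping the newest entry per name; a sketch with a stand-in node type (field names assumed):

```go
package sketch

import "time"

// node stands in for mega's file type; only name and mtime matter here.
type node struct {
	name     string
	modified time.Time
}

// keepNewest keeps only the most recently modified entry per name, so listing
// and downloading no longer pick the oldest version of a duplicated file.
func keepNewest(files []node) []node {
	newest := make(map[string]node, len(files))
	for _, f := range files {
		if cur, ok := newest[f.name]; !ok || f.modified.After(cur.modified) {
			newest[f.name] = f
		}
	}
	out := make([]node, 0, len(newest))
	for _, f := range newest {
		out = append(out, f)
	}
	return out
}
```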
Sam- Pan(潘绍森)
41bdab49aa fix(139): incorrect host (#8368)
* fix: correct new personal cloud path for 139Driver

* Update drivers/139/driver.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix bug

---------

Co-authored-by: panshaosen <19802021493@139.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: j2rong4cn <253551464@qq.com>
2025-04-19 14:29:12 +08:00
Lin Tianchuan
8f89c55aca perf(local): avoid duplicate parsing of VideoThumbPos (#7812)
* feat(local): support percent for video thumbnail

The percentage determines the point in the video (as a percentage of the total duration) at which the thumbnail will be generated.

* feat(local): support both time and percent for video thumbnail

* refactor(local): avoid duplicate parsing of VideoThumbPos
2025-04-19 14:27:13 +08:00
wxnq
b449312da8 fix(docker_release): avoid duplicate occupation in docker image (#8393 close #8388)
* fix(ci): modify the method of adding permissions

* fix(build): modify the method of adding permissions(to keep up with ci)
2025-04-19 14:26:19 +08:00
MadDogOwner
52d4e8ec47 fix(lanzou): remove JavaScript comments from response data (#8386)
* feat(lanzou): add RemoveJSComment function to clean JavaScript comments from HTML

* feat(lanzou): remove comments from share page data in getFilesByShareUrl function

* fix(lanzou): optimize RemoveJSComment function to improve comment removal logic
2025-04-19 14:24:43 +08:00
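A simplified sketch of comment removal that stays out of string literals; the real `RemoveJSComment` handles more edge cases (regex literals, plus the single-line/block comment states a later fix, #9278, adjusts):

```go
package sketch

// removeJSComments strips // and /* */ comments from scraped page data while
// leaving string literals intact.
func removeJSComments(src string) string {
	var out []byte
	inStr := byte(0) // current string quote, 0 if none
	for i := 0; i < len(src); i++ {
		c := src[i]
		if inStr != 0 {
			out = append(out, c)
			if c == '\\' && i+1 < len(src) { // keep escaped chars verbatim
				i++
				out = append(out, src[i])
			} else if c == inStr {
				inStr = 0
			}
			continue
		}
		switch {
		case c == '\'' || c == '"' || c == '`':
			inStr = c
			out = append(out, c)
		case c == '/' && i+1 < len(src) && src[i+1] == '/':
			for i < len(src) && src[i] != '\n' {
				i++ // skip to end of line, keeping the newline itself
			}
			i--
		case c == '/' && i+1 < len(src) && src[i+1] == '*':
			i += 2
			for i+1 < len(src) && !(src[i] == '*' && src[i+1] == '/') {
				i++
			}
			i++ // skip the closing '/'
		default:
			out = append(out, c)
		}
	}
	return string(out)
}
```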
New Future
28e5b5759e feat(azure_blob): implement GetRootId interface in Addition struct (#8389)
fix failed get dir
2025-04-19 14:23:48 +08:00
asdfghjkl
477c43971f feat(doubao_share): support doubao_share link (#8376)
Co-authored-by: anobodys <anobodys@gmail.com>
2025-04-19 14:22:43 +08:00
Yifan Gao
0a9921fa79 fix(aliyundrive_open): resolve file duplication issues and improve path handling (#8358)
* fix(aliyundrive_open): resolve file duplication issues and improve path handling

1. Fix file duplication by implementing a new removeDuplicateFiles method that cleans up duplicate files after operations
2. Change Move operation to use "ignore" for check_name_mode instead of "refuse" to allow moves when destination has same filename
3. Set Copy operation to handle duplicates by removing them after successful copy
4. Improve path handling for all file operations (Move, Rename, Put, MakeDir) by properly maintaining the full path of objects
5. Implement GetRoot interface for proper root object initialization with correct path
6. Add proper path management in List operation to ensure objects have correct paths
7. Fix path handling in error cases and improve logging of failures

* refactor(aliyundrive_open): change error logging to warnings for duplicate file removal

Updated the Move, Rename, and Copy methods to log warnings instead of errors when duplicate file removal fails, as the primary operations have already completed successfully. This improves the clarity of logs without affecting the functionality.

* Update drivers/aliyundrive_open/util.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-19 14:22:12 +08:00
Lee CQ
88abb323cb feat(url-tree): implement the Put interface to support adding links directly to the UrlTree on the web side (#8312)
* feat(url-tree): support PUT

* feat(url-tree): split the path and content when updating the UrlTree #8303

* fix: stdpath.Join call

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Andy Hsu <i@nn.ci>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-12 17:27:56 +08:00
asdfghjkl
f0b1aeaf8d feat(doubao): support upload (#8302 close #8335)
* feat(doubao): support upload

* fix(doubao): fix file list cursor

* fix: handle strconv.Atoi err

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: anobodys <anobodys@gmail.com>
Co-authored-by: Andy Hsu <i@nn.ci>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-12 17:12:40 +08:00
Yifan Gao
c8470b9a2a fix(fs): remove old target object from cache before updating (#8352) 2025-04-12 17:09:46 +08:00
Dgs
d0ee90cd11 fix(thunder): fix login issue (#8342 close #8288) 2025-04-12 17:05:58 +08:00
Dgs
544a7ea022 fix(pikpak&pikpak_share): fix WebPackageName (#8305) 2025-04-12 17:03:58 +08:00
j2rong4cn
4f5cabc725 feat: add h2c for http server (#8294)
* feat: add h2c for http server

* chore(config): add EnableH2c option
2025-04-12 17:02:51 +08:00
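h2c support generally wraps the handler via `golang.org/x/net/http2/h2c`; a sketch gated by an `EnableH2c`-style option (the wiring inside alist may differ):

```go
package sketch

import (
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

// wrapH2c serves HTTP/2 over cleartext TCP for clients that speak h2c,
// which is useful behind a TLS-terminating proxy; disabled, it returns the
// handler unchanged.
func wrapH2c(h http.Handler, enable bool) http.Handler {
	if !enable {
		return h
	}
	return h2c.NewHandler(h, &http2.Server{})
}
```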
j2rong4cn
a2f266277c fix(net): unexpected write (#8291 close #8281) 2025-04-12 17:01:52 +08:00
jerry
a4bfbf8a83 fix(ipfs): fix problems (#8252)
* fix: 🐛 (ipfs): fix the list error caused by an improper path join function

Use more standard path joining to fix paths containing Chinese characters or symbols that previously could not be accessed.

* refactor: naming conventions

* Remove redundant conditional checks

* fix: refactor the code with a withresult method and add a get method to improve performance

* fix: allow the get method to fetch directories

Remove redundant checks.

* fix: allow copy, rename, and move to overwrite

* fix: fix directories being deleted by the move method

* refactor: tidy up the code around returning Path

* fix: fix ipfs paths being inaccessible due to the get method

* fix: fix the get method's incorrect path handling

Fix the get method and remove an unexpectedly added directory.

* fix: fix path join

use path join instead of filepath join to avoid os problem

* fix: rm filepath ref

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2025-04-12 17:01:30 +08:00
j2rong4cn
ddffacf07b perf: optimize IO read/write usage (#8243)
* perf: optimize IO read/write usage

* .

* Update drivers/139/driver.go

Co-authored-by: MadDogOwner <xiaoran@xrgzs.top>

---------

Co-authored-by: MadDogOwner <xiaoran@xrgzs.top>
2025-04-12 16:55:31 +08:00
xiaoQQya
3375c26c41 perf(quark_uc&quark_uc_tv): native proxy multithreading (#8287)
* perf(quark_uc): native proxy multithreading

* perf(quark_uc_tv): native proxy multithreading

* chore(fs): file query result add id
2025-04-03 20:50:29 +08:00
asdfghjkl
ab68faef44 fix(baidu_netdisk): add another video crack api (#8275)
Co-authored-by: anobodys <anobodys@gmail.com>
2025-04-03 20:44:49 +08:00
New Future
2e21df0661 feat(driver): add Azure Blob Storage driver (#8261)
* add azure-blob driver

* fix nested folders copy

* feat(driver): add Azure Blob Storage driver

Implement the Azure Blob Storage driver with support for:
- Initializing the connection with shared key authentication
- Listing directories and files
- Generating temporary SAS URLs for file access
- Creating directories
- Moving and renaming files/folders
- Copying files/folders
- Deleting files/folders
- Uploading files with progress tracking

This driver lets users seamlessly access and manage data in Azure Blob Storage through the AList platform.

* feat(driver): update help doc for Azure Blob

* doc(readme): add new driver

* Update drivers/azure_blob/driver.go

fix(azure): fix name check

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update README.md

doc(readme): fix the link

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix(azure): fix log and link

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 20:43:21 +08:00
MadDogOwner
af18cb138b feat(139): add option ReportRealSize (#8244 close #8141)
* feat(139): handle family upload errors

* feat(139): add option `ReportRealSize`

* Update drivers/139/driver.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 20:41:59 +08:00
j2rong4cn
31c55a2adf fix(archive): unable to preview (#8248)
* fix(archive): unable to preview

* fix bug
2025-04-03 20:41:05 +08:00
MadDogOwner
465dd1703d feat(cloudreve): s3 policy support (#8245)
* feat(cloudreve): s3 policy support

* fix(cloudreve): correct potential off-by-one error in `etags` initialization
2025-04-03 20:40:19 +08:00
j2rong4cn
a6304285b6 fix: revert "refactor(net): pass request header" (#8269)
5be50e77d9
2025-04-03 20:35:52 +08:00
YangXu
affd0cecd1 fix(pikpak&pikpak_share): update algorithms (#8278) 2025-04-03 20:35:14 +08:00
MadDogOwner
37640221c0 fix(doubao): update file size type to int64 (#8289) 2025-04-03 20:34:27 +08:00
Andy Hsu
e4bd223d1c fix(deps): update 115-sdk-go to v0.1.5 2025-04-03 20:29:53 +08:00
jerry
0cde4e73d6 feat(ipfs): better ipfs support (#8225)
* feat: better ipfs support

Fixed MFS CRUD and added IPNS support

* Update driver.go

clean up
2025-03-27 23:25:23 +08:00
Ljcbaby
7b62dcb88c fix(baidu_netdisk): duplicate retry (#8210 redo #7972, link #8180) 2025-03-27 23:22:55 +08:00
never lee
c38dc6df7c fix(115_open): support multipart upload (#8229)
Co-authored-by: neverlee <neverlea@formail.com>
2025-03-27 23:22:08 +08:00
MadDogOwner
5668e4a4ea feat(doubao): add Doubao driver (#8232 closes #8020 #8206)
* feat(doubao): implement List()

* feat(doubao): implement Link()

* feat(doubao): implement MakeDir()

* refactor(doubao): add type Object to store key

* feat(doubao): implement Move()

* feat(doubao): implement Rename()

* feat(doubao): implement Remove()
2025-03-27 23:21:42 +08:00
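
The commit sequence above maps one method per commit onto AList's per-operation driver interfaces. A rough skeleton of that shape, with signatures following the `model.Obj`-based style visible in the diffs at the bottom of this page; treat it as illustrative, not the actual Doubao code:

```go
package doubao

import (
	"context"

	"github.com/alist-org/alist/v3/internal/errs"
	"github.com/alist-org/alist/v3/internal/model"
)

// Doubao is a stand-in for the driver struct.
type Doubao struct {
	// credentials, HTTP client, etc.
}

// Each PR commit fills in one of these operations.
func (d *Doubao) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
	return nil, errs.NotImplement
}

func (d *Doubao) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
	return nil, errs.NotImplement
}

func (d *Doubao) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
	return errs.NotImplement
}

func (d *Doubao) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
	return errs.NotImplement
}

func (d *Doubao) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
	return errs.NotImplement
}

func (d *Doubao) Remove(ctx context.Context, obj model.Obj) error {
	return errs.NotImplement
}
```
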
KirCute
1335f80362 feat(archive): support multipart archives (#8184 close #8015)
* feat(archive): multipart support & sevenzip tool

* feat(archive): rardecode tool

* feat(archive): support decompress multi-selected

* fix(archive): decompress response filter internal

* feat(archive): support multipart zip

* fix: more applicable AcceptedMultipartExtensions interface
2025-03-27 23:20:44 +08:00
KirCute
704d3854df feat(alist_v3): support forward archive requests (#8230)
* feat(alist_v3): support forward archive requests

* fix: encode all inner path
2025-03-27 23:18:34 +08:00
MadDogOwner
44cc71d354 fix(cloudreve): enable SetContentLength for uploading to local policy (#8228 close #8174)
* fix(cloudreve): upload failure to return error msg instead of deletion success

* fix(cloudreve): enable SetContentLength for uploading to local policy

* refactor(cloudreve): move local policy upload logic to utils for better error handling

* refactor(cloudreve): unified upload code style

* refactor(cloudreve): improve user agent handling
2025-03-27 23:18:15 +08:00
KirCute
9a9aee9ac6 feat(alias): support writing to non-ambiguous paths (#8216)
* feat(alias): support writing to non-ambiguous paths

* feat(alias): support extract concurrency

* fix(alias): extract url does not pass query
2025-03-27 23:17:45 +08:00
KirCute
4fcc3a187e fix(traffic): duplicate semaphore release when uploading (#8211 close #8180) 2025-03-27 23:15:47 +08:00
Ljcbaby
10a76c701d fix(db): support postgres trust/peer mode (#8198 close #8066) 2025-03-27 23:15:04 +08:00
KirCute
6e13923225 fix(sftp-server): postgre cannot store control characters (#8188 close #8186) 2025-03-27 23:14:36 +08:00
Andy Hsu
32890da29f fix(115_open): upgrade 115-sdk-go dependency to v0.1.4 2025-03-21 19:06:09 +08:00
Andy Hsu
758554a40f fix(115_open): upgrade 115-sdk-go dependency to v0.1.3 (close #8169) 2025-03-19 21:47:42 +08:00
Andy Hsu
4563aea47e fix(115_open): rename delay to take effect (close #8156) 2025-03-18 22:25:04 +08:00
Andy Hsu
35d6f3b8fc fix(115_open): upgrade sdk (close #8151) 2025-03-18 22:21:50 +08:00
j2rong4cn
b4e6ab12d9 refactor: FilterReadMeScripts (#8154 close #8150)
* refactor: FilterReadMeScripts

* .
2025-03-18 22:02:33 +08:00
Andy Hsu
3499c4db87 feat: 115 open driver (#8139)
* wip: 115 open

* chore(go.mod): update 115-sdk-go dependency version

* feat(115_open): implement directory management and file operations

* chore(go.mod): update 115-sdk-go dependency to v0.1.1 and adjust callback handling in driver

* chore: rename driver
2025-03-17 00:52:09 +08:00
hshpy
d20f41d687 fix: missing handling of RangeReadCloser (#8146) 2025-03-16 22:14:44 +08:00
Andy Hsu
d16ba65f42 fix(lang): initialize configuration in LangCmd before generating language JSON file 2025-03-16 16:37:33 +08:00
hshpy
c82e632ee1 fix: potential XSS vulnerabilities (#7923)
* fix: potential XSS vulnerabilities

* feat: support filter and render for readme.md

* chore: set ReadMeAutoRender to true

* fix attachFileName undefined

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2025-03-15 23:28:40 +08:00
折纸飞机
04f5525f20 fix(s3): incorrectly added slash before the Bucket name (#8083 close #8001) 2025-03-15 00:21:24 +08:00
shniubobo
28b61a93fd feat(webdav): support oc:checksums (#8064 close #7472)
Ref: #7472
2025-03-15 00:21:07 +08:00
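
`oc:checksums` is the ownCloud/Nextcloud WebDAV property carrying values like `SHA1:<hex>`. A minimal sketch of producing such a value; the property plumbing in the PR itself is not shown here:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

// ocChecksum renders a reader's SHA-1 in the "SHA1:<hex>" form used by
// the oc:checksums WebDAV property.
func ocChecksum(r io.Reader) (string, error) {
	h := sha1.New()
	if _, err := io.Copy(h, r); err != nil {
		return "", err
	}
	return "SHA1:" + hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, _ := ocChecksum(strings.NewReader("hello"))
	fmt.Println(sum)
}
```
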
j2rong4cn
0126af4de0 fix(crypt): premature close of MFile (#8132 close #8119)
* fix(crypt): premature close of MFile

* refactor
2025-03-15 00:13:30 +08:00
MadDogOwner
7579d44517 fix(onedrive): set req.ContentLength (#8081)
* fix(onedrive): set req.ContentLength

* fix(onedrive_app): set req.ContentLength

* fix(cloudreve): set req.ContentLength
2025-03-15 00:12:37 +08:00
MadDogOwner
5dfea714d8 fix(cloudreve): use milliseconds timestamp in last_modified (#8133) 2025-03-15 00:12:15 +08:00
Ljcbaby
370a6c15a9 fix(baidu_netdisk): remove duplicate retry (#7972) 2025-03-01 19:00:36 +08:00
Ljcbaby
2570707a06 feat(baidu_netdisk): support dynamic slice size for low-bandwidth upload case (#7965)
* Dynamic slice size

* Add strict test results
2025-03-01 18:46:05 +08:00
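
The idea behind a dynamic slice size: scale the part size with the file so the part count stays bounded, instead of forcing low-bandwidth links through a large fixed chunk. A sketch with made-up thresholds; the PR's actual constants are not shown above:

```go
package main

import "fmt"

// sliceSize picks an upload part size so a file never needs more than
// maxParts parts; small files keep the small baseline slice, which suits
// low-bandwidth links. Both constants are illustrative, not the PR's.
func sliceSize(fileSize int64) int64 {
	const (
		baseSize = 4 << 20 // 4 MiB baseline
		maxParts = 1024    // cap on part count
	)
	size := int64(baseSize)
	for fileSize/size > maxParts {
		size *= 2
	}
	return size
}

func main() {
	fmt.Println(sliceSize(100 << 20)) // small file: 4 MiB parts
	fmt.Println(sliceSize(100 << 30)) // 100 GiB: parts grow to 128 MiB to stay under the cap
}
```
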
j2rong4cn
4145734c18 refactor(net): pass request header (#8031 close #8008)
* refactor(net): pass request header

* feat(proxy): add `Etag` to response header

* refactor
2025-03-01 18:35:34 +08:00
KirCute
646c7bcd21 fix(archive): use another sign for extraction (#7982) 2025-03-01 18:34:33 +08:00
KirCute
cdc41595bc feat(github): support GPG verification (#7996 close #7986)
* feat(github): support GPG verification

* chore
2025-02-24 23:12:23 +08:00
KirCute_ECT
79bef0be9e chore: fix build failed (#8005) 2025-02-16 15:11:48 +08:00
KirCute_ECT
c230f24ebe fix(archive): decode filename when decompressing zips (#7998 close #7988) 2025-02-16 12:25:01 +08:00
KirCute_ECT
30d8c20756 feat(archive): support deprioritize previewing (#7984) 2025-02-16 12:24:10 +08:00
KirCute_ECT
3b71500f23 feat(traffic): support limit task worker count & file stream rate (#7948)
* feat: set task workers num & client stream rate limit

* feat: server stream rate limit

* upgrade xhofe/tache

* .
2025-02-16 12:22:11 +08:00
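
A file-stream rate limit of this kind is typically a token bucket wrapped around the reader. A minimal sketch with `golang.org/x/time/rate`; the PR's own limiter lives in AList/`xhofe/tache` plumbing not shown here:

```go
package main

import (
	"context"
	"io"
	"os"
	"strings"

	"golang.org/x/time/rate"
)

// rateLimitedReader throttles Read calls with a shared token bucket, so
// several streams can be capped by one server-wide limiter.
type rateLimitedReader struct {
	r   io.Reader
	lim *rate.Limiter
	ctx context.Context
}

func (l *rateLimitedReader) Read(p []byte) (int, error) {
	n, err := l.r.Read(p)
	if n > 0 {
		// Block until n bytes' worth of tokens are available.
		if werr := l.lim.WaitN(l.ctx, n); werr != nil {
			return n, werr
		}
	}
	return n, err
}

func main() {
	lim := rate.NewLimiter(rate.Limit(1<<20), 1<<20) // ~1 MiB/s, 1 MiB burst
	src := &rateLimitedReader{r: strings.NewReader("throttled"), lim: lim, ctx: context.Background()}
	io.Copy(os.Stdout, src)
}
```
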
foxxorcat
399336b33c fix(189pc): transfer rename (#7958)
* fix(189pc): transfer rename

* fix: OverwriteUpload

* fix: change search method

* fix

* fix
2025-02-16 12:21:34 +08:00
KirCute_ECT
36b4204623 feat(github): support github proxy (#7979 close #7963) 2025-02-16 12:21:03 +08:00
YangRucheng
f25be154c6 fix(ilanzou): add header X-Forwarded-For to solve IP ban (#7977)
* fix: warning

* feat: ip header

* fix: ip header for fs link
2025-02-16 12:20:28 +08:00
Sakana
ec3fc945a3 fix(feiji): modify the request header (#7902 close #7890) 2025-02-09 18:35:39 +08:00
MadDogOwner
3f9bed3d5f feat(bootstrap): add .url to proxy types (#7928) 2025-02-09 18:33:38 +08:00
Jealous
b9ad18bd0a feat(recursive-move): Advanced conflict policy for preventing unintentional overwriting (#7906) 2025-02-09 18:32:57 +08:00
Jealous
0219c4e15a fix(index): fix the issue where ignored paths are not updated (#7907) 2025-02-09 18:31:43 +08:00
Feng.YJ
d983a4ebcb refactor(cmd): use std runtime package to get go version info (#7964)
* refactor(cmd): use std `runtime` package to get go version info

- Remove the `GoVersion` variable.
- Remove overriding `GoVersion` by ldflags in `build.sh`.
- Get go version, OS and arch from the constants in the std `runtime` package instead of compile time.

* chore(ci): remove `GoVersion` flag from workflows

Remove GoVersion flag from beta_release.yml and build.yml workflows.

> Reduce compile-time dependencies.
2025-02-09 18:30:56 +08:00
Sakana
f795807753 feat(github_releases): support dir size for the show-all-versions mode (#7938)
* refactor

* Change the default RepoStructure

* feat: support using gh-proxy
2025-02-09 18:30:38 +08:00
hshpy
6164e4577b fix: missing args when using alias driver (#7941 close #7932) 2025-02-05 19:22:10 +08:00
Sakana
39bde328ee fix(lenovonas_share): the size of the directory (#7914) 2025-02-01 17:32:58 +08:00
KirCute_ECT
779c293f04 fix(driver): implement canceling and updating progress for putting for some drivers (#7847)
* fix(driver): additionally implement canceling and updating progress for putting for some drivers

* refactor: add driver archive api into template

* fix(123): use built-in MD5 to avoid caching the full file

* .

* fix build failed
2025-02-01 17:29:55 +08:00
abc1763613206
b9f397d29f fix(139): restore the Account handling, partially reverts #7850 (#7900 close #7784) 2025-01-30 11:25:41 +08:00
Jiang Xiang
d53eecc229 fix(febbox): panic due to slice out of range (#7898 close #7889) 2025-01-30 11:24:07 +08:00
Andy Hsu
f88fd83d4a feat(ci): use go-cross/cgo-actions for dev build 2025-01-28 18:57:09 +08:00
Andy Hsu
226c34929a feat(ci): add build info for beta release 2025-01-27 21:32:59 +08:00
j2rong4cn
027edcbe53 refactor(patch): execute all patches in dev version (#7807) 2025-01-27 20:49:24 +08:00
Snowykami
fd51f34efa feat(misskey): add misskey driver (#7864) 2025-01-27 20:47:52 +08:00
Sakana
bdd9774aa7 feat(github_releases): add support for github_releases driver (#7844 close #7842)
* feat(github_releases): add support for GitHub Releases

* feat(github_releases): add directory size and update time, and add request caching

* Feat(github_releases): optionally accept a GitHub token to raise rate limits or access private repositories

* Fix(github_releases): fix errors when a repository is inaccessible or does not exist

* feat(github_releases): support showing all versions; folder sizes are hidden when enabled

* feat(github_releases): handle repositories without subdirectories
2025-01-27 20:28:44 +08:00
Jealous
258b8f520f feat(recursive-move): add overwrite option to prevent unintentional overwriting (#7868 closes #7382,#7719)
* feat(recursive-move): add `overwrite` option to prevent unintentional overwriting

* chore: rearrange code order
2025-01-27 20:25:39 +08:00
Jiang Xiang
99f39410f2 fix(s3): escape CopySource request header when copying files (#7860 close #7858) 2025-01-27 20:23:13 +08:00
Shelton Zhu
267120a8c8 fix(115): fix offline download (#7845 close #7794)
* feat(115): use multi url for list files & change download url api

* fix(115): fix offline download. (close #7794)
2025-01-27 20:20:55 +08:00
KirCute_ECT
5eff8cc7bf feat(upload): support rapid upload on web (#7851) 2025-01-27 20:20:09 +08:00
KirCute_ECT
d5ec998699 feat(task): allow retry canceled (#7852) 2025-01-27 20:18:10 +08:00
LaoShui
23f3178f39 chore(README): formatting spacing in README links (#7879) [skip ci] 2025-01-27 20:13:35 +08:00
MadDogOwner
cafdb4d407 fix(139): correct path handling in groupGetFiles (#7850 closes #7848,#7603)
* fix(139): correct path handling in groupGetFiles

* perf(139): reduce the number of requests in groupGetFiles

* refactor(139): check authorization expiration (#10)

* refactor(139): check authorization expiration

* fix bug

* chore(139): update api version to 7.14.0

---------

Co-authored-by: j2rong4cn <36783515+j2rong4cn@users.noreply.github.com>
2025-01-27 20:11:21 +08:00
Jealous
0d4c63e9ff feat(fs): display the existing filename in error message (#7877) 2025-01-27 20:09:17 +08:00
j2rong4cn
5c5d8378e5 fix(archive): unable to preview (#7843)
* fix(archive): unrecognized zip

* feat(archive): add tree for zip meta

* fix bug

* refactor(archive): meta cache time uses Link Expiration first

* feat(archive): return sort policy in meta (#2)

* refactor

* perf(archive): reduce new network requests

---------

Co-authored-by: KirCute_ECT <951206789@qq.com>
2025-01-27 20:08:56 +08:00
j2rong4cn
2be0c3d1a0 feat(alias): add DownloadConcurrency and DownloadPartSize option (#7829)
* fix(net): goroutine logic bug (AlistGo/alist#7215)

* Fix goroutine logic bug

* Fix bug

---------

Co-authored-by: hpy hs <hshpy.pengyu@gmail.com>

* perf(net): sequential and dynamic concurrency

* fix(net): incorrect error return

* feat(alias):  add `DownloadConcurrency` and `DownloadPartSize` option

* feat(net): add `ConcurrencyLimit`

* perf(net): create `chunk` on demand

* refactor

* refactor

* fix(net): `r.Closers.Add` has no effect

* refactor

---------

Co-authored-by: hpy hs <hshpy.pengyu@gmail.com>
2025-01-27 20:08:39 +08:00
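
`DownloadConcurrency` and `DownloadPartSize` suggest ranged GETs with a bounded number of parts in flight. A simplified sketch using `errgroup` with a concurrency limit (Go 1.21+ for the `min` builtin); the real implementation reassembles parts in order and streams them, while here the bodies are discarded:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"

	"golang.org/x/sync/errgroup"
)

// fetchRanges downloads [0,size) in partSize ranges with at most
// concurrency requests in flight.
func fetchRanges(ctx context.Context, url string, size, partSize int64, concurrency int) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(concurrency)
	for off := int64(0); off < size; off += partSize {
		start, end := off, min(off+partSize-1, size-1)
		g.Go(func() error {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return err
			}
			req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				return err
			}
			defer resp.Body.Close()
			_, err = io.Copy(io.Discard, resp.Body) // real code writes to the right offset
			return err
		})
	}
	return g.Wait()
}

func main() {
	_ = fetchRanges(context.Background(), "https://example.com/big.bin", 64<<20, 8<<20, 4)
}
```
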
foxxorcat
bdcf450203 fix: resolve concurrent read/write issues in WrapObjName (#7865) 2025-01-27 20:06:18 +08:00
Jealous
c2633dd443 fix(workflow): use the dev version of the web for beta releases (#7862)
* fix(workflow): use dev version of the web for beta releases

* chore(config): check version string by prefix
2025-01-23 22:49:35 +08:00
KirCute_ECT
11b6a6012f fix(copy): use Link and Put when the driver does not support copying (#7834) 2025-01-18 23:52:02 +08:00
Jealous
59e02287b2 feat(fs): add overwrite option to prevent unintentional overwriting (#7809) 2025-01-18 23:39:07 +08:00
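
The overwrite guard pattern in plain form: check the destination before moving, and name the conflicting file in the error (see #7877 above, which adds the existing filename to the message). A generic local-filesystem sketch, not the AList fs-layer code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// moveNoClobber refuses to move onto an existing file unless overwrite
// is set, and names the conflicting file in the error.
func moveNoClobber(src, dstDir string, overwrite bool) error {
	dst := filepath.Join(dstDir, filepath.Base(src))
	if !overwrite {
		if _, err := os.Stat(dst); err == nil {
			return fmt.Errorf("file %s already exists", dst)
		}
	}
	return os.Rename(src, dst)
}

func main() {
	if err := moveNoClobber("/tmp/a.txt", "/tmp/dir", false); err != nil {
		fmt.Println(err)
	}
}
```
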
KirCute_ECT
bb40e2e2cd feat(archive): archive management (#7817)
* feat(archive): archive management

* fix(ftp-server): remove duplicate ReadAtSeeker realization

* fix(archive): bad seeking of SeekableStream

* fix(archive): split internal and driver extraction api

* feat(archive): patch

* fix(shutdown): clear decompress upload tasks

* chore

* feat(archive): support .iso format

* chore
2025-01-18 23:28:12 +08:00
j2rong4cn
ab22cf8233 feat: add Reference interface to driver (#7805)
* feat: add `Reference` interface to driver

* feat(123_share): support reference 123pan
2025-01-18 23:26:58 +08:00
MadDogOwner
880cc7abca fix(139): use personal_new by default (#7836) 2025-01-18 23:24:09 +08:00
Jealous
b60da9732f feat(offline-download): allow using offline download tools in any storage (#7716)
* Feat(offline-download): allow using thunder offline download tool in any storage

* Feat(offline-download): allow using 115 offline download tool in any storage

* Feat(offline-download): allow using pikpak offline download tool in any storage

* style(offline-download): unify offline download tool names

* feat(offline-download): show available offline download tools only

* Fix(offline-download): update unmodified tool names.

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2025-01-10 21:24:44 +08:00
KirCute_ECT
e04114d102 feat(github): add github api driver (#7717)
* feat(github): add github api driver

* fix: filter submodule operation

* feat: rename, copy and move, but with bugs

* fix: move and copy returns 422

* fix: change TargetPath in rename msg from parent path to new self path

* fix: add non-commit mutex

* perf(github): use net/http to put blob

* chore: add a help message to `ref` addition
2025-01-10 20:59:58 +08:00
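
"Use net/http to put blob" likely refers to calling GitHub's create-blob endpoint directly. A hedged sketch against the documented REST API (`POST /repos/{owner}/{repo}/git/blobs`); the driver's actual request plumbing is not shown on this page:

```go
package main

import (
	"bytes"
	"context"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
)

// putBlob creates a git blob via the GitHub REST API and reports any
// non-201 response as an error.
func putBlob(ctx context.Context, owner, repo, token string, data []byte) error {
	body, err := json.Marshal(map[string]string{
		"content":  base64.StdEncoding.EncodeToString(data),
		"encoding": "base64",
	})
	if err != nil {
		return err
	}
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/git/blobs", owner, repo)
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Accept", "application/vnd.github+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("create blob: %s", resp.Status)
	}
	return nil
}

func main() {
	_ = putBlob(context.Background(), "owner", "repo", "token", []byte("hello"))
}
```
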
KirCute_ECT
51bcf83511 feat(url-tree): support url tree driver writing (#7779 close #5166)
* feat: support url tree writing

* fix: meta writable

* feat: disable writable via addition
2025-01-10 20:50:56 +08:00
KirCute_ECT
25b4b55ee1 feat(ftp-server): support resumable downloading (#7792) 2025-01-10 20:50:20 +08:00
Jiang Xiang
6812ec9a6d fix(ilanzou): add accept-encoding request header (#7796 close #7759) 2025-01-10 20:49:50 +08:00
Lin Tianchuan
31a7470865 feat(local): support both time and percent for video thumbnail (#7802)
* feat(local): support percent for video thumbnail

The percentage determines the point in the video (as a percentage of the total duration) at which the thumbnail will be generated.

* feat(local): support both time and percent for video thumbnail
2025-01-10 20:48:45 +08:00
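
Time-or-percent thumbnailing reduces to computing the seek offset fed to ffmpeg. A small sketch of the percent arithmetic; the flag layout `-ss <sec> -i <in> -frames:v 1` is standard ffmpeg, but the driver's exact invocation is not shown here:

```go
package main

import (
	"fmt"
	"time"
)

// thumbnailSeek converts a percent-of-duration setting into the seconds
// value passed to ffmpeg's -ss flag.
func thumbnailSeek(duration time.Duration, percent float64) string {
	return fmt.Sprintf("%.3f", duration.Seconds()*percent/100)
}

func main() {
	seek := thumbnailSeek(2*time.Minute, 25) // thumbnail at 25% => 30.000
	fmt.Printf("ffmpeg -ss %s -i input.mp4 -frames:v 1 thumb.jpg\n", seek)
}
```
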
Mmx
687124c81d ci(build_docker): merge build_docker into release_docker workflow (#7755)
* feat(ci): merge build_docker workflow into release_docker

* fix(ci): logics of docker meta
2025-01-01 21:29:59 +08:00
foxxorcat
e4439e66b9 fix(baidu_photo): upload error -6 (#7760 close #7744)
* fix(baidu_photo): upload error -6

* fix(baidu_photo): api add bdstoken
2025-01-01 21:13:34 +08:00
MadDogOwner
7fd4ac7851 fix(139): update familyGetFiles pagination logic (#7748 close #7711) 2024-12-30 22:55:47 +08:00
KirCute_ECT
6745dcc139 feat(task): attach creator to user of the context (#7729) 2024-12-30 22:55:09 +08:00
KirCute_ECT
aa1082a56c feat(sftp-server): do not generate host key until first enabled (#7734) 2024-12-30 22:54:37 +08:00
Jealous
ed149be84b feat(index): add disable index option for storages (#7730) 2024-12-30 22:52:55 +08:00
Sakana
040dc14ee6 fix(lenovonas_share): stoken expire (#7727) 2024-12-30 22:51:39 +08:00
Mmx
4dce53d72b feat(docker release): improve aria2 image, add aio image (#7750)
* build: add argument INSTALL_ARIA2 to dockerfile

* feat: run aria2 in main entrypoint

* feat(ci): environment matrix for docker release

* improve(ci): allow overwriting artifacts in docker release

* fix(ci): permission of alist binary in docker; entrypoint logic

* improve(aria2): move aria2 data to /opt/aria2; fix permission issues

References:

https://github.com/AlistGo/with_aria2/pull/13

Co-authored-by: GoodbyeNJN <cc@fuckwall.cc>

* fix(ci): aio image is not taking effect

* fix(build): tar command in aria2 installation process

(cherry picked from commit 647285408354807bae64df6a20fefb696ff787de)

---------

Co-authored-by: GoodbyeNJN <cc@fuckwall.cc>
2024-12-30 22:51:05 +08:00
j2rong4cn
365fc40dfe fix: limit request methods for static pages (#7745 close #7667) 2024-12-30 22:49:18 +08:00
KirCute_ECT
5994c17b4e feat(patch): upgrade patch module (#7738)
* feat(patch): upgrade patch module

* chore(patch): add docs

* fix(patch): skip and rewrite invalid last launched version

* fix(patch): turn two functions into patches
2024-12-30 22:48:33 +08:00
Jealous
42243b1517 feat(thunder): add offline download tool (#7673)
* feat(thunder): add offline download tool

* fix(thunder): improve error handling and parse file size in status response

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2024-12-25 21:23:58 +08:00
KirCute_ECT
48916cdedf fix(permission): enhance the strictness of permissions (#7705 close #7680)
* fix(permission): enhance the strictness of permissions

* fix: add initial permissions to admin
2024-12-25 21:17:58 +08:00
Feng.YJ
5ecf5e823c fix(webauthn): handle error when removing webauthn credential (#7689) 2024-12-25 21:16:34 +08:00
KirCute_ECT
c218b5701e fix(115): support float QPS (#7677) 2024-12-25 21:16:03 +08:00
KirCute_ECT
77d0c78bfd feat(sftp-server): public key login (#7668) 2024-12-25 21:15:06 +08:00
j2rong4cn
db5c601cfe fix(crypt): add sign to thumbnail (#6611) 2024-12-25 21:13:54 +08:00
KirCute_ECT
221cdf3611 feat(s3): support custom host presign (#7699 close #7696) 2024-12-25 21:13:23 +08:00
KirCute_ECT
40b0e66efe feat(ftp-server): treat moving across file systems as copying (#7704 close #7701)
* feat(ftp-server): treat moving across file systems as copying

* fix: ensure compatibility across different fs on the same driver
2024-12-25 21:12:30 +08:00
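
Treating a cross-filesystem move as a copy is the classic rename-then-fallback pattern. A local-disk sketch of the idea; the FTP server operates on AList's virtual fs rather than `os`, so this is only an analogy:

```go
package main

import (
	"io"
	"log"
	"os"
)

// crossFSMove tries a cheap rename first and degrades to copy + delete
// when source and destination are on different filesystems.
func crossFSMove(src, dst string) error {
	if err := os.Rename(src, dst); err == nil {
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	if _, err = io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	if err = out.Close(); err != nil {
		return err
	}
	return os.Remove(src)
}

func main() {
	if err := crossFSMove("/mnt/a/file.bin", "/mnt/b/file.bin"); err != nil {
		log.Println(err)
	}
}
```
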
KirCute_ECT
b72e85a73a fix(ftp-server): rewrite download in a more appropriate method (#7656) 2024-12-25 21:11:45 +08:00
KirCute_ECT
6aaf5975c6 fix(ftp-server): works improperly when base url is not root (#7693)
* fix(ftp-server): works improperly when base url is not root

* fix: avoid merge conflict
2024-12-25 21:11:36 +08:00
MadDogOwner
bb2aec20e4 fix(139): handle upload file conflicts (#7692) 2024-12-25 21:11:05 +08:00
KirCute_ECT
d7aa1608ac feat(task): add speed monitor (#7655) 2024-12-25 21:09:54 +08:00
j2rong4cn
db99224126 perf: speed up database initialization (#7694)
* perf: optimize slow initialization for non-sqlite3 databases

* refactor
2024-12-25 21:08:22 +08:00
MadDogOwner
b8bd14f99b fix(lanzou): missing parameter (#7678 close #7210) 2024-12-17 22:05:52 +08:00
hshpy
331885ed64 fix(net): close of closed channel (#7580) 2024-12-17 22:04:27 +08:00
Andy Hsu
cf58ab3a78 chore(config): disable FTP and SFTP by default 2024-12-12 21:04:14 +08:00
KirCute_ECT
33ba7f1521 feat: sftp server support (#7643)
* feat: sftp server support

* fix(sftp-server): try fix build failed

* fix: sftp download lack
2024-12-12 20:51:43 +08:00
KirCute_ECT
201e25c17f fix(ftp-server): large transfer leads to client timeout (#7639)
* fix(ftp-server): client timing out while waiting for a large file to upload to the netdisk

* fix(ftp-server): alist v3 driver upload failing and temp files not being deleted
2024-12-12 20:50:00 +08:00
Andy Hsu
ecefa5e0eb ci: fix desktop beta release trigger 2024-12-10 20:21:51 +08:00
KirCute_ECT
650b03aeb1 feat: ftp server support (#7634 close #1898)
* feat: ftp server support

* fix(ftp): incorrect mode for dirs in LIST returns
2024-12-10 20:17:46 +08:00
KirCute_ECT
7341846499 perf(task): merge requests when operating on selected items (#7637) 2024-12-10 19:30:50 +08:00
MadDogOwner
a3908fd9a6 fix(139): update APIs (#7591 close #7603)
* fix(139): update family cloud API

* fix(139): update API of familyGetLink

* feat(139): support group (close #7603)

* docs: add `139 group` to Readme

* feat(139): support multipart upload (close: #7444)

* feat(139): add custom upload part size option

* fix: missing closing quote

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2024-12-09 23:54:21 +08:00
MadDogOwner
2a035302b2 fix(cloudreve): support upload to remote and OneDrive storage (#7632 close #6882)
- Add support for remote and OneDrive storage types
- Implement new upload methods for different storage types
- Update driver to handle various storage policies
- Add error handling and session cleanup for failed uploads
2024-12-09 23:35:44 +08:00
MadDogOwner
016e169c41 feat(139): support multipart upload (close: #7444) (#7630)
* feat(139): support multipart upload (close: #7444)

* feat(139): add custom upload part size option
2024-12-09 23:34:29 +08:00
Joseph Chris
088120df82 feat(sso): add custom extra scope support (#7577) 2024-12-09 23:33:46 +08:00
Shelton Zhu
aa45a82914 fix(115): fix login bug (#7626 close #7614 close #7620) 2024-12-09 23:33:07 +08:00
shingyu
5084d98398 fix(onedrive): fix timeout error (#7551 close #7506) 2024-12-08 17:06:33 +08:00
YangXu
fa15c576f0 fix(pikpak): remove oauth2 method (#7567 close #7545) 2024-12-07 17:03:46 +08:00
foxxorcat
2d3605c684 fix(baidu_photo): cookie login fix download error (#7602) 2024-12-07 17:02:52 +08:00
alist666
492b49d77a Update README.md 2024-12-07 01:00:25 +08:00
372 changed files with 30740 additions and 2921 deletions

.github/FUNDING.yml
View File

@@ -10,4 +10,4 @@ liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: ['https://alist.nn.ci/guide/sponsor.html']
custom: ['https://alistgo.com/guide/sponsor.html']

View File

@@ -16,14 +16,14 @@ body:
您必须勾选以下所有内容否则您的issue可能会被直接关闭。或者您可以去[讨论区](https://github.com/alist-org/alist/discussions)
options:
- label: |
I have read the [documentation](https://alist.nn.ci).
我已经阅读了[文档](https://alist.nn.ci)。
I have read the [documentation](https://alistgo.com).
我已经阅读了[文档](https://alistgo.com)。
- label: |
I'm sure there are no duplicate issues or discussions.
我确定没有重复的issue或讨论。
- label: |
I'm sure it's due to `AList` and not something else(such as [Network](https://alist.nn.ci/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host) ,`Dependencies` or `Operational`).
我确定是`AList`的问题,而不是其他原因(例如[网络](https://alist.nn.ci/zh/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host)`依赖`或`操作`)。
I'm sure it's due to `AList` and not something else(such as [Network](https://alistgo.com/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host) ,`Dependencies` or `Operational`).
我确定是`AList`的问题,而不是其他原因(例如[网络](https://alistgo.com/zh/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host)`依赖`或`操作`)。
- label: |
I'm sure this issue is not fixed in the latest version.
我确定这个问题在最新版本中没有被修复。

View File

@@ -7,7 +7,7 @@ body:
label: Please make sure of the following things
description: You may select more than one, even select all.
options:
- label: I have read the [documentation](https://alist.nn.ci).
- label: I have read the [documentation](https://alistgo.com).
- label: I'm sure there are no duplicate issues or discussions.
- label: I'm sure this feature is not implemented.
- label: I'm sure it's a reasonable and popular requirement.

View File

@@ -87,12 +87,17 @@ jobs:
run: bash build.sh dev web
- name: Build
id: test-action
uses: go-cross/cgo-actions@v1
with:
targets: ${{ matrix.target }}
musl-target-format: $os-$musl-$arch
out-dir: build
x-flags: |
github.com/alist-org/alist/v3/internal/conf.BuiltAt=$built_at
github.com/alist-org/alist/v3/internal/conf.GitAuthor=Xhofe
github.com/alist-org/alist/v3/internal/conf.GitCommit=$git_commit
github.com/alist-org/alist/v3/internal/conf.Version=$tag
github.com/alist-org/alist/v3/internal/conf.WebVersion=dev
- name: Compress
run: |
@@ -111,14 +116,23 @@ jobs:
name: Beta Release Desktop
runs-on: ubuntu-latest
steps:
- uses: peter-evans/create-or-update-comment@v4
- name: Checkout repo
uses: actions/checkout@v4
with:
issue-number: 69
body: |
/release-beta
- triggered by @${{ github.actor }}
- commit sha: ${{ github.sha }}
- view files: https://github.com/alist-org/alist/tree/${{ github.sha }}
reactions: 'rocket'
token: ${{ secrets.MY_TOKEN }}
repository: alist-org/desktop-release
repository: AlistGo/desktop-release
ref: main
persist-credentials: false
fetch-depth: 0
- name: Commit
run: |
git config --local user.email "bot@nn.ci"
git config --local user.name "IlaBot"
git commit --allow-empty -m "Trigger build for ${{ github.sha }}"
- name: Push commit
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.MY_TOKEN }}
branch: main
repository: AlistGo/desktop-release

View File

@@ -15,14 +15,19 @@ jobs:
strategy:
matrix:
platform: [ubuntu-latest]
go-version: [ '1.21' ]
target:
- darwin-amd64
- darwin-arm64
- windows-amd64
- linux-arm64-musl
- linux-amd64-musl
- windows-arm64
- android-arm64
name: Build
runs-on: ${{ matrix.platform }}
env:
GOPROXY: https://proxy.golang.org,direct
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
- name: Checkout
uses: actions/checkout@v4
@@ -30,19 +35,29 @@ jobs:
- uses: benjlevesque/short-sha@v3.0
id: short-sha
- name: Install dependencies
run: |
sudo snap install zig --classic --beta
docker pull crazymax/xgo:latest
go install github.com/crazy-max/xgo@latest
sudo apt install upx
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: '1.22'
- name: Setup web
run: bash build.sh dev web
- name: Build
run: |
bash build.sh dev
uses: go-cross/cgo-actions@v1
with:
targets: ${{ matrix.target }}
musl-target-format: $os-$musl-$arch
out-dir: build
x-flags: |
github.com/alist-org/alist/v3/internal/conf.BuiltAt=$built_at
github.com/alist-org/alist/v3/internal/conf.GitAuthor=Xhofe
github.com/alist-org/alist/v3/internal/conf.GitCommit=$git_commit
github.com/alist-org/alist/v3/internal/conf.Version=$tag
github.com/alist-org/alist/v3/internal/conf.WebVersion=dev
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: alist_${{ env.SHA }}
path: dist
name: alist_${{ env.SHA }}_${{ matrix.target }}
path: build/*

View File

@@ -1,126 +0,0 @@
name: build_docker
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build_docker:
name: Build Docker
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: xhofe/alist
tags: |
type=schedule
type=ref,event=branch
type=ref,event=tag
type=ref,event=pr
type=raw,value=beta,enable={{is_default_branch}}
- name: Docker meta with ffmpeg
id: meta-ffmpeg
uses: docker/metadata-action@v5
with:
images: xhofe/alist
flavor: |
suffix=-ffmpeg
tags: |
type=schedule
type=ref,event=branch
type=ref,event=tag
type=ref,event=pr
type=raw,value=beta,enable={{is_default_branch}}
- uses: actions/setup-go@v5
with:
go-version: 'stable'
- name: Cache Musl
id: cache-musl
uses: actions/cache@v4
with:
path: build/musl-libs
key: docker-musl-libs-v2
- name: Download Musl Library
if: steps.cache-musl.outputs.cache-hit != 'true'
run: bash build.sh prepare docker-multiplatform
- name: Build go binary
run: bash build.sh dev docker-multiplatform
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
if: github.event_name == 'push'
uses: docker/login-action@v3
with:
username: xhofe
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.ci
push: ${{ github.event_name == 'push' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64
- name: Build and push with ffmpeg
id: docker_build_ffmpeg
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.ci
push: ${{ github.event_name == 'push' }}
tags: ${{ steps.meta-ffmpeg.outputs.tags }}
labels: ${{ steps.meta-ffmpeg.outputs.labels }}
build-args: INSTALL_FFMPEG=true
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64
build_docker_with_aria2:
needs: build_docker
name: Build docker with aria2
runs-on: ubuntu-latest
if: github.event_name == 'push'
steps:
- name: Checkout repo
uses: actions/checkout@v4
with:
repository: alist-org/with_aria2
ref: main
persist-credentials: false
fetch-depth: 0
- name: Commit
run: |
git config --local user.email "bot@nn.ci"
git config --local user.name "IlaBot"
git commit --allow-empty -m "Trigger build for ${{ github.sha }}"
- name: Push commit
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.MY_TOKEN }}
branch: main
repository: alist-org/with_aria2

View File

@@ -72,7 +72,7 @@ jobs:
- name: Checkout repo
uses: actions/checkout@v4
with:
repository: alist-org/desktop-release
repository: AlistGo/desktop-release
ref: main
persist-credentials: false
fetch-depth: 0
@@ -89,4 +89,4 @@ jobs:
with:
github_token: ${{ secrets.MY_TOKEN }}
branch: main
repository: alist-org/desktop-release
repository: AlistGo/desktop-release

View File

@@ -4,10 +4,35 @@ on:
push:
tags:
- 'v*'
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
REGISTRY: 'xhofe/alist'
REGISTRY_USERNAME: 'xhofe'
REGISTRY_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
GITHUB_CR_REPO: ghcr.io/${{ github.repository }}
ARTIFACT_NAME: 'binaries_docker_release'
RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64'
IMAGE_PUSH: ${{ github.event_name == 'push' }}
IMAGE_IS_PROD: ${{ github.ref_type == 'tag' }}
IMAGE_TAGS_BETA: |
type=schedule
type=ref,event=branch
type=ref,event=tag
type=ref,event=pr
type=raw,value=beta,enable={{is_default_branch}}
jobs:
release_docker:
name: Release Docker
build_binary:
name: Build Binaries for Docker Release
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -28,14 +53,53 @@ jobs:
if: steps.cache-musl.outputs.cache-hit != 'true'
run: bash build.sh prepare docker-multiplatform
- name: Build go binary
- name: Build go binary (beta)
if: env.IMAGE_IS_PROD != 'true'
run: bash build.sh beta docker-multiplatform
- name: Build go binary (release)
if: env.IMAGE_IS_PROD == 'true'
run: bash build.sh release docker-multiplatform
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
images: xhofe/alist
name: ${{ env.ARTIFACT_NAME }}
overwrite: true
path: |
build/
!build/*.tgz
!build/musl-libs/**
release_docker:
needs: build_binary
name: Release Docker image
runs-on: ubuntu-latest
strategy:
matrix:
image: ["latest", "ffmpeg", "aria2", "aio"]
include:
- image: "latest"
build_arg: ""
tag_favor: ""
- image: "ffmpeg"
build_arg: INSTALL_FFMPEG=true
tag_favor: "suffix=-ffmpeg,onlatest=true"
- image: "aria2"
build_arg: INSTALL_ARIA2=true
tag_favor: "suffix=-aria2,onlatest=true"
- image: "aio"
build_arg: |
INSTALL_FFMPEG=true
INSTALL_ARIA2=true
tag_favor: "suffix=-aio,onlatest=true"
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/download-artifact@v4
with:
name: ${{ env.ARTIFACT_NAME }}
path: 'build/'
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
@@ -44,10 +108,32 @@ jobs:
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
if: env.IMAGE_PUSH == 'true'
uses: docker/login-action@v3
with:
username: xhofe
password: ${{ secrets.DOCKERHUB_TOKEN }}
logout: true
username: ${{ env.REGISTRY_USERNAME }}
password: ${{ env.REGISTRY_PASSWORD }}
- name: Login to GHCR
uses: docker/login-action@v3
with:
logout: true
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.REGISTRY }}
${{ env.GITHUB_CR_REPO }}
tags: ${{ env.IMAGE_IS_PROD == 'true' && '' || env.IMAGE_TAGS_BETA }}
flavor: |
${{ env.IMAGE_IS_PROD == 'true' && 'latest=true' || '' }}
${{ matrix.tag_favor }}
- name: Build and push
id: docker_build
@@ -55,54 +141,8 @@ jobs:
with:
context: .
file: Dockerfile.ci
push: true
push: ${{ env.IMAGE_PUSH == 'true' }}
build-args: ${{ matrix.build_arg }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64
- name: Docker meta with ffmpeg
id: meta-ffmpeg
uses: docker/metadata-action@v5
with:
images: xhofe/alist
flavor: |
latest=true
suffix=-ffmpeg,onlatest=true
- name: Build and push with ffmpeg
id: docker_build_ffmpeg
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.ci
push: true
tags: ${{ steps.meta-ffmpeg.outputs.tags }}
labels: ${{ steps.meta-ffmpeg.outputs.labels }}
build-args: INSTALL_FFMPEG=true
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64
release_docker_with_aria2:
needs: release_docker
name: Release docker with aria2
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v4
with:
repository: alist-org/with_aria2
ref: main
persist-credentials: false
fetch-depth: 0
- name: Add tag
run: |
git config --local user.email "bot@nn.ci"
git config --local user.name "IlaBot"
git tag -a ${{ github.ref_name }} -m "release ${{ github.ref_name }}"
- name: Push tags
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.MY_TOKEN }}
branch: main
repository: alist-org/with_aria2
platforms: ${{ env.RELEASE_PLATFORMS }}

View File

@@ -10,6 +10,7 @@ RUN bash build.sh release docker
FROM alpine:edge
ARG INSTALL_FFMPEG=false
ARG INSTALL_ARIA2=false
LABEL MAINTAINER="i@nn.ci"
WORKDIR /opt/alist/
@@ -18,13 +19,24 @@ RUN apk update && \
apk upgrade --no-cache && \
apk add --no-cache bash ca-certificates su-exec tzdata; \
[ "$INSTALL_FFMPEG" = "true" ] && apk add --no-cache ffmpeg; \
[ "$INSTALL_ARIA2" = "true" ] && apk add --no-cache curl aria2 && \
mkdir -p /opt/aria2/.aria2 && \
wget https://github.com/P3TERX/aria2.conf/archive/refs/heads/master.tar.gz -O /tmp/aria-conf.tar.gz && \
tar -zxvf /tmp/aria-conf.tar.gz -C /opt/aria2/.aria2 --strip-components=1 && rm -f /tmp/aria-conf.tar.gz && \
sed -i 's|rpc-secret|#rpc-secret|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/script.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/script.conf && \
touch /opt/aria2/.aria2/aria2.session && \
/opt/aria2/.aria2/tracker.sh ; \
rm -rf /var/cache/apk/*
COPY --from=builder /app/bin/alist ./
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh && /entrypoint.sh version
COPY --chmod=755 --from=builder /app/bin/alist ./
COPY --chmod=755 entrypoint.sh /entrypoint.sh
RUN /entrypoint.sh version
ENV PUID=0 PGID=0 UMASK=022
ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/alist/data/
EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ]

View File

@@ -1,7 +1,8 @@
FROM alpine:edge
FROM alpine:3.20.7
ARG TARGETPLATFORM
ARG INSTALL_FFMPEG=false
ARG INSTALL_ARIA2=false
LABEL MAINTAINER="i@nn.ci"
WORKDIR /opt/alist/
@@ -10,13 +11,24 @@ RUN apk update && \
apk upgrade --no-cache && \
apk add --no-cache bash ca-certificates su-exec tzdata; \
[ "$INSTALL_FFMPEG" = "true" ] && apk add --no-cache ffmpeg; \
[ "$INSTALL_ARIA2" = "true" ] && apk add --no-cache curl aria2 && \
mkdir -p /opt/aria2/.aria2 && \
wget https://github.com/P3TERX/aria2.conf/archive/refs/heads/master.tar.gz -O /tmp/aria-conf.tar.gz && \
tar -zxvf /tmp/aria-conf.tar.gz -C /opt/aria2/.aria2 --strip-components=1 && rm -f /tmp/aria-conf.tar.gz && \
sed -i 's|rpc-secret|#rpc-secret|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root/.aria2|/opt/aria2/.aria2|g' /opt/aria2/.aria2/script.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/aria2.conf && \
sed -i 's|/root|/opt/aria2|g' /opt/aria2/.aria2/script.conf && \
touch /opt/aria2/.aria2/aria2.session && \
/opt/aria2/.aria2/tracker.sh ; \
rm -rf /var/cache/apk/*
COPY /build/${TARGETPLATFORM}/alist ./
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh && /entrypoint.sh version
COPY --chmod=755 /build/${TARGETPLATFORM}/alist ./
COPY --chmod=755 entrypoint.sh /entrypoint.sh
RUN /entrypoint.sh version
ENV PUID=0 PGID=0 UMASK=022
ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/alist/data/
EXPOSE 5244 5245
CMD [ "/entrypoint.sh" ]
CMD [ "/entrypoint.sh" ]

View File

@@ -1,5 +1,5 @@
<div align="center">
<a href="https://alist.nn.ci"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<a href="https://alistgo.com"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<p><em>🗂A file list program that supports multiple storages, powered by Gin and Solidjs.</em></p>
<div>
<a href="https://goreportcard.com/report/github.com/alist-org/alist/v3">
@@ -31,7 +31,7 @@
<a href="https://hub.docker.com/r/xhofe/alist">
<img src="https://img.shields.io/docker/pulls/xhofe/alist?color=%2348BB78&logo=docker&label=pulls" alt="Downloads" />
</a>
<a href="https://alist.nn.ci/guide/sponsor.html">
<a href="https://alistgo.com/guide/sponsor.html">
<img src="https://img.shields.io/badge/%24-sponsor-F87171.svg" alt="sponsor" />
</a>
</div>
@@ -39,7 +39,7 @@
---
English | [中文](./README_cn.md)| [日本語](./README_ja.md) | [Contributing](./CONTRIBUTING.md) | [CODE_OF_CONDUCT](./CODE_OF_CONDUCT.md)
English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing](./CONTRIBUTING.md) | [CODE_OF_CONDUCT](./CODE_OF_CONDUCT.md)
## Features
@@ -57,8 +57,10 @@ English | [中文](./README_cn.md)| [日本語](./README_ja.md) | [Contributing]
- [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage)
- [x] WebDav(Support OneDrive/SharePoint without API)
- [x] Teambition([China](https://www.teambition.com/ ),[International](https://us.teambition.com/ ))
- [x] [MediaFire](https://www.mediafire.com)
- [x] [Mediatrack](https://www.mediatrack.cn/)
- [x] [139yun](https://yun.139.com/) (Personal, Family)
- [x] [ProtonDrive](https://proton.me/drive)
- [x] [139yun](https://yun.139.com/) (Personal, Family, Group)
- [x] [YandexDisk](https://disk.yandex.com/)
- [x] [BaiduNetdisk](http://pan.baidu.com/)
- [x] [Terabox](https://www.terabox.com/main)
@@ -77,6 +79,7 @@ English | [中文](./README_cn.md)| [日本語](./README_ja.md) | [Contributing]
- [x] [Dropbox](https://www.dropbox.com/)
- [x] [FeijiPan](https://www.feijipan.com/)
- [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] Easy to deploy and out-of-the-box
- [x] File preview (PDF, markdown, code, plain text, ...)
- [x] Image preview in gallery mode
@@ -87,7 +90,7 @@ English | [中文](./README_cn.md)| [日本語](./README_ja.md) | [Contributing]
- [x] Dark mode
- [x] I18n
- [x] Protected routes (password protection and authentication)
- [x] WebDav (see https://alist.nn.ci/guide/webdav.html for details)
- [x] WebDav (see https://alistgo.com/guide/webdav.html for details)
- [x] [Docker Deploy](https://hub.docker.com/r/xhofe/alist)
- [x] Cloudflare Workers proxy
- [x] File/Folder package download
@@ -98,7 +101,11 @@ English | [中文](./README_cn.md)| [日本語](./README_ja.md) | [Contributing]
## Document
<https://alist.nn.ci/>
<https://alistgo.com/>
## API Documentation (via Apifox):
<https://alist-public.apifox.cn/>
## Demo
@@ -111,13 +118,11 @@ Please go to our [discussion forum](https://github.com/alist-org/alist/discussio
## Sponsor
AList is an open-source software, if you happen to like this project and want me to keep going, please consider sponsoring me or providing a single donation! Thanks for all the love and support:
https://alist.nn.ci/guide/sponsor.html
https://alistgo.com/guide/sponsor.html
### Special sponsors
- [VidHub](https://apps.apple.com/app/apple-store/id1659622164?pt=118612019&ct=alist&mt=8) - An elegant cloud video player within the Apple ecosystem. Support for iPhone, iPad, Mac, and Apple TV.
- [亚洲云](https://www.asiayun.com/aff/QQCOOQKZ) - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商 (sponsored Chinese API server)
- [找资源](http://zhaoziyuan2.cc/) - 阿里云盘资源搜索引擎
## Contributors
@@ -138,4 +143,4 @@ The `AList` is open-source software licensed under the AGPL-3.0 license.
---
> [@Blog](https://nn.ci/) · [@GitHub](https://github.com/alist-org) · [@TelegramGroup](https://t.me/alist_chat) · [@Discord](https://discord.gg/F4ymsH4xv2)
> [@GitHub](https://github.com/alist-org) · [@TelegramGroup](https://t.me/alist_chat) · [@Discord](https://discord.gg/F4ymsH4xv2)

View File

@@ -1,5 +1,5 @@
<div align="center">
<a href="https://alist.nn.ci"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<a href="https://alistgo.com"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<p><em>🗂一个支持多存储的文件列表程序,使用 Gin 和 Solidjs。</em></p>
<div>
<a href="https://goreportcard.com/report/github.com/alist-org/alist/v3">
@@ -31,7 +31,7 @@
<a href="https://hub.docker.com/r/xhofe/alist">
<img src="https://img.shields.io/docker/pulls/xhofe/alist?color=%2348BB78&logo=docker&label=pulls" alt="Downloads" />
</a>
<a href="https://alist.nn.ci/zh/guide/sponsor.html">
<a href="https://alistgo.com/zh/guide/sponsor.html">
<img src="https://img.shields.io/badge/%24-sponsor-F87171.svg" alt="sponsor" />
</a>
</div>
@@ -57,8 +57,10 @@
- [x] [又拍云对象存储](https://www.upyun.com/products/file-storage)
- [x] WebDav(支持无API的OneDrive/SharePoint)
- [x] Teambition[中国](https://www.teambition.com/ )[国际](https://us.teambition.com/ )
- [x] [MediaFire](https://www.mediafire.com)
- [x] [分秒帧](https://www.mediatrack.cn/)
- [x] [和彩云](https://yun.139.com/) (个人云, 家庭云)
- [x] [ProtonDrive](https://proton.me/drive)
- [x] [和彩云](https://yun.139.com/) (个人云, 家庭云,共享群组)
- [x] [Yandex.Disk](https://disk.yandex.com/)
- [x] [百度网盘](http://pan.baidu.com/)
- [x] [UC网盘](https://drive.uc.cn)
@@ -86,7 +88,7 @@
- [x] 黑暗模式
- [x] 国际化
- [x] 受保护的路由(密码保护和身份验证)
- [x] WebDav (具体见 https://alist.nn.ci/zh/guide/webdav.html)
- [x] WebDav (具体见 https://alistgo.com/zh/guide/webdav.html)
- [x] [Docker 部署](https://hub.docker.com/r/xhofe/alist)
- [x] Cloudflare workers 中转
- [x] 文件/文件夹打包下载
@@ -97,7 +99,11 @@
## 文档
<https://alist.nn.ci/zh/>
<https://alistgo.com/zh/>
## API 文档(通过 Apifox 提供)
<https://alist-public.apifox.cn/>
## Demo
@@ -109,13 +115,11 @@
## 赞助
AList 是一个开源软件如果你碰巧喜欢这个项目并希望我继续下去请考虑赞助我或提供一个单一的捐款感谢所有的爱和支持https://alist.nn.ci/zh/guide/sponsor.html
AList 是一个开源软件如果你碰巧喜欢这个项目并希望我继续下去请考虑赞助我或提供一个单一的捐款感谢所有的爱和支持https://alistgo.com/zh/guide/sponsor.html
### 特别赞助
- [VidHub](https://apps.apple.com/app/apple-store/id1659622164?pt=118612019&ct=alist&mt=8) - 苹果生态下优雅的网盘视频播放器iPhoneiPadMacApple TV全平台支持。
- [亚洲云](https://www.asiayun.com/aff/QQCOOQKZ) - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商 (国内API服务器赞助)
- [找资源](http://zhaoziyuan2.cc/) - 阿里云盘资源搜索引擎
## 贡献者

View File

@@ -1,5 +1,5 @@
<div align="center">
<a href="https://alist.nn.ci"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<a href="https://alistgo.com"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<p><em>🗂Gin と Solidjs による、複数のストレージをサポートするファイルリストプログラム。</em></p>
<div>
<a href="https://goreportcard.com/report/github.com/alist-org/alist/v3">
@@ -31,7 +31,7 @@
<a href="https://hub.docker.com/r/xhofe/alist">
<img src="https://img.shields.io/docker/pulls/xhofe/alist?color=%2348BB78&logo=docker&label=pulls" alt="Downloads" />
</a>
<a href="https://alist.nn.ci/guide/sponsor.html">
<a href="https://alistgo.com/guide/sponsor.html">
<img src="https://img.shields.io/badge/%24-sponsor-F87171.svg" alt="sponsor" />
</a>
</div>
@@ -57,8 +57,10 @@
- [x] [UPYUN Storage Service](https://www.upyun.com/products/file-storage)
- [x] WebDav(Support OneDrive/SharePoint without API)
- [x] Teambition([China](https://www.teambition.com/ ),[International](https://us.teambition.com/ ))
- [x] [MediaFire](https://www.mediafire.com)
- [x] [Mediatrack](https://www.mediatrack.cn/)
- [x] [139yun](https://yun.139.com/) (Personal, Family)
- [x] [ProtonDrive](https://proton.me/drive)
- [x] [139yun](https://yun.139.com/) (Personal, Family, Group)
- [x] [YandexDisk](https://disk.yandex.com/)
- [x] [BaiduNetdisk](http://pan.baidu.com/)
- [x] [Terabox](https://www.terabox.com/main)
@@ -87,7 +89,7 @@
- [x] ダークモード
- [x] 国際化
- [x] 保護されたルート (パスワード保護と認証)
- [x] WebDav (詳細は https://alist.nn.ci/guide/webdav.html を参照)
- [x] WebDav (詳細は https://alistgo.com/guide/webdav.html を参照)
- [x] [Docker デプロイ](https://hub.docker.com/r/xhofe/alist)
- [x] Cloudflare ワーカープロキシ
- [x] ファイル/フォルダパッケージのダウンロード
@@ -98,7 +100,11 @@
## ドキュメント
<https://alist.nn.ci/>
<https://alistgo.com/>
## APIドキュメントApifox 提供)
<https://alist-public.apifox.cn/>
## デモ
@@ -111,13 +117,11 @@
## スポンサー
AList はオープンソースのソフトウェアです。もしあなたがこのプロジェクトを気に入ってくださり、続けて欲しいと思ってくださるなら、ぜひスポンサーになってくださるか、1口でも寄付をしてくださるようご検討くださいすべての愛とサポートに感謝します:
https://alist.nn.ci/guide/sponsor.html
https://alistgo.com/guide/sponsor.html
### スペシャルスポンサー
- [VidHub](https://apps.apple.com/app/apple-store/id1659622164?pt=118612019&ct=alist&mt=8) - An elegant cloud video player within the Apple ecosystem. Support for iPhone, iPad, Mac, and Apple TV.
- [亚洲云](https://www.asiayun.com/aff/QQCOOQKZ) - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商 (sponsored Chinese API server)
- [找资源](http://zhaoziyuan2.cc/) - 阿里云盘资源搜索引擎
## コントリビューター

View File

@@ -1,12 +1,14 @@
appName="alist"
builtAt="$(date +'%F %T %z')"
goVersion=$(go version | sed 's/go version //')
gitAuthor="Xhofe <i@nn.ci>"
gitCommit=$(git log --pretty=format:"%h" -1)
if [ "$1" = "dev" ]; then
version="dev"
webVersion="dev"
elif [ "$1" = "beta" ]; then
version="beta"
webVersion="dev"
else
git tag -d beta
version=$(git describe --abbrev=0 --tags)
@@ -19,7 +21,6 @@ echo "frontend version: $webVersion"
ldflags="\
-w -s \
-X 'github.com/alist-org/alist/v3/internal/conf.BuiltAt=$builtAt' \
-X 'github.com/alist-org/alist/v3/internal/conf.GoVersion=$goVersion' \
-X 'github.com/alist-org/alist/v3/internal/conf.GitAuthor=$gitAuthor' \
-X 'github.com/alist-org/alist/v3/internal/conf.GitCommit=$gitCommit' \
-X 'github.com/alist-org/alist/v3/internal/conf.Version=$version' \
@@ -92,7 +93,7 @@ BuildDocker() {
PrepareBuildDockerMusl() {
mkdir -p build/musl-libs
BASE="https://musl.cc/"
BASE="https://github.com/go-cross/musl-toolchain-archive/releases/latest/download/"
FILES=(x86_64-linux-musl-cross aarch64-linux-musl-cross i486-linux-musl-cross s390x-linux-musl-cross armv6-linux-musleabihf-cross armv7l-linux-musleabihf-cross riscv64-linux-musl-cross powerpc64le-linux-musl-cross)
for i in "${FILES[@]}"; do
url="${BASE}${i}.tgz"
@@ -244,7 +245,7 @@ BuildReleaseFreeBSD() {
cgo_cc="clang --target=${CGO_ARGS[$i]} --sysroot=/opt/freebsd/${os_arch}"
echo building for freebsd-${os_arch}
sudo mkdir -p "/opt/freebsd/${os_arch}"
wget -q https://download.freebsd.org/releases/${os_arch}/14.1-RELEASE/base.txz
wget -q https://download.freebsd.org/releases/${os_arch}/14.3-RELEASE/base.txz
sudo tar -xf ./base.txz -C /opt/freebsd/${os_arch}
rm base.txz
export GOOS=freebsd
@@ -301,8 +302,12 @@ if [ "$1" = "dev" ]; then
else
BuildDev
fi
elif [ "$1" = "release" ]; then
FetchWebRelease
elif [ "$1" = "release" -o "$1" = "beta" ]; then
if [ "$1" = "beta" ]; then
FetchWebDev
else
FetchWebRelease
fi
if [ "$2" = "docker" ]; then
BuildDocker
elif [ "$2" = "docker-multiplatform" ]; then

View File

@@ -1,6 +1,7 @@
package cmd
import (
"github.com/alist-org/alist/v3/internal/bootstrap/patch/v3_46_0"
"os"
"path/filepath"
"strconv"
@@ -16,8 +17,16 @@ func Init() {
bootstrap.InitConfig()
bootstrap.Log()
bootstrap.InitDB()
if v3_46_0.IsLegacyRoleDetected() {
utils.Log.Warnf("Detected legacy role format, executing ConvertLegacyRoles patch early...")
v3_46_0.ConvertLegacyRoles()
}
data.InitData()
bootstrap.InitStreamLimit()
bootstrap.InitIndex()
bootstrap.InitUpgradePatch()
}
func Release() {

cmd/kill.go (new file)
View File

@@ -0,0 +1,54 @@
package cmd
import (
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"os"
)
// KillCmd represents the kill command
var KillCmd = &cobra.Command{
Use: "kill",
Short: "Force kill alist server process by daemon/pid file",
Run: func(cmd *cobra.Command, args []string) {
kill()
},
}
func kill() {
initDaemon()
if pid == -1 {
log.Info("Seems not have been started. Try use `alist start` to start server.")
return
}
process, err := os.FindProcess(pid)
if err != nil {
log.Errorf("failed to find process by pid: %d, reason: %v", pid, process)
return
}
err = process.Kill()
if err != nil {
log.Errorf("failed to kill process %d: %v", pid, err)
} else {
log.Info("killed process: ", pid)
}
err = os.Remove(pidFile)
if err != nil {
log.Errorf("failed to remove pid file")
}
pid = -1
}
func init() {
RootCmd.AddCommand(KillCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// stopCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// stopCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}

View File

@@ -12,6 +12,7 @@ import (
"strings"
_ "github.com/alist-org/alist/v3/drivers"
"github.com/alist-org/alist/v3/internal/bootstrap"
"github.com/alist-org/alist/v3/internal/bootstrap/data"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/op"
@@ -137,6 +138,7 @@ var LangCmd = &cobra.Command{
Use: "lang",
Short: "Generate language json file",
Run: func(cmd *cobra.Command, args []string) {
bootstrap.InitConfig()
err := os.MkdirAll("lang", 0777)
if err != nil {
utils.Log.Fatalf("failed create folder: %s", err.Error())

View File

@@ -6,6 +6,7 @@ import (
"github.com/alist-org/alist/v3/cmd/flags"
_ "github.com/alist-org/alist/v3/drivers"
_ "github.com/alist-org/alist/v3/internal/archive"
_ "github.com/alist-org/alist/v3/internal/offline_download"
"github.com/spf13/cobra"
)
@@ -15,7 +16,7 @@ var RootCmd = &cobra.Command{
Short: "A file list program that supports multiple storage.",
Long: `A file list program that supports multiple storage,
built with love by Xhofe and friends in Go/Solid.js.
Complete documentation is available at https://alist.nn.ci/`,
Complete documentation is available at https://alistgo.com/`,
}
func Execute() {

View File

@@ -13,14 +13,19 @@ import (
"syscall"
"time"
ftpserver "github.com/KirCute/ftpserverlib-pasvportmap"
"github.com/KirCute/sftpd-alist"
"github.com/alist-org/alist/v3/cmd/flags"
"github.com/alist-org/alist/v3/internal/bootstrap"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server"
"github.com/gin-gonic/gin"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"golang.org/x/net/http2"
"golang.org/x/net/http2/h2c"
)
// ServerCmd represents the server command
@@ -44,11 +49,15 @@ the address is defined in config file`,
r := gin.New()
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
server.Init(r)
var httpHandler http.Handler = r
if conf.Conf.Scheme.EnableH2c {
httpHandler = h2c.NewHandler(r, &http2.Server{})
}
var httpSrv, httpsSrv, unixSrv *http.Server
if conf.Conf.Scheme.HttpPort != -1 {
httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort)
utils.Log.Infof("start HTTP server @ %s", httpBase)
httpSrv = &http.Server{Addr: httpBase, Handler: r}
httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler}
go func() {
err := httpSrv.ListenAndServe()
if err != nil && !errors.Is(err, http.ErrServerClosed) {
@@ -69,7 +78,7 @@ the address is defined in config file`,
}
if conf.Conf.Scheme.UnixFile != "" {
utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile)
unixSrv = &http.Server{Handler: r}
unixSrv = &http.Server{Handler: httpHandler}
go func() {
listener, err := net.Listen("unix", conf.Conf.Scheme.UnixFile)
if err != nil {
@@ -112,6 +121,42 @@ the address is defined in config file`,
}
}()
}
var ftpDriver *server.FtpMainDriver
var ftpServer *ftpserver.FtpServer
if conf.Conf.FTP.Listen != "" && conf.Conf.FTP.Enable {
var err error
ftpDriver, err = server.NewMainDriver()
if err != nil {
utils.Log.Fatalf("failed to start ftp driver: %s", err.Error())
} else {
utils.Log.Infof("start ftp server on %s", conf.Conf.FTP.Listen)
go func() {
ftpServer = ftpserver.NewFtpServer(ftpDriver)
err = ftpServer.ListenAndServe()
if err != nil {
utils.Log.Fatalf("problem ftp server listening: %s", err.Error())
}
}()
}
}
var sftpDriver *server.SftpDriver
var sftpServer *sftpd.SftpServer
if conf.Conf.SFTP.Listen != "" && conf.Conf.SFTP.Enable {
var err error
sftpDriver, err = server.NewSftpDriver()
if err != nil {
utils.Log.Fatalf("failed to start sftp driver: %s", err.Error())
} else {
utils.Log.Infof("start sftp server on %s", conf.Conf.SFTP.Listen)
go func() {
sftpServer = sftpd.NewSftpServer(sftpDriver)
err = sftpServer.RunServer()
if err != nil {
utils.Log.Fatalf("problem sftp server listening: %s", err.Error())
}
}()
}
}
// Wait for interrupt signal to gracefully shutdown the server with
// a timeout of 1 second.
quit := make(chan os.Signal, 1)
@@ -121,6 +166,7 @@ the address is defined in config file`,
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit
utils.Log.Println("Shutdown server...")
fs.ArchiveContentUploadTaskManager.RemoveAll()
Release()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
@@ -152,6 +198,25 @@ the address is defined in config file`,
}
}()
}
if conf.Conf.FTP.Listen != "" && conf.Conf.FTP.Enable && ftpServer != nil && ftpDriver != nil {
wg.Add(1)
go func() {
defer wg.Done()
ftpDriver.Stop()
if err := ftpServer.Stop(); err != nil {
utils.Log.Fatal("FTP server shutdown err: ", err)
}
}()
}
if conf.Conf.SFTP.Listen != "" && conf.Conf.SFTP.Enable && sftpServer != nil && sftpDriver != nil {
wg.Add(1)
go func() {
defer wg.Done()
if err := sftpServer.Close(); err != nil {
utils.Log.Fatal("SFTP server shutdown err: ", err)
}
}()
}
wg.Wait()
utils.Log.Println("Server exit")
},
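
The `EnableH2c` branch above wraps the Gin engine so cleartext HTTP/2 is accepted on the plain HTTP and unix listeners. A standalone sketch of the same wiring, with a hypothetical port and handler rather than the AList server code:

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "proto: %s\n", r.Proto) // HTTP/2.0 for h2c-capable clients
	})
	// h2c.NewHandler upgrades prior-knowledge HTTP/2 cleartext connections;
	// plain HTTP/1.1 clients keep working through the same handler.
	handler := h2c.NewHandler(mux, &http2.Server{})
	if err := http.ListenAndServe(":8080", handler); err != nil {
		panic(err)
	}
}
```
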

View File

@@ -1,10 +1,10 @@
/*
Copyright © 2022 NAME HERE <EMAIL ADDRESS>
*/
//go:build !windows
package cmd
import (
"os"
"syscall"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@@ -30,11 +30,11 @@ func stop() {
log.Errorf("failed to find process by pid: %d, reason: %v", pid, process)
return
}
err = process.Kill()
err = process.Signal(syscall.SIGTERM)
if err != nil {
log.Errorf("failed to kill process %d: %v", pid, err)
log.Errorf("failed to terminate process %d: %v", pid, err)
} else {
log.Info("killed process: ", pid)
log.Info("terminated process: ", pid)
}
err = os.Remove(pidFile)
if err != nil {

cmd/stop_windows.go (new file)
View File

@@ -0,0 +1,34 @@
//go:build windows
package cmd
import (
"github.com/spf13/cobra"
)
// StopCmd represents the stop command
var StopCmd = &cobra.Command{
Use: "stop",
Short: "Same as the kill command",
Run: func(cmd *cobra.Command, args []string) {
stop()
},
}
func stop() {
kill()
}
func init() {
RootCmd.AddCommand(StopCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// stopCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// stopCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}

View File

@@ -6,6 +6,7 @@ package cmd
import (
"fmt"
"os"
"runtime"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/spf13/cobra"
@@ -16,14 +17,15 @@ var VersionCmd = &cobra.Command{
Use: "version",
Short: "Show current version of AList",
Run: func(cmd *cobra.Command, args []string) {
goVersion := fmt.Sprintf("%s %s/%s", runtime.Version(), runtime.GOOS, runtime.GOARCH)
fmt.Printf(`Built At: %s
Go Version: %s
Author: %s
Commit ID: %s
Version: %s
WebVersion: %s
`,
conf.BuiltAt, conf.GoVersion, conf.GitAuthor, conf.GitCommit, conf.Version, conf.WebVersion)
`, conf.BuiltAt, goVersion, conf.GitAuthor, conf.GitCommit, conf.Version, conf.WebVersion)
os.Exit(0)
},
}

View File

@@ -215,12 +215,12 @@ func (d *Pan115) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
var uploadResult *UploadResult
// instant upload failed; do a real upload
if stream.GetSize() <= 10*utils.MB { // files of 10MB or less use the plain upload path
if uploadResult, err = d.UploadByOSS(&fastInfo.UploadOSSParams, stream, dirID); err != nil {
if uploadResult, err = d.UploadByOSS(ctx, &fastInfo.UploadOSSParams, stream, dirID, up); err != nil {
return nil, err
}
} else {
// multipart upload
if uploadResult, err = d.UploadByMultipart(&fastInfo.UploadOSSParams, stream.GetSize(), stream, dirID); err != nil {
if uploadResult, err = d.UploadByMultipart(ctx, &fastInfo.UploadOSSParams, stream.GetSize(), stream, dirID, up); err != nil {
return nil, err
}
}
@@ -241,7 +241,7 @@ func (d *Pan115) OfflineList(ctx context.Context) ([]*driver115.OfflineTask, err
}
func (d *Pan115) OfflineDownload(ctx context.Context, uris []string, dstDir model.Obj) ([]string, error) {
return d.client.AddOfflineTaskURIs(uris, dstDir.GetID())
return d.client.AddOfflineTaskURIs(uris, dstDir.GetID(), driver115.WithAppVer(appVer))
}
func (d *Pan115) DeleteOfflineTasks(ctx context.Context, hashes []string, deleteFiles bool) error {


@@ -10,7 +10,7 @@ type Addition struct {
QRCodeToken string `json:"qrcode_token" type:"text" help:"one of QR code token and cookie required"`
QRCodeSource string `json:"qrcode_source" type:"select" options:"web,android,ios,tv,alipaymini,wechatmini,qandroid" default:"linux" help:"select the QR code device, default linux"`
PageSize int64 `json:"page_size" type:"number" default:"1000" help:"list api per page size of 115 driver"`
LimitRate float64 `json:"limit_rate" type:"number" default:"2" help:"limit all api request rate ([limit]r/1s)"`
LimitRate float64 `json:"limit_rate" type:"float" default:"2" help:"limit all api request rate ([limit]r/1s)"`
driver.RootID
}
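The option's type changes from number to float so fractional rates validate; the value feeds golang.org/x/time/rate (see the 115 Open driver later in this diff), where LimitRate is interpreted as events per second. A hedged sketch of the semantics, assuming a burst of one as in the drivers below:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// limit_rate = 2 means roughly two requests per second, burst 1
	limiter := rate.NewLimiter(rate.Limit(2), 1)
	start := time.Now()
	for i := 0; i < 5; i++ {
		_ = limiter.Wait(context.Background()) // blocks to enforce the rate
		fmt.Printf("req %d at %v\n", i, time.Since(start).Round(10*time.Millisecond))
	}
}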


@@ -2,6 +2,7 @@ package _115
import (
"bytes"
"context"
"crypto/md5"
"crypto/tls"
"encoding/hex"
@@ -13,22 +14,23 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
cipher "github.com/SheltonZhu/115driver/pkg/crypto/ec115"
crypto "github.com/SheltonZhu/115driver/pkg/crypto/m115"
driver115 "github.com/SheltonZhu/115driver/pkg/driver"
crypto "github.com/gaoyb7/115drive-webdav/115"
"github.com/orzogc/fake115uploader/cipher"
"github.com/pkg/errors"
)
//var UserAgent = driver115.UA115Browser
// var UserAgent = driver115.UA115Browser
func (d *Pan115) login() error {
var err error
opts := []driver115.Option{
@@ -46,7 +48,7 @@ func (d *Pan115) login() error {
if cr, err = d.client.QRCodeLoginWithApp(s, driver115.LoginApp(d.QRCodeSource)); err != nil {
return errors.Wrap(err, "failed to login by qrcode")
}
d.Cookie = fmt.Sprintf("UID=%s;CID=%s;SEID=%s", cr.UID, cr.CID, cr.SEID)
d.Cookie = fmt.Sprintf("UID=%s;CID=%s;SEID=%s;KID=%s", cr.UID, cr.CID, cr.SEID, cr.KID)
d.QRCodeToken = ""
} else if d.Cookie != "" {
if err = cr.FromCookie(d.Cookie); err != nil {
@@ -64,7 +66,7 @@ func (d *Pan115) getFiles(fileId string) ([]FileObj, error) {
if d.PageSize <= 0 {
d.PageSize = driver115.FileListLimit
}
files, err := d.client.ListWithLimit(fileId, d.PageSize)
files, err := d.client.ListWithLimit(fileId, d.PageSize, driver115.WithMultiUrls())
if err != nil {
return nil, err
}
@@ -109,7 +111,7 @@ func (d *Pan115) getUA() string {
func (d *Pan115) DownloadWithUA(pickCode, ua string) (*driver115.DownloadInfo, error) {
key := crypto.GenerateKey()
result := driver115.DownloadResp{}
params, err := utils.Json.Marshal(map[string]string{"pickcode": pickCode})
params, err := utils.Json.Marshal(map[string]string{"pick_code": pickCode})
if err != nil {
return nil, err
}
@@ -117,7 +119,7 @@ func (d *Pan115) DownloadWithUA(pickCode, ua string) (*driver115.DownloadInfo, e
data := crypto.Encode(params, key)
bodyReader := strings.NewReader(url.Values{"data": []string{data}}.Encode())
reqUrl := fmt.Sprintf("%s?t=%s", driver115.ApiDownloadGetUrl, driver115.Now().String())
reqUrl := fmt.Sprintf("%s?t=%s", driver115.AndroidApiDownloadGetUrl, driver115.Now().String())
req, _ := http.NewRequest(http.MethodPost, reqUrl, bodyReader)
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Cookie", d.Cookie)
@@ -141,24 +143,23 @@ func (d *Pan115) DownloadWithUA(pickCode, ua string) (*driver115.DownloadInfo, e
return nil, err
}
bytes, err := crypto.Decode(string(result.EncodedData), key)
b, err := crypto.Decode(string(result.EncodedData), key)
if err != nil {
return nil, err
}
downloadInfo := driver115.DownloadData{}
if err := utils.Json.Unmarshal(bytes, &downloadInfo); err != nil {
downloadInfo := struct {
Url string `json:"url"`
}{}
if err := utils.Json.Unmarshal(b, &downloadInfo); err != nil {
return nil, err
}
for _, info := range downloadInfo {
if info.FileSize < 0 {
return nil, driver115.ErrDownloadEmpty
}
info.Header = resp.Request.Header
return info, nil
}
return nil, driver115.ErrUnexpected
info := &driver115.DownloadInfo{}
info.PickCode = pickCode
info.Header = resp.Request.Header
info.Url.Url = downloadInfo.Url
return info, nil
}
func (c *Pan115) GenerateToken(fileID, preID, timeStamp, fileSize, signKey, signVal string) string {
@@ -273,7 +274,7 @@ func UploadDigestRange(stream model.FileStreamer, rangeSpec string) (result stri
}
// UploadByOSS uses the aliyun OSS SDK to upload
func (c *Pan115) UploadByOSS(params *driver115.UploadOSSParams, r io.Reader, dirID string) (*UploadResult, error) {
func (c *Pan115) UploadByOSS(ctx context.Context, params *driver115.UploadOSSParams, s model.FileStreamer, dirID string, up driver.UpdateProgress) (*UploadResult, error) {
ossToken, err := c.client.GetOSSToken()
if err != nil {
return nil, err
@@ -288,6 +289,10 @@ func (c *Pan115) UploadByOSS(params *driver115.UploadOSSParams, r io.Reader, dir
}
var bodyBytes []byte
r := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
})
if err = bucket.PutObject(params.Object, r, append(
driver115.OssOption(params, ossToken),
oss.CallbackResult(&bodyBytes),
@@ -303,7 +308,8 @@ func (c *Pan115) UploadByOSS(params *driver115.UploadOSSParams, r io.Reader, dir
}
// UploadByMultipart uploads in multipart chunks
func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize int64, stream model.FileStreamer, dirID string, opts ...driver115.UploadMultipartOption) (*UploadResult, error) {
func (d *Pan115) UploadByMultipart(ctx context.Context, params *driver115.UploadOSSParams, fileSize int64, s model.FileStreamer,
dirID string, up driver.UpdateProgress, opts ...driver115.UploadMultipartOption) (*UploadResult, error) {
var (
chunks []oss.FileChunk
parts []oss.UploadPart
@@ -315,7 +321,7 @@ func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize i
err error
)
tmpF, err := stream.CacheFullInTempFile()
tmpF, err := s.CacheFullInTempFile()
if err != nil {
return nil, err
}
@@ -374,6 +380,7 @@ func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize i
quit <- struct{}{}
}()
completedNum := atomic.Int32{}
// consumers
for i := 0; i < options.ThreadsNum; i++ {
go func(threadId int) {
@@ -386,24 +393,28 @@ func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize i
var part oss.UploadPart // on error, keep retrying; 3 attempts in total
for retry := 0; retry < 3; retry++ {
select {
case <-ctx.Done():
break
case <-ticker.C:
if ossToken, err = d.client.GetOSSToken(); err != nil { // re-fetch the ossToken when the ticker fires
errCh <- errors.Wrap(err, "error refreshing ossToken")
}
default:
}
buf := make([]byte, chunk.Size)
if _, err = tmpF.ReadAt(buf, chunk.Offset); err != nil && !errors.Is(err, io.EOF) {
continue
}
if part, err = bucket.UploadPart(imur, bytes.NewBuffer(buf), chunk.Size, chunk.Number, driver115.OssOption(params, ossToken)...); err == nil {
if part, err = bucket.UploadPart(imur, driver.NewLimitedUploadStream(ctx, bytes.NewReader(buf)),
chunk.Size, chunk.Number, driver115.OssOption(params, ossToken)...); err == nil {
break
}
}
if err != nil {
errCh <- errors.Wrap(err, fmt.Sprintf("上传 %s 的第%d个分片时出现错误%v", stream.GetName(), chunk.Number, err))
errCh <- errors.Wrap(err, fmt.Sprintf("上传 %s 的第%d个分片时出现错误%v", s.GetName(), chunk.Number, err))
} else {
num := completedNum.Add(1)
up(float64(num) * 100.0 / float64(len(chunks)))
}
UploadedPartsCh <- part
}
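Both upload paths above now thread an UpdateProgress callback through a wrapping reader. A standalone sketch of what a wrapper like driver.ReaderUpdatingProgress does, assuming the callback takes a 0-100 percentage:

package main

import (
	"fmt"
	"io"
	"strings"
)

// progressReader wraps an io.Reader and reports cumulative progress
// through a callback as bytes flow past.
type progressReader struct {
	r     io.Reader
	read  int64
	total int64
	up    func(percent float64)
}

func (p *progressReader) Read(b []byte) (int, error) {
	n, err := p.r.Read(b)
	p.read += int64(n)
	if p.total > 0 {
		p.up(float64(p.read) * 100 / float64(p.total))
	}
	return n, err
}

func main() {
	src := strings.NewReader("0123456789")
	pr := &progressReader{r: src, total: 10, up: func(pct float64) { fmt.Printf("%.0f%%\n", pct) }}
	_, _ = io.Copy(io.Discard, pr) // drains the reader, printing progress
}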

drivers/115_open/driver.go (new file, 335 lines)

@@ -0,0 +1,335 @@
package _115_open
import (
"context"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/cmd/flags"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
sdk "github.com/xhofe/115-sdk-go"
"golang.org/x/time/rate"
)
type Open115 struct {
model.Storage
Addition
client *sdk.Client
limiter *rate.Limiter
}
func (d *Open115) Config() driver.Config {
return config
}
func (d *Open115) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Open115) Init(ctx context.Context) error {
d.client = sdk.New(sdk.WithRefreshToken(d.Addition.RefreshToken),
sdk.WithAccessToken(d.Addition.AccessToken),
sdk.WithOnRefreshToken(func(s1, s2 string) {
d.Addition.AccessToken = s1
d.Addition.RefreshToken = s2
op.MustSaveDriverStorage(d)
}))
if flags.Debug || flags.Dev {
d.client.SetDebug(true)
}
_, err := d.client.UserInfo(ctx)
if err != nil {
return err
}
if d.Addition.LimitRate > 0 {
d.limiter = rate.NewLimiter(rate.Limit(d.Addition.LimitRate), 1)
}
return nil
}
func (d *Open115) WaitLimit(ctx context.Context) error {
if d.limiter != nil {
return d.limiter.Wait(ctx)
}
return nil
}
func (d *Open115) Drop(ctx context.Context) error {
return nil
}
func (d *Open115) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var res []model.Obj
pageSize := int64(200)
offset := int64(0)
for {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
resp, err := d.client.GetFiles(ctx, &sdk.GetFilesReq{
CID: dir.GetID(),
Limit: pageSize,
Offset: offset,
ASC: d.Addition.OrderDirection == "asc",
O: d.Addition.OrderBy,
// Cur: 1,
ShowDir: true,
})
if err != nil {
return nil, err
}
res = append(res, utils.MustSliceConvert(resp.Data, func(src sdk.GetFilesResp_File) model.Obj {
obj := Obj(src)
return &obj
})...)
if len(res) >= int(resp.Count) {
break
}
offset += pageSize
}
return res, nil
}
func (d *Open115) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
var ua string
if args.Header != nil {
ua = args.Header.Get("User-Agent")
}
if ua == "" {
ua = base.UserAgent
}
obj, ok := file.(*Obj)
if !ok {
return nil, fmt.Errorf("can't convert obj")
}
pc := obj.Pc
resp, err := d.client.DownURL(ctx, pc, ua)
if err != nil {
return nil, err
}
u, ok := resp[obj.GetID()]
if !ok {
return nil, fmt.Errorf("can't get link")
}
return &model.Link{
URL: u.URL.URL,
Header: http.Header{
"User-Agent": []string{ua},
},
}, nil
}
func (d *Open115) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
resp, err := d.client.Mkdir(ctx, parentDir.GetID(), dirName)
if err != nil {
return nil, err
}
return &Obj{
Fid: resp.FileID,
Pid: parentDir.GetID(),
Fn: dirName,
Fc: "0",
Upt: time.Now().Unix(),
Uet: time.Now().Unix(),
UpPt: time.Now().Unix(),
}, nil
}
func (d *Open115) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
_, err := d.client.Move(ctx, &sdk.MoveReq{
FileIDs: srcObj.GetID(),
ToCid: dstDir.GetID(),
})
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Open115) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
_, err := d.client.UpdateFile(ctx, &sdk.UpdateFileReq{
FileID: srcObj.GetID(),
FileNma: newName,
})
if err != nil {
return nil, err
}
obj, ok := srcObj.(*Obj)
if ok {
obj.Fn = newName
}
return srcObj, nil
}
func (d *Open115) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
_, err := d.client.Copy(ctx, &sdk.CopyReq{
PID: dstDir.GetID(),
FileID: srcObj.GetID(),
NoDupli: "1",
})
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Open115) Remove(ctx context.Context, obj model.Obj) error {
if err := d.WaitLimit(ctx); err != nil {
return err
}
_obj, ok := obj.(*Obj)
if !ok {
return fmt.Errorf("can't convert obj")
}
_, err := d.client.DelFile(ctx, &sdk.DelFileReq{
FileIDs: _obj.GetID(),
ParentID: _obj.Pid,
})
if err != nil {
return err
}
return nil
}
func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
if err := d.WaitLimit(ctx); err != nil {
return err
}
tempF, err := file.CacheFullInTempFile()
if err != nil {
return err
}
// calculate the SHA1 of the full file
sha1, err := utils.HashReader(utils.SHA1, tempF)
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
// SHA1 of the first 128KB
sha1128k, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, 128*1024))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
// 1. Init
resp, err := d.client.UploadInit(ctx, &sdk.UploadInitReq{
FileName: file.GetName(),
FileSize: file.GetSize(),
Target: dstDir.GetID(),
FileID: strings.ToUpper(sha1),
PreID: strings.ToUpper(sha1128k),
})
if err != nil {
return err
}
if resp.Status == 2 {
return nil
}
// 2. two-way verification
if utils.SliceContains([]int{6, 7, 8}, resp.Status) {
signCheck := strings.Split(resp.SignCheck, "-") // "sign_check": "2392148-2392298" means the SHA1 of bytes 2392148 through 2392298 (both inclusive)
start, err := strconv.ParseInt(signCheck[0], 10, 64)
if err != nil {
return err
}
end, err := strconv.ParseInt(signCheck[1], 10, 64)
if err != nil {
return err
}
_, err = tempF.Seek(start, io.SeekStart)
if err != nil {
return err
}
signVal, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, end-start+1))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
resp, err = d.client.UploadInit(ctx, &sdk.UploadInitReq{
FileName: file.GetName(),
FileSize: file.GetSize(),
Target: dstDir.GetID(),
FileID: strings.ToUpper(sha1),
PreID: strings.ToUpper(sha1128k),
SignKey: resp.SignKey,
SignVal: strings.ToUpper(signVal),
})
if err != nil {
return err
}
if resp.Status == 2 {
return nil
}
}
// 3. get upload token
tokenResp, err := d.client.UploadGetToken(ctx)
if err != nil {
return err
}
// 4. upload
err = d.multpartUpload(ctx, tempF, file, up, tokenResp, resp)
if err != nil {
return err
}
return nil
}
// func (d *Open115) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// // TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// // TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// // TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// // TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// // a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// // return errs.NotImplement to use an internal archive tool
// return nil, errs.NotImplement
// }
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Open115)(nil)
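The two-way verification step in Put above hashes an inclusive byte range named by sign_check. A standalone sketch of that range hash, using only the standard library:

package sketch

import (
	"crypto/sha1"
	"fmt"
	"io"
	"os"
	"strconv"
	"strings"
)

// rangeSHA1 mirrors the sign_check step: given "start-end" (inclusive byte
// offsets), hash exactly that slice of the file and return the upper-case
// hex digest expected as sign_val.
func rangeSHA1(f *os.File, signCheck string) (string, error) {
	bounds := strings.SplitN(signCheck, "-", 2)
	if len(bounds) != 2 {
		return "", fmt.Errorf("malformed sign_check: %q", signCheck)
	}
	start, err := strconv.ParseInt(bounds[0], 10, 64)
	if err != nil {
		return "", err
	}
	end, err := strconv.ParseInt(bounds[1], 10, 64)
	if err != nil {
		return "", err
	}
	if _, err := f.Seek(start, io.SeekStart); err != nil {
		return "", err
	}
	h := sha1.New()
	if _, err := io.CopyN(h, f, end-start+1); err != nil { // +1: the range is inclusive
		return "", err
	}
	return strings.ToUpper(fmt.Sprintf("%x", h.Sum(nil))), nil
}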

drivers/115_open/meta.go (new file, 37 lines)

@@ -0,0 +1,37 @@
package _115_open
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootID
// define other
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"file_name,file_size,user_utime,file_type"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc"`
LimitRate float64 `json:"limit_rate" type:"float" default:"1" help:"limit all api request rate ([limit]r/1s)"`
AccessToken string
}
var config = driver.Config{
Name: "115 Open",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Open115{}
})
}

drivers/115_open/types.go (new file, 59 lines)

@@ -0,0 +1,59 @@
package _115_open
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
sdk "github.com/xhofe/115-sdk-go"
)
type Obj sdk.GetFilesResp_File
// Thumb implements model.Thumb.
func (o *Obj) Thumb() string {
return o.Thumbnail
}
// CreateTime implements model.Obj.
func (o *Obj) CreateTime() time.Time {
return time.Unix(o.UpPt, 0)
}
// GetHash implements model.Obj.
func (o *Obj) GetHash() utils.HashInfo {
return utils.NewHashInfo(utils.SHA1, o.Sha1)
}
// GetID implements model.Obj.
func (o *Obj) GetID() string {
return o.Fid
}
// GetName implements model.Obj.
func (o *Obj) GetName() string {
return o.Fn
}
// GetPath implements model.Obj.
func (o *Obj) GetPath() string {
return ""
}
// GetSize implements model.Obj.
func (o *Obj) GetSize() int64 {
return o.FS
}
// IsDir implements model.Obj.
func (o *Obj) IsDir() bool {
return o.Fc == "0"
}
// ModTime implements model.Obj.
func (o *Obj) ModTime() time.Time {
return time.Unix(o.Upt, 0)
}
var _ model.Obj = (*Obj)(nil)
var _ model.Thumb = (*Obj)(nil)

drivers/115_open/upload.go (new file, 140 lines)

@@ -0,0 +1,140 @@
package _115_open
import (
"context"
"encoding/base64"
"io"
"time"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
"github.com/avast/retry-go"
sdk "github.com/xhofe/115-sdk-go"
)
func calPartSize(fileSize int64) int64 {
var partSize int64 = 20 * utils.MB
if fileSize > partSize {
if fileSize > 1*utils.TB { // file size over 1TB
partSize = 5 * utils.GB // file part size 5GB
} else if fileSize > 768*utils.GB { // over 768GB
partSize = 109951163 // ≈ 104.8576MB, split 1TB into 10,000 parts
} else if fileSize > 512*utils.GB { // over 512GB
partSize = 82463373 // ≈ 78.6432MB
} else if fileSize > 384*utils.GB { // over 384GB
partSize = 54975582 // ≈ 52.4288MB
} else if fileSize > 256*utils.GB { // over 256GB
partSize = 41231687 // ≈ 39.3216MB
} else if fileSize > 128*utils.GB { // over 128GB
partSize = 27487791 // ≈ 26.2144MB
}
}
return partSize
}
func (d *Open115) singleUpload(ctx context.Context, tempF model.File, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil {
return err
}
bucket, err := ossClient.Bucket(initResp.Bucket)
if err != nil {
return err
}
err = bucket.PutObject(initResp.Object, tempF,
oss.Callback(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.Callback))),
oss.CallbackVar(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.CallbackVar))),
)
return err
}
// type CallbackResult struct {
// State bool `json:"state"`
// Code int `json:"code"`
// Message string `json:"message"`
// Data struct {
// PickCode string `json:"pick_code"`
// FileName string `json:"file_name"`
// FileSize int64 `json:"file_size"`
// FileID string `json:"file_id"`
// ThumbURL string `json:"thumb_url"`
// Sha1 string `json:"sha1"`
// Aid int `json:"aid"`
// Cid string `json:"cid"`
// } `json:"data"`
// }
func (d *Open115) multpartUpload(ctx context.Context, tempF model.File, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
fileSize := stream.GetSize()
chunkSize := calPartSize(fileSize)
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil {
return err
}
bucket, err := ossClient.Bucket(initResp.Bucket)
if err != nil {
return err
}
imur, err := bucket.InitiateMultipartUpload(initResp.Object, oss.Sequential())
if err != nil {
return err
}
partNum := (stream.GetSize() + chunkSize - 1) / chunkSize
parts := make([]oss.UploadPart, partNum)
offset := int64(0)
for i := int64(1); i <= partNum; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
partSize := chunkSize
if i == partNum {
partSize = fileSize - (i-1)*chunkSize
}
rd := utils.NewMultiReadable(io.LimitReader(stream, partSize))
err = retry.Do(func() error {
_ = rd.Reset()
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i))
if err != nil {
return err
}
parts[i-1] = part
return nil
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),
retry.Delay(time.Second))
if err != nil {
return err
}
if i == partNum {
offset = fileSize
} else {
offset += partSize
}
up(float64(offset) / float64(fileSize))
}
// callbackRespBytes := make([]byte, 1024)
_, err = bucket.CompleteMultipartUpload(
imur,
parts,
oss.Callback(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.Callback))),
oss.CallbackVar(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.CallbackVar))),
// oss.CallbackResult(&callbackRespBytes),
)
if err != nil {
return err
}
return nil
}
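A worked example of the calPartSize tiers above, under the assumption reflected in its comments that an OSS multipart upload is capped at 10,000 parts:

package main

import "fmt"

func main() {
	const gb int64 = 1 << 30
	fileSize := 300 * gb        // lands in the ">256GB" tier above
	partSize := int64(41231687) // ≈ 39.32MB
	parts := (fileSize + partSize - 1) / partSize
	fmt.Println(parts) // ≈ 7813 parts, safely below the 10,000-part cap
}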

drivers/115_open/util.go (new file, 3 lines)

@@ -0,0 +1,3 @@
package _115_open
// do others that are not defined in the Driver interface


@@ -10,7 +10,7 @@ type Addition struct {
QRCodeToken string `json:"qrcode_token" type:"text" help:"one of QR code token and cookie required"`
QRCodeSource string `json:"qrcode_source" type:"select" options:"web,android,ios,tv,alipaymini,wechatmini,qandroid" default:"linux" help:"select the QR code device, default linux"`
PageSize int64 `json:"page_size" type:"number" default:"1000" help:"list api per page size of 115 driver"`
LimitRate float64 `json:"limit_rate" type:"number" default:"2" help:"limit all api request rate (1r/[limit_rate]s)"`
LimitRate float64 `json:"limit_rate" type:"float" default:"2" help:"limit all api request rate (1r/[limit_rate]s)"`
ShareCode string `json:"share_code" type:"text" required:"true" help:"share code of 115 share link"`
ReceiveCode string `json:"receive_code" type:"text" required:"true" help:"receive code of 115 share link"`
driver.RootID
@@ -18,7 +18,7 @@ type Addition struct {
var config = driver.Config{
Name: "115 Share",
DefaultRoot: "",
DefaultRoot: "0",
// OnlyProxy: true,
// OnlyLocal: true,
CheckStatus: false,


@@ -96,7 +96,7 @@ func (d *Pan115Share) login() error {
if cr, err = d.client.QRCodeLoginWithApp(s, driver115.LoginApp(d.QRCodeSource)); err != nil {
return errors.Wrap(err, "failed to login by qrcode")
}
d.Cookie = fmt.Sprintf("UID=%s;CID=%s;SEID=%s", cr.UID, cr.CID, cr.SEID)
d.Cookie = fmt.Sprintf("UID=%s;CID=%s;SEID=%s;KID=%s", cr.UID, cr.CID, cr.SEID, cr.KID)
d.QRCodeToken = ""
} else if d.Cookie != "" {
if err = cr.FromCookie(d.Cookie); err != nil {


@@ -2,21 +2,22 @@ package _123
import (
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"fmt"
"golang.org/x/time/rate"
"io"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"time"
"golang.org/x/time/rate"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
@@ -29,7 +30,8 @@ import (
type Pan123 struct {
model.Storage
Addition
apiRateLimit sync.Map
apiRateLimit sync.Map
safeBoxUnlocked sync.Map
}
func (d *Pan123) Config() driver.Config {
@@ -41,21 +43,38 @@ func (d *Pan123) GetAddition() driver.Additional {
}
func (d *Pan123) Init(ctx context.Context) error {
_, err := d.request(UserInfo, http.MethodGet, nil, nil)
_, err := d.Request(UserInfo, http.MethodGet, nil, nil)
return err
}
func (d *Pan123) Drop(ctx context.Context) error {
_, _ = d.request(Logout, http.MethodPost, func(req *resty.Request) {
_, _ = d.Request(Logout, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{})
}, nil)
return nil
}
func (d *Pan123) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if f, ok := dir.(File); ok && f.IsLock {
if err := d.unlockSafeBox(f.FileId); err != nil {
return nil, err
}
}
files, err := d.getFiles(ctx, dir.GetID(), dir.GetName())
if err != nil {
return nil, err
msg := strings.ToLower(err.Error())
if strings.Contains(msg, "safe box") || strings.Contains(err.Error(), "保险箱") {
if id, e := strconv.ParseInt(dir.GetID(), 10, 64); e == nil {
if e = d.unlockSafeBox(id); e == nil {
files, err = d.getFiles(ctx, dir.GetID(), dir.GetName())
} else {
return nil, e
}
}
}
if err != nil {
return nil, err
}
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return src, nil
@@ -81,8 +100,8 @@ func (d *Pan123) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
"size": f.Size,
"type": f.Type,
}
resp, err := d.request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
resp, err := d.Request(DownloadInfo, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetHeaders(headers)
}, nil)
if err != nil {
@@ -135,7 +154,7 @@ func (d *Pan123) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
"size": 0,
"type": 1,
}
_, err := d.request(Mkdir, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(Mkdir, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
@@ -146,7 +165,7 @@ func (d *Pan123) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
"fileIdList": []base.Json{{"FileId": srcObj.GetID()}},
"parentFileId": dstDir.GetID(),
}
_, err := d.request(Move, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(Move, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
@@ -158,7 +177,7 @@ func (d *Pan123) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"fileId": srcObj.GetID(),
"fileName": newName,
}
_, err := d.request(Rename, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(Rename, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
@@ -175,7 +194,7 @@ func (d *Pan123) Remove(ctx context.Context, obj model.Obj) error {
"operation": true,
"fileTrashInfoList": []File{f},
}
_, err := d.request(Trash, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(Trash, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
return err
@@ -184,36 +203,26 @@ func (d *Pan123) Remove(ctx context.Context, obj model.Obj) error {
}
}
func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// const DEFAULT int64 = 10485760
h := md5.New()
// need to calculate md5 of the full content
tempFile, err := stream.CacheFullInTempFile()
if err != nil {
return err
func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
etag := file.GetHash().GetHash(utils.MD5)
var err error
if len(etag) < utils.MD5.Width {
_, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return err
}
}
defer func() {
_ = tempFile.Close()
}()
if _, err = utils.CopyWithBuffer(h, tempFile); err != nil {
return err
}
_, err = tempFile.Seek(0, io.SeekStart)
if err != nil {
return err
}
etag := hex.EncodeToString(h.Sum(nil))
data := base.Json{
"driveId": 0,
"duplicate": 2, // 2->覆盖 1->重命名 0->默认
"etag": etag,
"fileName": stream.GetName(),
"fileName": file.GetName(),
"parentFileId": dstDir.GetID(),
"size": stream.GetSize(),
"size": file.GetSize(),
"type": 0,
}
var resp UploadResp
res, err := d.request(UploadRequest, http.MethodPost, func(req *resty.Request) {
res, err := d.Request(UploadRequest, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, &resp)
if err != nil {
@@ -224,7 +233,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
return nil
}
if resp.Data.AccessKeyId == "" || resp.Data.SecretAccessKey == "" || resp.Data.SessionToken == "" {
err = d.newUpload(ctx, &resp, stream, tempFile, up)
err = d.newUpload(ctx, &resp, file, up)
return err
} else {
cfg := &aws.Config{
@@ -238,17 +247,23 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
return err
}
uploader := s3manager.NewUploader(s)
if stream.GetSize() > s3manager.MaxUploadParts*s3manager.DefaultUploadPartSize {
uploader.PartSize = stream.GetSize() / (s3manager.MaxUploadParts - 1)
if file.GetSize() > s3manager.MaxUploadParts*s3manager.DefaultUploadPartSize {
uploader.PartSize = file.GetSize() / (s3manager.MaxUploadParts - 1)
}
input := &s3manager.UploadInput{
Bucket: &resp.Data.Bucket,
Key: &resp.Data.Key,
Body: tempFile,
Body: driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: file,
UpdateProgress: up,
}),
}
_, err = uploader.UploadWithContext(ctx, input)
if err != nil {
return err
}
}
_, err = d.request(UploadComplete, http.MethodPost, func(req *resty.Request) {
_, err = d.Request(UploadComplete, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"fileId": resp.Data.FileId,
}).SetContext(ctx)
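The PartSize adjustment above exists because S3 multipart uploads are capped at s3manager.MaxUploadParts (10,000); at the SDK's default 5MiB part size that tops out near 48.8GiB. The same arithmetic as a sketch:

package main

import "fmt"

func main() {
	const (
		maxUploadParts        int64 = 10000           // s3manager.MaxUploadParts
		defaultUploadPartSize int64 = 5 * 1024 * 1024 // s3manager.DefaultUploadPartSize
	)
	size := int64(60) << 30 // a 60GiB file
	partSize := defaultUploadPartSize
	if size > maxUploadParts*defaultUploadPartSize { // over ~48.8GiB
		partSize = size / (maxUploadParts - 1)
	}
	fmt.Printf("%d bytes per part (≈ %.1f MiB)\n", partSize, float64(partSize)/(1<<20))
}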


@@ -6,8 +6,9 @@ import (
)
type Addition struct {
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
SafePassword string `json:"safe_password"`
driver.RootID
//OrderBy string `json:"order_by" type:"select" options:"file_id,file_name,size,update_at" default:"file_name"`
//OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`


@@ -20,6 +20,7 @@ type File struct {
Etag string `json:"Etag"`
S3KeyFlag string `json:"S3KeyFlag"`
DownloadUrl string `json:"DownloadUrl"`
IsLock bool `json:"IsLock"`
}
func (f File) CreateTime() time.Time {


@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"io"
"math"
"net/http"
"strconv"
@@ -25,7 +24,7 @@ func (d *Pan123) getS3PreSignedUrls(ctx context.Context, upReq *UploadResp, star
"StorageNode": upReq.Data.StorageNode,
}
var s3PreSignedUrls S3PreSignedURLs
_, err := d.request(S3PreSignedUrls, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(S3PreSignedUrls, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, &s3PreSignedUrls)
if err != nil {
@@ -44,7 +43,7 @@ func (d *Pan123) getS3Auth(ctx context.Context, upReq *UploadResp, start, end in
"uploadId": upReq.Data.UploadId,
}
var s3PreSignedUrls S3PreSignedURLs
_, err := d.request(S3Auth, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(S3Auth, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, &s3PreSignedUrls)
if err != nil {
@@ -63,21 +62,31 @@ func (d *Pan123) completeS3(ctx context.Context, upReq *UploadResp, file model.F
"key": upReq.Data.Key,
"uploadId": upReq.Data.UploadId,
}
_, err := d.request(UploadCompleteV2, http.MethodPost, func(req *resty.Request) {
_, err := d.Request(UploadCompleteV2, http.MethodPost, func(req *resty.Request) {
req.SetBody(data).SetContext(ctx)
}, nil)
return err
}
func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, reader io.Reader, up driver.UpdateProgress) error {
chunkSize := int64(1024 * 1024 * 16)
func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, up driver.UpdateProgress) error {
tmpF, err := file.CacheFullInTempFile()
if err != nil {
return err
}
// fetch s3 pre signed urls
chunkCount := int(math.Ceil(float64(file.GetSize()) / float64(chunkSize)))
size := file.GetSize()
chunkSize := min(size, 16*utils.MB)
chunkCount := int(size / chunkSize)
lastChunkSize := size % chunkSize
if lastChunkSize > 0 {
chunkCount++
} else {
lastChunkSize = chunkSize
}
// only 1 batch is allowed
isMultipart := chunkCount > 1
batchSize := 1
getS3UploadUrl := d.getS3Auth
if isMultipart {
if chunkCount > 1 {
batchSize = 10
getS3UploadUrl = d.getS3PreSignedUrls
}
@@ -86,10 +95,7 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
return ctx.Err()
}
start := i
end := i + batchSize
if end > chunkCount+1 {
end = chunkCount + 1
}
end := min(i+batchSize, chunkCount+1)
s3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, start, end)
if err != nil {
return err
@@ -101,9 +107,9 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
}
curSize := chunkSize
if j == chunkCount {
curSize = file.GetSize() - (int64(chunkCount)-1)*chunkSize
curSize = lastChunkSize
}
err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.LimitReader(reader, chunkSize), curSize, false, getS3UploadUrl)
err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.NewSectionReader(tmpF, chunkSize*int64(j-1), curSize), curSize, false, getS3UploadUrl)
if err != nil {
return err
}
@@ -114,12 +120,12 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
return d.completeS3(ctx, upReq, file, chunkCount > 1)
}
func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader io.Reader, curSize int64, retry bool, getS3UploadUrl func(ctx context.Context, upReq *UploadResp, start int, end int) (*S3PreSignedURLs, error)) error {
func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader *io.SectionReader, curSize int64, retry bool, getS3UploadUrl func(ctx context.Context, upReq *UploadResp, start int, end int) (*S3PreSignedURLs, error)) error {
uploadUrl := s3PreSignedUrls.Data.PreSignedUrls[strconv.Itoa(cur)]
if uploadUrl == "" {
return fmt.Errorf("upload url is empty, s3PreSignedUrls: %+v", s3PreSignedUrls)
}
req, err := http.NewRequest("PUT", uploadUrl, reader)
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, reader))
if err != nil {
return err
}
@@ -142,6 +148,7 @@ func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSign
}
s3PreSignedUrls.Data.PreSignedUrls = newS3PreSignedUrls.Data.PreSignedUrls
// retry
reader.Seek(0, io.SeekStart)
return d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, cur, end, reader, curSize, true, getS3UploadUrl)
}
if res.StatusCode != http.StatusOK {
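The retry path above now rewinds with reader.Seek(0, io.SeekStart) before resending, which only works because each chunk is an *io.SectionReader over the cached temp file rather than a one-shot io.LimitReader over the stream. A small sketch of why that matters:

package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	file := strings.NewReader("abcdefghij")         // any io.ReaderAt works
	chunk := io.NewSectionReader(file, 4, 3)        // a window over bytes 4..6

	buf := make([]byte, 3)
	_, _ = io.ReadFull(chunk, buf)
	fmt.Println(string(buf)) // first attempt reads "efg"

	_, _ = chunk.Seek(0, io.SeekStart) // retry path: rewind the same window
	_, _ = io.ReadFull(chunk, buf)
	fmt.Println(string(buf)) // the retried read sees identical bytes: "efg"
}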


@@ -43,6 +43,7 @@ const (
S3Auth = MainApi + "/file/s3_upload_object/auth"
UploadCompleteV2 = MainApi + "/file/upload_complete/v2"
S3Complete = MainApi + "/file/s3_complete_multipart_upload"
SafeBoxUnlock = MainApi + "/restful/goapi/v1/file/safe_box/auth/unlockbox"
//AuthKeySalt = "8-8D$sL8gPjom7bk#cY"
)
@@ -161,12 +162,12 @@ func (d *Pan123) login() error {
}
res, err := base.RestyClient.R().
SetHeaders(map[string]string{
"origin": "https://www.123pan.com",
"referer": "https://www.123pan.com/",
"user-agent": "Dart/2.19(dart:io)-alist",
"origin": "https://www.123pan.com",
"referer": "https://www.123pan.com/",
//"user-agent": "Dart/2.19(dart:io)-alist",
"platform": "web",
"app-version": "3",
//"user-agent": base.UserAgent,
"user-agent": base.UserAgent,
}).
SetBody(body).Post(SignIn)
if err != nil {
@@ -194,13 +195,15 @@ func (d *Pan123) login() error {
// return &authKey, nil
//}
func (d *Pan123) request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
func (d *Pan123) Request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
isRetry := false
do:
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"origin": "https://www.123pan.com",
"referer": "https://www.123pan.com/",
"authorization": "Bearer " + d.AccessToken,
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) alist-client",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
"platform": "web",
"app-version": "3",
//"user-agent": base.UserAgent,
@@ -223,18 +226,35 @@ func (d *Pan123) request(url string, method string, callback base.ReqCallback, r
body := res.Body()
code := utils.Json.Get(body, "code").ToInt()
if code != 0 {
if code == 401 {
if !isRetry && code == 401 {
err := d.login()
if err != nil {
return nil, err
}
return d.request(url, method, callback, resp)
isRetry = true
goto do
}
return nil, errors.New(jsoniter.Get(body, "message").ToString())
}
return body, nil
}
func (d *Pan123) unlockSafeBox(fileId int64) error {
if _, ok := d.safeBoxUnlocked.Load(fileId); ok {
return nil
}
data := base.Json{"password": d.SafePassword}
url := fmt.Sprintf("%s?fileId=%d", SafeBoxUnlock, fileId)
_, err := d.Request(url, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
}, nil)
if err != nil {
return err
}
d.safeBoxUnlocked.Store(fileId, true)
return nil
}
func (d *Pan123) getFiles(ctx context.Context, parentId string, name string) ([]File, error) {
page := 1
total := 0
@@ -260,10 +280,19 @@ func (d *Pan123) getFiles(ctx context.Context, parentId string, name string) ([]
"operateType": "4",
"inDirectSpace": "false",
}
_res, err := d.request(FileList, http.MethodGet, func(req *resty.Request) {
_res, err := d.Request(FileList, http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(query)
}, &resp)
if err != nil {
msg := strings.ToLower(err.Error())
if strings.Contains(msg, "safe box") || strings.Contains(err.Error(), "保险箱") {
if fid, e := strconv.ParseInt(parentId, 10, 64); e == nil {
if e = d.unlockSafeBox(fid); e == nil {
return d.getFiles(ctx, parentId, name)
}
return nil, e
}
}
return nil, err
}
log.Debug(string(_res))

drivers/123_open/api.go (new file, 191 lines)

@@ -0,0 +1,191 @@
package _123Open
import (
"fmt"
"github.com/go-resty/resty/v2"
"net/http"
)
const (
// baseurl
ApiBaseURL = "https://open-api.123pan.com"
// auth
ApiToken = "/api/v1/access_token"
// file list
ApiFileList = "/api/v2/file/list"
// direct link
ApiGetDirectLink = "/api/v1/direct-link/url"
// mkdir
ApiMakeDir = "/upload/v1/file/mkdir"
// remove
ApiRemove = "/api/v1/file/trash"
// upload
ApiUploadDomainURL = "/upload/v2/file/domain"
ApiSingleUploadURL = "/upload/v2/file/single/create"
ApiCreateUploadURL = "/upload/v2/file/create"
ApiUploadSliceURL = "/upload/v2/file/slice"
ApiUploadCompleteURL = "/upload/v2/file/upload_complete"
// move
ApiMove = "/api/v1/file/move"
// rename
ApiRename = "/api/v1/file/name"
)
type Response[T any] struct {
Code int `json:"code"`
Message string `json:"message"`
Data T `json:"data"`
}
type TokenResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data TokenData `json:"data"`
}
type TokenData struct {
AccessToken string `json:"accessToken"`
ExpiredAt string `json:"expiredAt"`
}
type FileListResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data FileListData `json:"data"`
}
type FileListData struct {
LastFileId int64 `json:"lastFileId"`
FileList []File `json:"fileList"`
}
type DirectLinkResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data DirectLinkData `json:"data"`
}
type DirectLinkData struct {
URL string `json:"url"`
}
type MakeDirRequest struct {
Name string `json:"name"`
ParentID int64 `json:"parentID"`
}
type MakeDirResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data MakeDirData `json:"data"`
}
type MakeDirData struct {
DirID int64 `json:"dirID"`
}
type RemoveRequest struct {
FileIDs []int64 `json:"fileIDs"`
}
type UploadCreateResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data UploadCreateData `json:"data"`
}
type UploadCreateData struct {
FileID int64 `json:"fileId"`
Reuse bool `json:"reuse"`
PreuploadID string `json:"preuploadId"`
SliceSize int64 `json:"sliceSize"`
Servers []string `json:"servers"`
}
type UploadUrlResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data UploadUrlData `json:"data"`
}
type UploadUrlData struct {
PresignedURL string `json:"presignedUrl"`
}
type UploadCompleteResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data UploadCompleteData `json:"data"`
}
type UploadCompleteData struct {
FileID int `json:"fileID"`
Completed bool `json:"completed"`
}
func (d *Open123) Request(endpoint string, method string, setup func(*resty.Request), result any) (*resty.Response, error) {
client := resty.New()
token, err := d.tm.getToken()
if err != nil {
return nil, err
}
req := client.R().
SetHeader("Authorization", "Bearer "+token).
SetHeader("Platform", "open_platform").
SetHeader("Content-Type", "application/json").
SetResult(result)
if setup != nil {
setup(req)
}
switch method {
case http.MethodGet:
return req.Get(ApiBaseURL + endpoint)
case http.MethodPost:
return req.Post(ApiBaseURL + endpoint)
case http.MethodPut:
return req.Put(ApiBaseURL + endpoint)
default:
return nil, fmt.Errorf("unsupported method: %s", method)
}
}
func (d *Open123) RequestTo(fullURL string, method string, setup func(*resty.Request), result any) (*resty.Response, error) {
client := resty.New()
token, err := d.tm.getToken()
if err != nil {
return nil, err
}
req := client.R().
SetHeader("Authorization", "Bearer "+token).
SetHeader("Platform", "open_platform").
SetHeader("Content-Type", "application/json").
SetResult(result)
if setup != nil {
setup(req)
}
switch method {
case http.MethodGet:
return req.Get(fullURL)
case http.MethodPost:
return req.Post(fullURL)
case http.MethodPut:
return req.Put(fullURL)
default:
return nil, fmt.Errorf("unsupported method: %s", method)
}
}
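The generic Response[T] envelope above lets every endpoint share one error convention: code != 0 means failure. A hypothetical helper (not part of the driver) showing how that convention could be centralized:

package _123Open

import "fmt"

// unwrap returns the payload of a Response[T], converting the API's
// "code != 0" convention into a Go error. Illustrative sketch only.
func unwrap[T any](r Response[T]) (T, error) {
	if r.Code != 0 {
		var zero T
		return zero, fmt.Errorf("api error %d: %s", r.Code, r.Message)
	}
	return r.Data, nil
}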

drivers/123_open/driver.go (new file, 294 lines)

@@ -0,0 +1,294 @@
package _123Open
import (
"context"
"fmt"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"net/http"
"strconv"
"time"
)
type Open123 struct {
model.Storage
Addition
UploadThread int
tm *tokenManager
}
func (d *Open123) Config() driver.Config {
return config
}
func (d *Open123) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Open123) Init(ctx context.Context) error {
d.tm = newTokenManager(d.ClientID, d.ClientSecret)
if _, err := d.tm.getToken(); err != nil {
return fmt.Errorf("token 初始化失败: %w", err)
}
return nil
}
func (d *Open123) Drop(ctx context.Context) error {
return nil
}
func (d *Open123) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
parentFileId, err := strconv.ParseInt(dir.GetID(), 10, 64)
if err != nil {
return nil, err
}
fileLastId := int64(0)
var results []File
for fileLastId != -1 {
files, err := d.getFiles(parentFileId, 100, fileLastId)
if err != nil {
return nil, err
}
for _, f := range files.Data.FileList {
if f.Trashed == 0 {
results = append(results, f)
}
}
fileLastId = files.Data.LastFileId
}
objs := make([]model.Obj, 0, len(results))
for _, f := range results {
objs = append(objs, f)
}
return objs, nil
}
func (d *Open123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if file.IsDir() {
return nil, errs.LinkIsDir
}
fileID := file.GetID()
var result DirectLinkResp
url := fmt.Sprintf("%s?fileID=%s", ApiGetDirectLink, fileID)
_, err := d.Request(url, http.MethodGet, nil, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("get link failed: %s", result.Message)
}
linkURL := result.Data.URL
if d.PrivateKey != "" {
if d.UID == 0 {
return nil, fmt.Errorf("uid is required when private key is set")
}
duration := time.Duration(d.ValidDuration)
if duration <= 0 {
duration = 30
}
signedURL, err := SignURL(linkURL, d.PrivateKey, d.UID, duration*time.Minute)
if err != nil {
return nil, err
}
linkURL = signedURL
}
return &model.Link{
URL: linkURL,
}, nil
}
func (d *Open123) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
parentID, err := strconv.ParseInt(parentDir.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid parent ID: %w", err)
}
var result MakeDirResp
reqBody := MakeDirRequest{
Name: dirName,
ParentID: parentID,
}
_, err = d.Request(ApiMakeDir, http.MethodPost, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("mkdir failed: %s", result.Message)
}
newDir := File{
FileId: result.Data.DirID,
FileName: dirName,
Type: 1,
ParentFileId: int(parentID),
Size: 0,
Trashed: 0,
}
return newDir, nil
}
func (d *Open123) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
srcID, err := strconv.ParseInt(srcObj.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid src file ID: %w", err)
}
dstID, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid dest dir ID: %w", err)
}
var result Response[any]
reqBody := map[string]interface{}{
"fileIDs": []int64{srcID},
"toParentFileID": dstID,
}
_, err = d.Request(ApiMove, http.MethodPost, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("move failed: %s", result.Message)
}
files, err := d.getFiles(dstID, 100, 0)
if err != nil {
return nil, fmt.Errorf("move succeed but failed to get target dir: %w", err)
}
for _, f := range files.Data.FileList {
if f.FileId == srcID {
return f, nil
}
}
return nil, fmt.Errorf("move succeed but file not found in target dir")
}
func (d *Open123) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
srcID, err := strconv.ParseInt(srcObj.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid file ID: %w", err)
}
var result Response[any]
reqBody := map[string]interface{}{
"fileId": srcID,
"fileName": newName,
}
_, err = d.Request(ApiRename, http.MethodPut, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("rename failed: %s", result.Message)
}
parentID := 0
if file, ok := srcObj.(File); ok {
parentID = file.ParentFileId
}
files, err := d.getFiles(int64(parentID), 100, 0)
if err != nil {
return nil, fmt.Errorf("rename succeed but failed to get parent dir: %w", err)
}
for _, f := range files.Data.FileList {
if f.FileId == srcID {
return f, nil
}
}
return nil, fmt.Errorf("rename succeed but file not found in parent dir")
}
func (d *Open123) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return nil, errs.NotSupport
}
func (d *Open123) Remove(ctx context.Context, obj model.Obj) error {
idStr := obj.GetID()
id, err := strconv.ParseInt(idStr, 10, 64)
if err != nil {
return fmt.Errorf("invalid file ID: %w", err)
}
var result Response[any]
reqBody := RemoveRequest{
FileIDs: []int64{id},
}
_, err = d.Request(ApiRemove, http.MethodPost, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return err
}
if result.Code != 0 {
return fmt.Errorf("remove failed: %s", result.Message)
}
return nil
}
func (d *Open123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
etag := file.GetHash().GetHash(utils.MD5)
if len(etag) < utils.MD5.Width {
up = model.UpdateProgressWithRange(up, 50, 100)
_, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return err
}
}
createResp, err := d.create(parentFileId, file.GetName(), etag, file.GetSize(), 2, false)
if err != nil {
return err
}
if createResp.Data.Reuse {
return nil
}
return d.Upload(ctx, file, parentFileId, createResp, up)
}
func (d *Open123) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
return nil, errs.NotSupport
}
func (d *Open123) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
return nil, errs.NotSupport
}
func (d *Open123) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
return nil, errs.NotSupport
}
func (d *Open123) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
return nil, errs.NotSupport
}
//func (d *Open123) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Open123)(nil)

drivers/123_open/meta.go (new file, 36 lines)

@@ -0,0 +1,36 @@
package _123Open
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootID
ClientID string `json:"client_id" required:"true" label:"Client ID"`
ClientSecret string `json:"client_secret" required:"true" label:"Client Secret"`
PrivateKey string `json:"private_key"`
UID uint64 `json:"uid" type:"number"`
ValidDuration int64 `json:"valid_duration" type:"number" default:"30" help:"minutes"`
}
var config = driver.Config{
Name: "123 Open",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Open123{}
})
}

drivers/123_open/sign.go (new file, 27 lines)

@@ -0,0 +1,27 @@
package _123Open
import (
"crypto/md5"
"fmt"
"math/rand"
"net/url"
"time"
)
func SignURL(originURL, privateKey string, uid uint64, validDuration time.Duration) (string, error) {
if privateKey == "" {
return originURL, nil
}
parsed, err := url.Parse(originURL)
if err != nil {
return "", err
}
ts := time.Now().Add(validDuration).Unix()
randInt := rand.Int()
signature := fmt.Sprintf("%d-%d-%d-%x", ts, randInt, uid, md5.Sum([]byte(fmt.Sprintf("%s-%d-%d-%d-%s",
parsed.Path, ts, randInt, uid, privateKey))))
query := parsed.Query()
query.Add("auth_key", signature)
parsed.RawQuery = query.Encode()
return parsed.String(), nil
}
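A usage sketch for SignURL with placeholder credentials: the signed link gains an auth_key parameter of the form <expiry>-<rand>-<uid>-<md5 of "path-expiry-rand-uid-key">, which is what the Link method earlier in this diff appends when PrivateKey is set.

package _123Open

import (
	"fmt"
	"time"
)

func ExampleSignURL() {
	// placeholder key and UID; real values come from the driver Addition
	signed, err := SignURL("https://cdn.example.com/my/file.bin", "private-key", 12345, 30*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(signed) // .../my/file.bin?auth_key=<expiry>-<rand>-<uid>-<md5>
}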

drivers/123_open/token.go (new file, 85 lines)

@@ -0,0 +1,85 @@
package _123Open
import (
"bytes"
"encoding/json"
"fmt"
"net/http"
"sync"
"time"
)
const tokenURL = ApiBaseURL + ApiToken
type tokenManager struct {
clientID string
clientSecret string
mu sync.Mutex
accessToken string
expireTime time.Time
}
func newTokenManager(clientID, clientSecret string) *tokenManager {
return &tokenManager{
clientID: clientID,
clientSecret: clientSecret,
}
}
func (tm *tokenManager) getToken() (string, error) {
tm.mu.Lock()
defer tm.mu.Unlock()
if tm.accessToken != "" && time.Now().Before(tm.expireTime.Add(-5*time.Minute)) {
return tm.accessToken, nil
}
reqBody := map[string]string{
"clientID": tm.clientID,
"clientSecret": tm.clientSecret,
}
body, _ := json.Marshal(reqBody)
req, err := http.NewRequest("POST", tokenURL, bytes.NewBuffer(body))
if err != nil {
return "", err
}
req.Header.Set("Platform", "open_platform")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
var result TokenResp
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return "", err
}
if result.Code != 0 {
return "", fmt.Errorf("get token failed: %s", result.Message)
}
tm.accessToken = result.Data.AccessToken
expireAt, err := time.Parse(time.RFC3339, result.Data.ExpiredAt)
if err != nil {
return "", fmt.Errorf("parse expire time failed: %w", err)
}
tm.expireTime = expireAt
return tm.accessToken, nil
}
func (tm *tokenManager) buildHeaders() (http.Header, error) {
token, err := tm.getToken()
if err != nil {
return nil, err
}
header := http.Header{}
header.Set("Authorization", "Bearer "+token)
header.Set("Platform", "open_platform")
header.Set("Content-Type", "application/json")
return header, nil
}
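A usage sketch for the token manager, with placeholder credentials: getToken refreshes five minutes before ExpiredAt, and the mutex means concurrent callers share a single refresh instead of racing the token endpoint.

package _123Open

import "fmt"

func ExampleTokenManager() {
	tm := newTokenManager("client-id", "client-secret")
	if _, err := tm.getToken(); err != nil { // first call requests a token
		fmt.Println("token init failed:", err)
		return
	}
	_, _ = tm.getToken() // cached: no second HTTP request until near expiry
}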

drivers/123_open/types.go (new file, 70 lines)

@@ -0,0 +1,70 @@
package _123Open
import (
"fmt"
"github.com/alist-org/alist/v3/pkg/utils"
"time"
)
type File struct {
FileName string `json:"filename"`
Size int64 `json:"size"`
CreateAt string `json:"createAt"`
UpdateAt string `json:"updateAt"`
FileId int64 `json:"fileId"`
Type int `json:"type"`
Etag string `json:"etag"`
S3KeyFlag string `json:"s3KeyFlag"`
ParentFileId int `json:"parentFileId"`
Category int `json:"category"`
Status int `json:"status"`
Trashed int `json:"trashed"`
}
func (f File) GetID() string {
return fmt.Sprint(f.FileId)
}
func (f File) GetName() string {
return f.FileName
}
func (f File) GetSize() int64 {
return f.Size
}
func (f File) IsDir() bool {
return f.Type == 1
}
func (f File) GetModified() string {
return f.UpdateAt
}
func (f File) GetThumb() string {
return ""
}
func (f File) ModTime() time.Time {
t, err := time.Parse("2006-01-02 15:04:05", f.UpdateAt)
if err != nil {
return time.Time{}
}
return t
}
func (f File) CreateTime() time.Time {
t, err := time.Parse("2006-01-02 15:04:05", f.CreateAt)
if err != nil {
return time.Time{}
}
return t
}
func (f File) GetHash() utils.HashInfo {
return utils.NewHashInfo(utils.MD5, f.Etag)
}
func (f File) GetPath() string {
return ""
}

drivers/123_open/upload.go (new file, 282 lines)

@@ -0,0 +1,282 @@
package _123Open
import (
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"golang.org/x/sync/errgroup"
"io"
"mime/multipart"
"net/http"
"runtime"
"strconv"
"time"
)
func (d *Open123) create(parentFileID int64, filename, etag string, size int64, duplicate int, containDir bool) (*UploadCreateResp, error) {
var resp UploadCreateResp
_, err := d.Request(ApiCreateUploadURL, http.MethodPost, func(req *resty.Request) {
body := base.Json{
"parentFileID": parentFileID,
"filename": filename,
"etag": etag,
"size": size,
}
if duplicate > 0 {
body["duplicate"] = duplicate
}
if containDir {
body["containDir"] = true
}
req.SetBody(body)
}, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *Open123) GetUploadDomains() ([]string, error) {
var resp struct {
Code int `json:"code"`
Message string `json:"message"`
Data []string `json:"data"`
}
_, err := d.Request(ApiUploadDomainURL, http.MethodGet, nil, &resp)
if err != nil {
return nil, err
}
if resp.Code != 0 {
return nil, fmt.Errorf("get upload domain failed: %s", resp.Message)
}
return resp.Data, nil
}
func (d *Open123) UploadSingle(ctx context.Context, createResp *UploadCreateResp, file model.FileStreamer, parentID int64) error {
domain := createResp.Data.Servers[0]
etag := file.GetHash().GetHash(utils.MD5)
if len(etag) < utils.MD5.Width {
_, _, err := stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return err
}
}
reader, err := file.RangeRead(http_range.Range{Start: 0, Length: file.GetSize()})
if err != nil {
return err
}
reader = driver.NewLimitedUploadStream(ctx, reader)
var b bytes.Buffer
mw := multipart.NewWriter(&b)
mw.WriteField("parentFileID", fmt.Sprint(parentID))
mw.WriteField("filename", file.GetName())
mw.WriteField("etag", etag)
mw.WriteField("size", fmt.Sprint(file.GetSize()))
fw, _ := mw.CreateFormFile("file", file.GetName())
_, err = io.Copy(fw, reader)
mw.Close()
req, err := http.NewRequestWithContext(ctx, "POST", domain+ApiSingleUploadURL, &b)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+d.tm.accessToken)
req.Header.Set("Platform", "open_platform")
req.Header.Set("Content-Type", mw.FormDataContentType())
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var result struct {
Code int `json:"code"`
Message string `json:"message"`
Data struct {
FileID int64 `json:"fileID"`
Completed bool `json:"completed"`
} `json:"data"`
}
body, _ := io.ReadAll(resp.Body)
if err := json.Unmarshal(body, &result); err != nil {
return fmt.Errorf("unmarshal response error: %v, body: %s", err, string(body))
}
if result.Code != 0 {
return fmt.Errorf("upload failed: %s", result.Message)
}
if !result.Data.Completed || result.Data.FileID == 0 {
return fmt.Errorf("upload incomplete or missing fileID")
}
return nil
}
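UploadSingle buffers the entire multipart body in memory via bytes.Buffer, which is tolerable under the 1 GiB single-upload threshold but costly near it. A hedged sketch of a streaming alternative using io.Pipe — the Content-Length becomes unknown, so the request goes out chunked; verify the server accepts that before adopting it:

// Sketch: build the multipart body on the fly instead of buffering it all.
func streamedMultipart(src io.Reader, name string, fields map[string]string) (io.Reader, string) {
	pr, pw := io.Pipe()
	mw := multipart.NewWriter(pw)
	go func() {
		var err error
		for k, v := range fields {
			if err = mw.WriteField(k, v); err != nil {
				break
			}
		}
		if err == nil {
			var fw io.Writer
			if fw, err = mw.CreateFormFile("file", name); err == nil {
				_, err = io.Copy(fw, src)
			}
		}
		if cerr := mw.Close(); err == nil {
			err = cerr
		}
		pw.CloseWithError(err) // a nil err closes the pipe cleanly
	}()
	return pr, mw.FormDataContentType()
}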
func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, parentID int64, createResp *UploadCreateResp, up driver.UpdateProgress) error {
// model.FileStreamer already exposes CacheFullInTempFile, so no type assertion is needed.
if _, err := file.CacheFullInTempFile(); err != nil {
return err
}
size := file.GetSize()
chunkSize := createResp.Data.SliceSize
uploadNums := (size + chunkSize - 1) / chunkSize
uploadDomain := createResp.Data.Servers[0]
if d.UploadThread <= 0 {
cpuCores := runtime.NumCPU()
threads := cpuCores * 2
if threads < 4 {
threads = 4
}
if threads > 16 {
threads = 16
}
d.UploadThread = threads
fmt.Printf("[Upload] Auto set upload concurrency: %d (CPU cores=%d)\n", d.UploadThread, cpuCores)
}
fmt.Printf("[Upload] File size: %d bytes, chunk size: %d bytes, total slices: %d, concurrency: %d\n",
size, chunkSize, uploadNums, d.UploadThread)
// Rapid upload: the server already has this content, so skip the transfer entirely.
if createResp.Data.Reuse {
up(100)
return nil
}
if size <= 1<<30 {
return d.UploadSingle(ctx, createResp, file, parentID)
}
client := resty.New()
semaphore := make(chan struct{}, d.UploadThread)
threadG, gCtx := errgroup.WithContext(ctx)
var progressArr = make([]int64, uploadNums)
for partIndex := int64(0); partIndex < uploadNums; partIndex++ {
partIndex := partIndex
semaphore <- struct{}{}
threadG.Go(func() error {
defer func() { <-semaphore }()
// Skip remaining work once another slice has already failed.
if err := gCtx.Err(); err != nil {
return err
}
offset := partIndex * chunkSize
length := min(chunkSize, size-offset)
partNumber := partIndex + 1
fmt.Printf("[Slice %d] Starting read from offset %d, length %d\n", partNumber, offset, length)
reader, err := file.RangeRead(http_range.Range{Start: offset, Length: length})
if err != nil {
return fmt.Errorf("[Slice %d] RangeRead error: %v", partNumber, err)
}
buf := make([]byte, length)
n, err := io.ReadFull(reader, buf)
if err != nil && err != io.EOF {
return fmt.Errorf("[Slice %d] Read error: %v", partNumber, err)
}
buf = buf[:n]
hash := md5.Sum(buf)
sliceMD5Str := hex.EncodeToString(hash[:])
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
writer.WriteField("preuploadID", createResp.Data.PreuploadID)
writer.WriteField("sliceNo", strconv.FormatInt(partNumber, 10))
writer.WriteField("sliceMD5", sliceMD5Str)
partName := fmt.Sprintf("%s.part%d", file.GetName(), partNumber)
fw, _ := writer.CreateFormFile("slice", partName)
fw.Write(buf)
writer.Close()
resp, err := client.R().
SetHeader("Authorization", "Bearer "+d.tm.accessToken).
SetHeader("Platform", "open_platform").
SetHeader("Content-Type", writer.FormDataContentType()).
SetBody(body.Bytes()).
Post(uploadDomain + ApiUploadSliceURL)
if err != nil {
return fmt.Errorf("[Slice %d] Upload HTTP error: %v", partNumber, err)
}
if resp.StatusCode() != 200 {
return fmt.Errorf("[Slice %d] Upload failed with status: %s, resp: %s", partNumber, resp.Status(), resp.String())
}
progressArr[partIndex] = length
var totalUploaded int64 = 0
for _, v := range progressArr {
totalUploaded += v
}
if up != nil {
percent := float64(totalUploaded) / float64(size) * 100
up(percent)
}
fmt.Printf("[Slice %d] MD5: %s\n", partNumber, sliceMD5Str)
fmt.Printf("[Slice %d] Upload finished\n", partNumber)
return nil
})
}
if err := threadG.Wait(); err != nil {
return err
}
var completeResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data struct {
Completed bool `json:"completed"`
FileID int64 `json:"fileID"`
} `json:"data"`
}
for {
reqBody := fmt.Sprintf(`{"preuploadID":"%s"}`, createResp.Data.PreuploadID)
req, err := http.NewRequestWithContext(ctx, "POST", uploadDomain+ApiUploadCompleteURL, bytes.NewBufferString(reqBody))
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+d.tm.accessToken)
req.Header.Set("Platform", "open_platform")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
if err := json.Unmarshal(body, &completeResp); err != nil {
return fmt.Errorf("completion response unmarshal error: %v, body: %s", err, string(body))
}
if completeResp.Code != 0 {
return fmt.Errorf("completion API returned error code %d: %s", completeResp.Code, completeResp.Message)
}
if completeResp.Data.Completed && completeResp.Data.FileID != 0 {
fmt.Printf("[Upload] Upload completed successfully. FileID: %d\n", completeResp.Data.FileID)
break
}
time.Sleep(time.Second)
}
up(100)
return nil
}
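The manual semaphore channel in Upload works, but recent errgroup versions ship a built-in limiter that expresses the same bound more compactly. A sketch, assuming the vendored golang.org/x/sync is v0.1.0 or newer; uploadPart is a hypothetical helper wrapping the per-slice body above:

// Sketch: replace the semaphore channel with errgroup's limiter.
threadG, gCtx := errgroup.WithContext(ctx)
threadG.SetLimit(d.UploadThread)
for partIndex := int64(0); partIndex < uploadNums; partIndex++ {
	partIndex := partIndex
	threadG.Go(func() error {
		if err := gCtx.Err(); err != nil { // stop scheduling once any slice fails
			return err
		}
		return d.uploadPart(gCtx, createResp, file, partIndex) // hypothetical helper
	})
}
if err := threadG.Wait(); err != nil {
	return err
}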

20
drivers/123_open/util.go Normal file
View File

@@ -0,0 +1,20 @@
package _123Open
import (
"fmt"
"net/http"
)
func (d *Open123) getFiles(parentFileId int64, limit int, lastFileId int64) (*FileListResp, error) {
var result FileListResp
url := fmt.Sprintf("%s?parentFileId=%d&limit=%d&lastFileId=%d", ApiFileList, parentFileId, limit, lastFileId)
_, err := d.Request(url, http.MethodGet, nil, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("list error: %s", result.Message)
}
return &result, nil
}
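getFiles fetches a single page, so a full directory listing loops on lastFileId. A sketch of that loop — the Data.FileList and Data.LastFileID fields and the -1 end-of-list sentinel are assumptions about FileListResp, which this diff does not show:

// Sketch: page through getFiles until the API signals the last page.
func (d *Open123) listAll(parentFileId int64) ([]File, error) {
	var all []File
	var lastFileId int64
	for {
		resp, err := d.getFiles(parentFileId, 100, lastFileId)
		if err != nil {
			return nil, err
		}
		all = append(all, resp.Data.FileList...)
		if resp.Data.LastFileID == -1 { // assumed sentinel for "no more pages"
			return all, nil
		}
		lastFileId = resp.Data.LastFileID
	}
}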

View File

@@ -4,12 +4,14 @@ import (
"context"
"encoding/base64"
"fmt"
"golang.org/x/time/rate"
"net/http"
"net/url"
"sync"
"time"
"golang.org/x/time/rate"
_123 "github.com/alist-org/alist/v3/drivers/123"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
@@ -23,6 +25,7 @@ type Pan123Share struct {
model.Storage
Addition
apiRateLimit sync.Map
ref *_123.Pan123
}
func (d *Pan123Share) Config() driver.Config {
@@ -39,7 +42,17 @@ func (d *Pan123Share) Init(ctx context.Context) error {
return nil
}
func (d *Pan123Share) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*_123.Pan123)
if ok {
d.ref = refStorage
return nil
}
return fmt.Errorf("ref: storage is not 123Pan")
}
func (d *Pan123Share) Drop(ctx context.Context) error {
d.ref = nil
return nil
}

View File

@@ -53,6 +53,9 @@ func GetApi(rawUrl string) string {
}
func (d *Pan123Share) request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
if d.ref != nil {
return d.ref.Request(url, method, callback, resp)
}
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"origin": "https://www.123pan.com",

View File

@@ -2,28 +2,32 @@ package _139
import (
"context"
"encoding/base64"
"encoding/xml"
"fmt"
"io"
"net/http"
"path"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/cron"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/pkg/utils/random"
log "github.com/sirupsen/logrus"
)
type Yun139 struct {
model.Storage
Addition
cron *cron.Cron
Account string
cron *cron.Cron
Account string
ref *Yun139
PersonalCloudHost string
}
func (d *Yun139) Config() driver.Config {
@@ -35,56 +39,79 @@ func (d *Yun139) GetAddition() driver.Additional {
}
func (d *Yun139) Init(ctx context.Context) error {
if d.Authorization == "" {
return fmt.Errorf("authorization is empty")
}
d.cron = cron.NewCron(time.Hour * 24 * 7)
d.cron.Do(func() {
if d.ref == nil {
if len(d.Authorization) == 0 {
return fmt.Errorf("authorization is empty")
}
err := d.refreshToken()
if err != nil {
log.Errorf("%+v", err)
return err
}
})
// Query Route Policy
var resp QueryRoutePolicyResp
_, err = d.requestRoute(base.Json{
"userInfo": base.Json{
"userType": 1,
"accountType": 1,
"accountName": d.Account},
"modAddrType": 1,
}, &resp)
if err != nil {
return err
}
for _, policyItem := range resp.Data.RoutePolicyList {
if policyItem.ModName == "personal" {
d.PersonalCloudHost = policyItem.HttpsUrl
break
}
}
if len(d.PersonalCloudHost) == 0 {
return fmt.Errorf("PersonalCloudHost is empty")
}
d.cron = cron.NewCron(time.Hour * 12)
d.cron.Do(func() {
err := d.refreshToken()
if err != nil {
log.Errorf("%+v", err)
}
})
}
switch d.Addition.Type {
case MetaPersonalNew:
if len(d.Addition.RootFolderID) == 0 {
d.RootFolderID = "/"
}
return nil
case MetaPersonal:
if len(d.Addition.RootFolderID) == 0 {
d.RootFolderID = "root"
}
fallthrough
case MetaGroup:
if len(d.Addition.RootFolderID) == 0 {
d.RootFolderID = d.CloudID
}
case MetaFamily:
decode, err := base64.StdEncoding.DecodeString(d.Authorization)
if err != nil {
return err
}
decodeStr := string(decode)
splits := strings.Split(decodeStr, ":")
if len(splits) < 2 {
return fmt.Errorf("authorization is invalid, splits < 2")
}
d.Account = splits[1]
_, err = d.post("/orchestration/personalCloud/user/v1.0/qryUserExternInfo", base.Json{
"qryUserExternInfoReq": base.Json{
"commonAccountInfo": base.Json{
"account": d.Account,
"accountType": 1,
},
},
}, nil)
return err
default:
return errs.NotImplement
}
return nil
}
func (d *Yun139) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*Yun139)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *Yun139) Drop(ctx context.Context) error {
if d.cron != nil {
d.cron.Stop()
}
d.ref = nil
return nil
}
@@ -96,6 +123,8 @@ func (d *Yun139) List(ctx context.Context, dir model.Obj, args model.ListArgs) (
return d.getFiles(dir.GetID())
case MetaFamily:
return d.familyGetFiles(dir.GetID())
case MetaGroup:
return d.groupGetFiles(dir.GetID())
default:
return nil, errs.NotImplement
}
@@ -108,9 +137,11 @@ func (d *Yun139) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
case MetaPersonalNew:
url, err = d.personalGetLink(file.GetID())
case MetaPersonal:
fallthrough
case MetaFamily:
url, err = d.getLink(file.GetID())
case MetaFamily:
url, err = d.familyGetLink(file.GetID(), file.GetPath())
case MetaGroup:
url, err = d.groupGetLink(file.GetID(), file.GetPath())
default:
return nil, errs.NotImplement
}
@@ -131,7 +162,7 @@ func (d *Yun139) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
"type": "folder",
"fileRenameMode": "force_rename",
}
pathname := "/hcy/file/create"
pathname := "/file/create"
_, err = d.personalPost(pathname, data, nil)
case MetaPersonal:
data := base.Json{
@@ -139,7 +170,7 @@ func (d *Yun139) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
"parentCatalogID": parentDir.GetID(),
"newCatalogName": dirName,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
},
@@ -150,12 +181,26 @@ func (d *Yun139) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
data := base.Json{
"cloudID": d.CloudID,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
"docLibName": dirName,
"path": path.Join(parentDir.GetPath(), parentDir.GetID()),
}
pathname := "/orchestration/familyCloud/cloudCatalog/v1.0/createCloudDoc"
pathname := "/orchestration/familyCloud-rebuild/cloudCatalog/v1.0/createCloudDoc"
_, err = d.post(pathname, data, nil)
case MetaGroup:
data := base.Json{
"catalogName": dirName,
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
"groupID": d.CloudID,
"parentFileId": parentDir.GetID(),
"path": path.Join(parentDir.GetPath(), parentDir.GetID()),
}
pathname := "/orchestration/group-rebuild/catalog/v1.0/createGroupCatalog"
_, err = d.post(pathname, data, nil)
default:
err = errs.NotImplement
@@ -170,12 +215,40 @@ func (d *Yun139) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj,
"fileIds": []string{srcObj.GetID()},
"toParentFileId": dstDir.GetID(),
}
pathname := "/hcy/file/batchMove"
pathname := "/file/batchMove"
_, err := d.personalPost(pathname, data, nil)
if err != nil {
return nil, err
}
return srcObj, nil
case MetaGroup:
var contentList []string
var catalogList []string
if srcObj.IsDir() {
catalogList = append(catalogList, srcObj.GetID())
} else {
contentList = append(contentList, srcObj.GetID())
}
data := base.Json{
"taskType": 3,
"srcType": 2,
"srcGroupID": d.CloudID,
"destType": 2,
"destGroupID": d.CloudID,
"destPath": dstDir.GetPath(),
"contentList": contentList,
"catalogList": catalogList,
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
}
pathname := "/orchestration/group-rebuild/task/v1.0/createBatchOprTask"
_, err := d.post(pathname, data, nil)
if err != nil {
return nil, err
}
return srcObj, nil
case MetaPersonal:
var contentInfoList []string
var catalogInfoList []string
@@ -194,7 +267,7 @@ func (d *Yun139) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj,
"newCatalogID": dstDir.GetID(),
},
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
},
@@ -219,7 +292,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"name": newName,
"description": "",
}
pathname := "/hcy/file/update"
pathname := "/file/update"
_, err = d.personalPost(pathname, data, nil)
case MetaPersonal:
var data base.Json
@@ -229,7 +302,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"catalogID": srcObj.GetID(),
"catalogName": newName,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@@ -239,13 +312,72 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"contentID": srcObj.GetID(),
"contentName": newName,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
pathname = "/orchestration/personalCloud/content/v1.0/updateContentInfo"
}
_, err = d.post(pathname, data, nil)
case MetaGroup:
var data base.Json
var pathname string
if srcObj.IsDir() {
data = base.Json{
"groupID": d.CloudID,
"modifyCatalogID": srcObj.GetID(),
"modifyCatalogName": newName,
"path": srcObj.GetPath(),
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
}
pathname = "/orchestration/group-rebuild/catalog/v1.0/modifyGroupCatalog"
} else {
data = base.Json{
"groupID": d.CloudID,
"contentID": srcObj.GetID(),
"contentName": newName,
"path": srcObj.GetPath(),
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
}
pathname = "/orchestration/group-rebuild/content/v1.0/modifyGroupContent"
}
_, err = d.post(pathname, data, nil)
case MetaFamily:
var data base.Json
var pathname string
if srcObj.IsDir() {
// The web API does not support renaming family cloud folders
// data = base.Json{
// "catalogType": 3,
// "catalogID": srcObj.GetID(),
// "catalogName": newName,
// "commonAccountInfo": base.Json{
// "account": d.getAccount(),
// "accountType": 1,
// },
// "path": srcObj.GetPath(),
// }
// pathname = "/orchestration/familyCloud-rebuild/photoContent/v1.0/modifyCatalogInfo"
return errs.NotImplement
} else {
data = base.Json{
"contentID": srcObj.GetID(),
"contentName": newName,
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
"path": srcObj.GetPath(),
}
pathname = "/orchestration/familyCloud-rebuild/photoContent/v1.0/modifyContentInfo"
}
_, err = d.post(pathname, data, nil)
default:
err = errs.NotImplement
}
@@ -260,7 +392,7 @@ func (d *Yun139) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
"fileIds": []string{srcObj.GetID()},
"toParentFileId": dstDir.GetID(),
}
pathname := "/hcy/file/batchCopy"
pathname := "/file/batchCopy"
_, err := d.personalPost(pathname, data, nil)
return err
case MetaPersonal:
@@ -281,7 +413,7 @@ func (d *Yun139) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
"newCatalogID": dstDir.GetID(),
},
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
},
@@ -300,9 +432,31 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
data := base.Json{
"fileIds": []string{obj.GetID()},
}
pathname := "/hcy/recyclebin/batchTrash"
pathname := "/recyclebin/batchTrash"
_, err := d.personalPost(pathname, data, nil)
return err
case MetaGroup:
var contentList []string
var catalogList []string
// Deletion must use the full path
if obj.IsDir() {
catalogList = append(catalogList, obj.GetPath())
} else {
contentList = append(contentList, path.Join(obj.GetPath(), obj.GetID()))
}
data := base.Json{
"taskType": 2,
"srcGroupID": d.CloudID,
"contentList": contentList,
"catalogList": catalogList,
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
}
pathname := "/orchestration/group-rebuild/task/v1.0/createBatchOprTask"
_, err := d.post(pathname, data, nil)
return err
case MetaPersonal:
fallthrough
case MetaFamily:
@@ -323,7 +477,7 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
"catalogInfoList": catalogInfoList,
},
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
},
@@ -334,13 +488,15 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
"catalogList": catalogInfoList,
"contentList": contentInfoList,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
"sourceCloudID": d.CloudID,
"sourceCatalogType": 1002,
"taskType": 2,
"path": obj.GetPath(),
}
pathname = "/orchestration/familyCloud/batchOprTask/v1.0/createBatchOprTask"
pathname = "/orchestration/familyCloud-rebuild/batchOprTask/v1.0/createBatchOprTask"
}
_, err := d.post(pathname, data, nil)
return err
@@ -349,20 +505,15 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
}
}
const (
_ = iota //ignore first value by assigning to blank identifier
KB = 1 << (10 * iota)
MB
GB
TB
)
func getPartSize(size int64) int64 {
// The cloud drive caps the number of upload parts
if size/GB > 30 {
return 512 * MB
func (d *Yun139) getPartSize(size int64) int64 {
if d.CustomUploadPartSize != 0 {
return d.CustomUploadPartSize
}
return 100 * MB
// The cloud drive caps the number of upload parts
if size/utils.GB > 30 {
return 512 * utils.MB
}
return 100 * utils.MB
}
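The effect of the new getPartSize: an explicit CustomUploadPartSize always wins; otherwise files over 30 GB get 512 MB parts to stay under the provider's part-count cap, and everything else gets 100 MB. Worked numbers, assuming no custom size:

// 40 GB file -> 512 MB parts -> ceil(40960/512) = 80 parts
// 10 GB file -> 100 MB parts -> ceil(10240/100) = 103 parts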
func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
@@ -370,149 +521,288 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
case MetaPersonalNew:
var err error
fullHash := stream.GetHash().GetHash(utils.SHA256)
if len(fullHash) <= 0 {
tmpF, err := stream.CacheFullInTempFile()
if err != nil {
return err
}
fullHash, err = utils.HashFile(utils.SHA256, tmpF)
if len(fullHash) != utils.SHA256.Width {
_, fullHash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA256)
if err != nil {
return err
}
}
// return errs.NotImplement
size := stream.GetSize()
var partSize = d.getPartSize(size)
part := size / partSize
if size%partSize > 0 {
part++
} else if part == 0 {
part = 1
}
partInfos := make([]PartInfo, 0, part)
for i := int64(0); i < part; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
start := i * partSize
byteSize := size - start
if byteSize > partSize {
byteSize = partSize
}
partNumber := i + 1
partInfo := PartInfo{
PartNumber: partNumber,
PartSize: byteSize,
ParallelHashCtx: ParallelHashCtx{
PartOffset: start,
},
}
partInfos = append(partInfos, partInfo)
}
// Keep only the first 100 partInfos
firstPartInfos := partInfos
if len(firstPartInfos) > 100 {
firstPartInfos = firstPartInfos[:100]
}
// Create the upload task to obtain upload info and the upload URLs for the first 100 parts
data := base.Json{
"contentHash": fullHash,
"contentHashAlgorithm": "SHA256",
"contentType": "application/octet-stream",
"parallelUpload": false,
"partInfos": []base.Json{{
"parallelHashCtx": base.Json{
"partOffset": 0,
},
"partNumber": 1,
"partSize": stream.GetSize(),
}},
"size": stream.GetSize(),
"parentFileId": dstDir.GetID(),
"name": stream.GetName(),
"type": "file",
"fileRenameMode": "auto_rename",
"partInfos": firstPartInfos,
"size": size,
"parentFileId": dstDir.GetID(),
"name": stream.GetName(),
"type": "file",
"fileRenameMode": "auto_rename",
}
pathname := "/hcy/file/create"
pathname := "/file/create"
var resp PersonalUploadResp
_, err = d.personalPost(pathname, data, &resp)
if err != nil {
return err
}
if resp.Data.Exist || resp.Data.RapidUpload {
// Check whether the file already exists
// resp.Data.Exist: true means a file with the same name and checksum already exists; the cloud will not add a duplicate, so no manual conflict handling is needed
if resp.Data.Exist {
return nil
}
// Progress
p := driver.NewProgress(stream.GetSize(), up)
// Check whether the file qualifies for rapid upload
// resp.Data.RapidUpload: true means rapid upload is supported, but here we simply check whether part upload URLs were returned
// Rapid upload still requires manual conflict handling
if resp.Data.PartInfos != nil {
// Read the upload URLs for the first 100 parts
uploadPartInfos := resp.Data.PartInfos
// Update Progress
r := io.TeeReader(stream, p)
// Fetch the upload URLs for the remaining parts (the first 100, indices 0-99, came with create)
for i := 100; i < len(partInfos); i += 100 {
end := i + 100
if end > len(partInfos) {
end = len(partInfos)
}
batchPartInfos := partInfos[i:end]
req, err := http.NewRequest("PUT", resp.Data.PartInfos[0].UploadUrl, r)
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Length", fmt.Sprint(stream.GetSize()))
req.Header.Set("Origin", "https://yun.139.com")
req.Header.Set("Referer", "https://yun.139.com/")
req.ContentLength = stream.GetSize()
moredata := base.Json{
"fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId,
"partInfos": batchPartInfos,
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
}
pathname := "/file/getUploadUrl"
var moreresp PersonalUploadUrlResp
_, err = d.personalPost(pathname, moredata, &moreresp)
if err != nil {
return err
}
uploadPartInfos = append(uploadPartInfos, moreresp.Data.PartInfos...)
}
res, err := base.HttpClient.Do(req)
if err != nil {
return err
// Progress
p := driver.NewProgress(size, up)
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
// Upload all parts
for _, uploadPartInfo := range uploadPartInfos {
index := uploadPartInfo.PartNumber - 1
partSize := partInfos[index].PartSize
log.Debugf("[139] uploading part %+v/%+v", index, len(uploadPartInfos))
limitReader := io.LimitReader(rateLimited, partSize)
// Update Progress
r := io.TeeReader(limitReader, p)
req, err := http.NewRequest("PUT", uploadPartInfo.UploadUrl, r)
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "application/octet-stream")
req.Header.Set("Content-Length", fmt.Sprint(partSize))
req.Header.Set("Origin", "https://yun.139.com")
req.Header.Set("Referer", "https://yun.139.com/")
req.ContentLength = partSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
_ = res.Body.Close()
log.Debugf("[139] uploaded: %+v", res)
if res.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
}
data = base.Json{
"contentHash": fullHash,
"contentHashAlgorithm": "SHA256",
"fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId,
}
_, err = d.personalPost("/file/complete", data, nil)
if err != nil {
return err
}
}
_ = res.Body.Close()
log.Debugf("%+v", res)
if res.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
data = base.Json{
"contentHash": fullHash,
"contentHashAlgorithm": "SHA256",
"fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId,
}
_, err = d.personalPost("/hcy/file/complete", data, nil)
if err != nil {
return err
// Handle name conflicts
if resp.Data.FileName != stream.GetName() {
log.Debugf("[139] conflict detected: %s != %s", resp.Data.FileName, stream.GetName())
// Give the server some time to process the data, otherwise the file list may not refresh
time.Sleep(time.Millisecond * 500)
// Refresh and fetch the file list
files, err := d.List(ctx, dstDir, model.ListArgs{Refresh: true})
if err != nil {
return err
}
// Remove the old file
for _, file := range files {
if file.GetName() == stream.GetName() {
log.Debugf("[139] conflict: removing old: %s", file.GetName())
// Rename the old file before deleting it to avoid a lingering conflict
err = d.Rename(ctx, file, stream.GetName()+random.String(4))
if err != nil {
return err
}
err = d.Remove(ctx, file)
if err != nil {
return err
}
break
}
}
// Rename the new file
for _, file := range files {
if file.GetName() == resp.Data.FileName {
log.Debugf("[139] conflict: renaming new: %s => %s", file.GetName(), stream.GetName())
err = d.Rename(ctx, file, stream.GetName())
if err != nil {
return err
}
break
}
}
}
return nil
case MetaPersonal:
fallthrough
case MetaFamily:
// Handle name conflicts
// Fetch the file list
files, err := d.List(ctx, dstDir, model.ListArgs{})
if err != nil {
return err
}
// Remove the old file
for _, file := range files {
if file.GetName() == stream.GetName() {
log.Debugf("[139] conflict: removing old: %s", file.GetName())
// Rename the old file before deleting it to avoid a lingering conflict
err = d.Rename(ctx, file, stream.GetName()+random.String(4))
if err != nil {
return err
}
err = d.Remove(ctx, file)
if err != nil {
return err
}
break
}
}
var reportSize int64
if d.ReportRealSize {
reportSize = stream.GetSize()
} else {
reportSize = 0
}
data := base.Json{
"manualRename": 2,
"operation": 0,
"fileCount": 1,
"totalSize": 0, // 去除上传大小限制
"totalSize": reportSize,
"uploadContentList": []base.Json{{
"contentName": stream.GetName(),
"contentSize": 0, // 去除上传大小限制
"contentSize": reportSize,
// "digest": "5a3231986ce7a6b46e408612d385bafa"
}},
"parentCatalogID": dstDir.GetID(),
"newCatalogName": "",
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
pathname := "/orchestration/personalCloud/uploadAndDownload/v1.0/pcUploadFileRequest"
if d.isFamily() {
// data = d.newJson(base.Json{
// "fileCount": 1,
// "manualRename": 2,
// "operation": 0,
// "path": "",
// "seqNo": "",
// "totalSize": 0,
// "uploadContentList": []base.Json{{
// "contentName": stream.GetName(),
// "contentSize": 0,
// // "digest": "5a3231986ce7a6b46e408612d385bafa"
// }},
// })
// pathname = "/orchestration/familyCloud/content/v1.0/getFileUploadURL"
return errs.NotImplement
data = d.newJson(base.Json{
"fileCount": 1,
"manualRename": 2,
"operation": 0,
"path": path.Join(dstDir.GetPath(), dstDir.GetID()),
"seqNo": random.String(32), //序列号不能为空
"totalSize": reportSize,
"uploadContentList": []base.Json{{
"contentName": stream.GetName(),
"contentSize": reportSize,
// "digest": "5a3231986ce7a6b46e408612d385bafa"
}},
})
pathname = "/orchestration/familyCloud-rebuild/content/v1.0/getFileUploadURL"
}
var resp UploadResp
_, err := d.post(pathname, data, &resp)
_, err = d.post(pathname, data, &resp)
if err != nil {
return err
}
if resp.Data.Result.ResultCode != "0" {
return fmt.Errorf("get file upload url failed with result code: %s, message: %s", resp.Data.Result.ResultCode, resp.Data.Result.ResultDesc)
}
size := stream.GetSize()
// Progress
p := driver.NewProgress(stream.GetSize(), up)
var partSize = getPartSize(stream.GetSize())
part := (stream.GetSize() + partSize - 1) / partSize
if part == 0 {
p := driver.NewProgress(size, up)
var partSize = d.getPartSize(size)
part := size / partSize
if size%partSize > 0 {
part++
} else if part == 0 {
part = 1
}
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
for i := int64(0); i < part; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
start := i * partSize
byteSize := stream.GetSize() - start
if byteSize > partSize {
byteSize = partSize
}
byteSize := min(size-start, partSize)
limitReader := io.LimitReader(stream, byteSize)
limitReader := io.LimitReader(rateLimited, byteSize)
// Update Progress
r := io.TeeReader(limitReader, p)
req, err := http.NewRequest("POST", resp.Data.UploadResult.RedirectionURL, r)
@@ -522,7 +812,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "text/plain;name="+unicode(stream.GetName()))
req.Header.Set("contentSize", strconv.FormatInt(stream.GetSize(), 10))
req.Header.Set("contentSize", strconv.FormatInt(size, 10))
req.Header.Set("range", fmt.Sprintf("bytes=%d-%d", start, start+byteSize-1))
req.Header.Set("uploadtaskID", resp.Data.UploadResult.UploadTaskID)
req.Header.Set("rangeType", "0")
@@ -532,13 +822,23 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil {
return err
}
_ = res.Body.Close()
log.Debugf("%+v", res)
if res.StatusCode != http.StatusOK {
res.Body.Close()
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
bodyBytes, err := io.ReadAll(res.Body)
if err != nil {
return fmt.Errorf("error reading response body: %v", err)
}
var result InterLayerUploadResult
err = xml.Unmarshal(bodyBytes, &result)
if err != nil {
return fmt.Errorf("error parsing XML: %v", err)
}
if result.ResultCode != 0 {
return fmt.Errorf("upload failed with result code: %d, message: %s", result.ResultCode, result.Msg)
}
}
return nil
default:
return errs.NotImplement
@@ -556,7 +856,7 @@ func (d *Yun139) Other(ctx context.Context, args model.OtherArgs) (interface{},
}
switch args.Method {
case "video_preview":
uri = "/hcy/videoPreview/getPreviewInfo"
uri = "/videoPreview/getPreviewInfo"
default:
return nil, errs.NotSupport
}

View File

@@ -9,8 +9,11 @@ type Addition struct {
//Account string `json:"account" required:"true"`
Authorization string `json:"authorization" type:"text" required:"true"`
driver.RootID
Type string `json:"type" type:"select" options:"personal,family,personal_new" default:"personal"`
CloudID string `json:"cloud_id"`
Type string `json:"type" type:"select" options:"personal_new,family,group,personal" default:"personal_new"`
CloudID string `json:"cloud_id"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
ReportRealSize bool `json:"report_real_size" type:"bool" default:"true" help:"Enable to report the real file size during upload"`
UseLargeThumbnail bool `json:"use_large_thumbnail" type:"bool" default:"false" help:"Enable to use large thumbnail for images"`
}
var config = driver.Config{

View File

@@ -7,6 +7,7 @@ import (
const (
MetaPersonal string = "personal"
MetaFamily string = "family"
MetaGroup string = "group"
MetaPersonalNew string = "personal_new"
)
@@ -54,6 +55,7 @@ type Content struct {
//ContentDesc string `json:"contentDesc"`
//ContentType int `json:"contentType"`
//ContentOrigin int `json:"contentOrigin"`
CreateTime string `json:"createTime"`
UpdateTime string `json:"updateTime"`
//CommentCount int `json:"commentCount"`
ThumbnailURL string `json:"thumbnailURL"`
@@ -141,6 +143,13 @@ type UploadResp struct {
} `json:"data"`
}
type InterLayerUploadResult struct {
XMLName xml.Name `xml:"result"`
Text string `xml:",chardata"`
ResultCode int `xml:"resultCode"`
Msg string `xml:"msg"`
}
type CloudContent struct {
ContentID string `json:"contentID"`
//Modifier string `json:"modifier"`
@@ -196,6 +205,37 @@ type QueryContentListResp struct {
} `json:"data"`
}
type QueryGroupContentListResp struct {
BaseResp
Data struct {
Result struct {
ResultCode string `json:"resultCode"`
ResultDesc string `json:"resultDesc"`
} `json:"result"`
GetGroupContentResult struct {
ParentCatalogID string `json:"parentCatalogID"` // the root directory is "0"
CatalogList []struct {
Catalog
Path string `json:"path"`
} `json:"catalogList"`
ContentList []Content `json:"contentList"`
NodeCount int `json:"nodeCount"` // number of files + folders
CtlgCnt int `json:"ctlgCnt"` // number of folders
ContCnt int `json:"contCnt"` // number of files
} `json:"getGroupContentResult"`
} `json:"data"`
}
type ParallelHashCtx struct {
PartOffset int64 `json:"partOffset"`
}
type PartInfo struct {
PartNumber int64 `json:"partNumber"`
PartSize int64 `json:"partSize"`
ParallelHashCtx ParallelHashCtx `json:"parallelHashCtx"`
}
type PersonalThumbnail struct {
Style string `json:"style"`
Url string `json:"url"`
@@ -228,6 +268,7 @@ type PersonalUploadResp struct {
BaseResp
Data struct {
FileId string `json:"fileId"`
FileName string `json:"fileName"`
PartInfos []PersonalPartInfo `json:"partInfos"`
Exist bool `json:"exist"`
RapidUpload bool `json:"rapidUpload"`
@@ -235,11 +276,39 @@ type PersonalUploadResp struct {
}
}
type RefreshTokenResp struct {
XMLName xml.Name `xml:"root"`
Return string `xml:"return"`
Token string `xml:"token"`
Expiretime int32 `xml:"expiretime"`
AccessToken string `xml:"accessToken"`
Desc string `xml:"desc"`
type PersonalUploadUrlResp struct {
BaseResp
Data struct {
FileId string `json:"fileId"`
UploadId string `json:"uploadId"`
PartInfos []PersonalPartInfo `json:"partInfos"`
}
}
type QueryRoutePolicyResp struct {
Success bool `json:"success"`
Code string `json:"code"`
Message string `json:"message"`
Data struct {
RoutePolicyList []struct {
SiteID string `json:"siteID"`
SiteCode string `json:"siteCode"`
ModName string `json:"modName"`
HttpUrl string `json:"httpUrl"`
HttpsUrl string `json:"httpsUrl"`
EnvID string `json:"envID"`
ExtInfo string `json:"extInfo"`
HashName string `json:"hashName"`
ModAddrType int `json:"modAddrType"`
} `json:"routePolicyList"`
} `json:"data"`
}
type RefreshTokenResp struct {
XMLName xml.Name `xml:"root"`
Return string `xml:"return"`
Token string `xml:"token"`
Expiretime int32 `xml:"expiretime"`
AccessToken string `xml:"accessToken"`
Desc string `xml:"desc"`
}
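RefreshTokenResp mirrors the XML body returned by authTokenRefresh.do. A decoding sketch — the success check against Return == "0" and the re-encode step are assumptions about the API, not shown in this diff:

// Sketch: decode the refresh-token XML response.
var tokenResp RefreshTokenResp
if err := xml.Unmarshal(body, &tokenResp); err != nil {
	return err
}
if tokenResp.Return != "0" { // assumed success code
	return fmt.Errorf("refresh token failed: %s", tokenResp.Desc)
}
// rebuild d.Authorization from tokenResp.Token (hypothetical step)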

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"net/http"
"net/url"
"path"
"sort"
"strconv"
"strings"
@@ -13,9 +14,9 @@ import (
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/pkg/utils/random"
"github.com/alist-org/alist/v3/internal/op"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
log "github.com/sirupsen/logrus"
@@ -54,14 +55,38 @@ func getTime(t string) time.Time {
}
func (d *Yun139) refreshToken() error {
url := "https://aas.caiyun.feixin.10086.cn:443/tellin/authTokenRefresh.do"
var resp RefreshTokenResp
if d.ref != nil {
return d.ref.refreshToken()
}
decode, err := base64.StdEncoding.DecodeString(d.Authorization)
if err != nil {
return err
return fmt.Errorf("authorization decode failed: %s", err)
}
decodeStr := string(decode)
splits := strings.Split(decodeStr, ":")
if len(splits) < 3 {
return fmt.Errorf("authorization is invalid, splits < 3")
}
d.Account = splits[1]
strs := strings.Split(splits[2], "|")
if len(strs) < 4 {
return fmt.Errorf("authorization is invalid, strs < 4")
}
expiration, err := strconv.ParseInt(strs[3], 10, 64)
if err != nil {
return fmt.Errorf("authorization is invalid")
}
expiration -= time.Now().UnixMilli()
if expiration > 1000*60*60*24*15 {
// No refresh needed while the Authorization is valid for more than 15 days
return nil
}
if expiration < 0 {
return fmt.Errorf("authorization has expired")
}
url := "https://aas.caiyun.feixin.10086.cn:443/tellin/authTokenRefresh.do"
var resp RefreshTokenResp
reqBody := "<root><token>" + splits[2] + "</token><account>" + splits[1] + "</account><clienttype>656</clienttype></root>"
_, err = base.RestyClient.R().
ForceContentType("application/xml").
@@ -99,21 +124,22 @@ func (d *Yun139) request(pathname string, method string, callback base.ReqCallba
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"CMS-DEVICE": "default",
"Authorization": "Basic " + d.Authorization,
"Authorization": "Basic " + d.getAuthorization(),
"mcloud-channel": "1000101",
"mcloud-client": "10701",
//"mcloud-route": "001",
"mcloud-sign": fmt.Sprintf("%s,%s,%s", ts, randStr, sign),
//"mcloud-skey":"",
"mcloud-version": "6.6.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|6.6.0|chrome|95.0.4638.69|uwIy75obnsRPIwlJSd7D9GhUvFwG96ce||macos 10.15.2||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
"x-m4c-caller": "PC",
"x-m4c-src": "10002",
"x-SvcType": svcType,
"mcloud-version": "7.14.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
"x-m4c-caller": "PC",
"x-m4c-src": "10002",
"x-SvcType": svcType,
"Inner-Hcy-Router-Https": "1",
})
var e BaseResp
@@ -131,6 +157,64 @@ func (d *Yun139) request(pathname string, method string, callback base.ReqCallba
}
return res.Body(), nil
}
func (d *Yun139) requestRoute(data interface{}, resp interface{}) ([]byte, error) {
url := "https://user-njs.yun.139.com/user/route/qryRoutePolicy"
req := base.RestyClient.R()
randStr := random.String(16)
ts := time.Now().Format("2006-01-02 15:04:05")
req.SetBody(data)
body, err := utils.Json.Marshal(req.Body)
if err != nil {
return nil, err
}
sign := calSign(string(body), ts, randStr)
svcType := "1"
if d.isFamily() {
svcType = "2"
}
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"CMS-DEVICE": "default",
"Authorization": "Basic " + d.getAuthorization(),
"mcloud-channel": "1000101",
"mcloud-client": "10701",
//"mcloud-route": "001",
"mcloud-sign": fmt.Sprintf("%s,%s,%s", ts, randStr, sign),
//"mcloud-skey":"",
"mcloud-version": "7.14.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
"x-m4c-caller": "PC",
"x-m4c-src": "10002",
"x-SvcType": svcType,
"Inner-Hcy-Router-Https": "1",
})
var e BaseResp
req.SetResult(&e)
res, err := req.Execute(http.MethodPost, url)
if err != nil {
return nil, err
}
log.Debugln(res.String())
if !e.Success {
return nil, errors.New(e.Message)
}
if resp != nil {
err = utils.Json.Unmarshal(res.Body(), resp)
if err != nil {
return nil, err
}
}
return res.Body(), nil
}
func (d *Yun139) post(pathname string, data interface{}, resp interface{}) ([]byte, error) {
return d.request(pathname, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
@@ -151,7 +235,7 @@ func (d *Yun139) getFiles(catalogID string) ([]model.Obj, error) {
"catalogSortType": 0,
"contentSortType": 0,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@@ -199,7 +283,7 @@ func (d *Yun139) newJson(data map[string]interface{}) base.Json {
"cloudID": d.CloudID,
"cloudType": 1,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@@ -220,10 +304,11 @@ func (d *Yun139) familyGetFiles(catalogID string) ([]model.Obj, error) {
"sortDirection": 1,
})
var resp QueryContentListResp
_, err := d.post("/orchestration/familyCloud/content/v1.0/queryContentList", data, &resp)
_, err := d.post("/orchestration/familyCloud-rebuild/content/v1.2/queryContentList", data, &resp)
if err != nil {
return nil, err
}
path := resp.Data.Path
for _, catalog := range resp.Data.CloudCatalogList {
f := model.Object{
ID: catalog.CatalogID,
@@ -232,6 +317,7 @@ func (d *Yun139) familyGetFiles(catalogID string) ([]model.Obj, error) {
IsFolder: true,
Modified: getTime(catalog.LastUpdateTime),
Ctime: getTime(catalog.CreateTime),
Path: path, // Path of the folder's parent directory
}
files = append(files, &f)
}
@@ -243,13 +329,14 @@ func (d *Yun139) familyGetFiles(catalogID string) ([]model.Obj, error) {
Size: content.ContentSize,
Modified: getTime(content.LastUpdateTime),
Ctime: getTime(content.CreateTime),
Path: path, // Path of the directory containing the file
},
Thumbnail: model.Thumbnail{Thumbnail: content.ThumbnailURL},
//Thumbnail: content.BigthumbnailURL,
}
files = append(files, &f)
}
if 100*pageNum > resp.Data.TotalCount {
if resp.Data.TotalCount == 0 {
break
}
pageNum++
@@ -257,12 +344,67 @@ func (d *Yun139) familyGetFiles(catalogID string) ([]model.Obj, error) {
return files, nil
}
func (d *Yun139) groupGetFiles(catalogID string) ([]model.Obj, error) {
pageNum := 1
files := make([]model.Obj, 0)
for {
data := d.newJson(base.Json{
"groupID": d.CloudID,
"catalogID": path.Base(catalogID),
"contentSortType": 0,
"sortDirection": 1,
"startNumber": pageNum,
"endNumber": pageNum + 99,
"path": path.Join(d.RootFolderID, catalogID),
})
var resp QueryGroupContentListResp
_, err := d.post("/orchestration/group-rebuild/content/v1.0/queryGroupContentList", data, &resp)
if err != nil {
return nil, err
}
path := resp.Data.GetGroupContentResult.ParentCatalogID
for _, catalog := range resp.Data.GetGroupContentResult.CatalogList {
f := model.Object{
ID: catalog.CatalogID,
Name: catalog.CatalogName,
Size: 0,
IsFolder: true,
Modified: getTime(catalog.UpdateTime),
Ctime: getTime(catalog.CreateTime),
Path: catalog.Path, // the folder's real Path, starting with root:/
}
files = append(files, &f)
}
for _, content := range resp.Data.GetGroupContentResult.ContentList {
f := model.ObjThumb{
Object: model.Object{
ID: content.ContentID,
Name: content.ContentName,
Size: content.ContentSize,
Modified: getTime(content.UpdateTime),
Ctime: getTime(content.CreateTime),
Path: path, // Path of the directory containing the file
},
Thumbnail: model.Thumbnail{Thumbnail: content.ThumbnailURL},
//Thumbnail: content.BigthumbnailURL,
}
files = append(files, &f)
}
if (pageNum + 99) > resp.Data.GetGroupContentResult.NodeCount {
break
}
pageNum = pageNum + 100
}
return files, nil
}
func (d *Yun139) getLink(contentId string) (string, error) {
data := base.Json{
"appName": "",
"contentID": contentId,
"commonAccountInfo": base.Json{
"account": d.Account,
"account": d.getAccount(),
"accountType": 1,
},
}
@@ -273,6 +415,32 @@ func (d *Yun139) getLink(contentId string) (string, error) {
}
return jsoniter.Get(res, "data", "downloadURL").ToString(), nil
}
func (d *Yun139) familyGetLink(contentId string, path string) (string, error) {
data := d.newJson(base.Json{
"contentID": contentId,
"path": path,
})
res, err := d.post("/orchestration/familyCloud-rebuild/content/v1.0/getFileDownLoadURL",
data, nil)
if err != nil {
return "", err
}
return jsoniter.Get(res, "data", "downloadURL").ToString(), nil
}
func (d *Yun139) groupGetLink(contentId string, path string) (string, error) {
data := d.newJson(base.Json{
"contentID": contentId,
"groupID": d.CloudID,
"path": path,
})
res, err := d.post("/orchestration/group-rebuild/groupManage/v1.0/getGroupFileDownLoadURL",
data, nil)
if err != nil {
return "", err
}
return jsoniter.Get(res, "data", "downloadURL").ToString(), nil
}
func unicode(str string) string {
textQuoted := strconv.QuoteToASCII(str)
@@ -281,7 +449,7 @@ func unicode(str string) string {
}
func (d *Yun139) personalRequest(pathname string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
url := "https://personal-kd-njs.yun.139.com" + pathname
url := d.getPersonalCloudHost() + pathname
req := base.RestyClient.R()
randStr := random.String(16)
ts := time.Now().Format("2006-01-02 15:04:05")
@@ -299,17 +467,15 @@ func (d *Yun139) personalRequest(pathname string, method string, callback base.R
}
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"Authorization": "Basic " + d.Authorization,
"Authorization": "Basic " + d.getAuthorization(),
"Caller": "web",
"Cms-Device": "default",
"Mcloud-Channel": "1000101",
"Mcloud-Client": "10701",
"Mcloud-Route": "001",
"Mcloud-Sign": fmt.Sprintf("%s,%s,%s", ts, randStr, sign),
"Mcloud-Version": "7.13.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|7.13.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"Mcloud-Version": "7.14.0",
"x-DeviceInfo": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
"x-m4c-caller": "PC",
@@ -318,7 +484,7 @@ func (d *Yun139) personalRequest(pathname string, method string, callback base.R
"X-Yun-Api-Version": "v1",
"X-Yun-App-Channel": "10000034",
"X-Yun-Channel-Source": "10000034",
"X-Yun-Client-Info": "||9|7.13.0|chrome|120.0.0.0|||windows 10||zh-CN|||dW5kZWZpbmVk||",
"X-Yun-Client-Info": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||dW5kZWZpbmVk||",
"X-Yun-Module-Type": "100",
"X-Yun-Svc-Type": "1",
})
@@ -370,7 +536,7 @@ func (d *Yun139) personalGetFiles(fileId string) ([]model.Obj, error) {
"parentFileId": fileId,
}
var resp PersonalListResp
_, err := d.personalPost("/hcy/file/list", data, &resp)
_, err := d.personalPost("/file/list", data, &resp)
if err != nil {
return nil, err
}
@@ -390,7 +556,15 @@ func (d *Yun139) personalGetFiles(fileId string) ([]model.Obj, error) {
} else {
var Thumbnails = item.Thumbnails
var ThumbnailUrl string
if len(Thumbnails) > 0 {
if d.UseLargeThumbnail {
for _, thumb := range Thumbnails {
if strings.Contains(thumb.Style, "Large") {
ThumbnailUrl = thumb.Url
break
}
}
}
if ThumbnailUrl == "" && len(Thumbnails) > 0 {
ThumbnailUrl = Thumbnails[len(Thumbnails)-1].Url
}
f = &model.ObjThumb{
@@ -418,7 +592,7 @@ func (d *Yun139) personalGetLink(fileId string) (string, error) {
data := base.Json{
"fileId": fileId,
}
res, err := d.personalPost("/hcy/file/getDownloadUrl",
res, err := d.personalPost("/file/getDownloadUrl",
data, nil)
if err != nil {
return "", err
@@ -430,3 +604,22 @@ func (d *Yun139) personalGetLink(fileId string) (string, error) {
return jsoniter.Get(res, "data", "url").ToString(), nil
}
}
func (d *Yun139) getAuthorization() string {
if d.ref != nil {
return d.ref.getAuthorization()
}
return d.Authorization
}
func (d *Yun139) getAccount() string {
if d.ref != nil {
return d.ref.getAccount()
}
return d.Account
}
func (d *Yun139) getPersonalCloudHost() string {
if d.ref != nil {
return d.ref.getPersonalCloudHost()
}
return d.PersonalCloudHost
}
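These three getters complete the reference pattern introduced by InitReference: a linked storage transparently borrows its parent's credentials and personal-cloud host. A minimal in-package sketch of the wiring (account and host values are placeholders):

// Sketch: credential lookups follow the ref chain to the parent storage.
parent := &Yun139{Account: "138xxxx0000", PersonalCloudHost: "https://personal.example.com"}
child := &Yun139{}
_ = child.InitReference(parent) // sets child.ref = parent
fmt.Println(child.getAccount())           // prints the parent's account
fmt.Println(child.getPersonalCloudHost()) // prints the parent's host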

View File

@@ -365,7 +365,7 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
log.Debugf("uploadData: %+v", uploadData)
requestURL := uploadData.RequestURL
uploadHeaders := strings.Split(decodeURIComponent(uploadData.RequestHeader), "&")
req, err := http.NewRequest(http.MethodPut, requestURL, bytes.NewReader(byteData))
req, err := http.NewRequest(http.MethodPut, requestURL, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
@@ -375,11 +375,11 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
req.Header.Set(v[0:i], v[i+1:])
}
r, err := base.HttpClient.Do(req)
log.Debugf("%+v %+v", r, r.Request.Header)
r.Body.Close()
if err != nil {
return err
}
log.Debugf("%+v %+v", r, r.Request.Header)
_ = r.Body.Close()
up(float64(i) * 100 / float64(count))
}
fileMd5 := hex.EncodeToString(md5Sum.Sum(nil))

View File

@@ -1,8 +1,8 @@
package _189pc
import (
"container/ring"
"context"
"fmt"
"net/http"
"strconv"
"strings"
@@ -14,6 +14,7 @@ import (
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Cloud189PC struct {
@@ -29,10 +30,11 @@ type Cloud189PC struct {
uploadThread int
familyTransferFolder *ring.Ring
familyTransferFolder *Cloud189Folder
cleanFamilyTransferFile func()
storageConfig driver.Config
ref *Cloud189PC
}
func (y *Cloud189PC) Config() driver.Config {
@@ -47,9 +49,18 @@ func (y *Cloud189PC) GetAddition() driver.Additional {
}
func (y *Cloud189PC) Init(ctx context.Context) (err error) {
// Compatibility with the legacy upload API
y.storageConfig.NoOverwriteUpload = y.isFamily() && (y.Addition.RapidUpload || y.Addition.UploadMethod == "old")
y.storageConfig = config
if y.isFamily() {
// Compatibility with the legacy upload API
if y.Addition.RapidUpload || y.Addition.UploadMethod == "old" {
y.storageConfig.NoOverwriteUpload = true
}
} else {
// Family cloud transfer does not support overwrite upload
if y.Addition.FamilyTransfer {
y.storageConfig.NoOverwriteUpload = true
}
}
// Handle personal cloud and family cloud parameters
if y.isFamily() && y.RootFolderID == "-11" {
y.RootFolderID = ""
@@ -64,20 +75,22 @@ func (y *Cloud189PC) Init(ctx context.Context) (err error) {
y.uploadThread, y.UploadThread = 3, "3"
}
// Initialize the request client
if y.client == nil {
y.client = base.NewRestyClient().SetHeaders(map[string]string{
"Accept": "application/json;charset=UTF-8",
"Referer": WEB_URL,
})
}
if y.ref == nil {
// Initialize the request client
if y.client == nil {
y.client = base.NewRestyClient().SetHeaders(map[string]string{
"Accept": "application/json;charset=UTF-8",
"Referer": WEB_URL,
})
}
// Avoid logging in repeatedly
identity := utils.GetMD5EncodeStr(y.Username + y.Password)
if !y.isLogin() || y.identity != identity {
y.identity = identity
if err = y.login(); err != nil {
return
// Avoid logging in repeatedly
identity := utils.GetMD5EncodeStr(y.Username + y.Password)
if !y.isLogin() || y.identity != identity {
y.identity = identity
if err = y.login(); err != nil {
return
}
}
}
@@ -88,13 +101,14 @@ func (y *Cloud189PC) Init(ctx context.Context) (err error) {
}
}
// Create the transfer folder to guard against duplicate file names
// Create the transfer folder
if y.FamilyTransfer {
if y.familyTransferFolder, err = y.createFamilyTransferFolder(32); err != nil {
if err := y.createFamilyTransferFolder(); err != nil {
return err
}
}
// Throttle the cleanup of transferred files
y.cleanFamilyTransferFile = utils.NewThrottle2(time.Minute, func() {
if err := y.cleanFamilyTransfer(context.TODO()); err != nil {
utils.Log.Errorf("cleanFamilyTransferFolderError:%s", err)
@@ -103,7 +117,17 @@ func (y *Cloud189PC) Init(ctx context.Context) (err error) {
return
}
func (d *Cloud189PC) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*Cloud189PC)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (y *Cloud189PC) Drop(ctx context.Context) error {
y.ref = nil
return nil
}
@@ -314,35 +338,49 @@ func (y *Cloud189PC) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
if !isFamily && y.FamilyTransfer {
// Redirect the upload target to the family cloud folder
transferDstDir := dstDir
dstDir = (y.familyTransferFolder.Value).(*Cloud189Folder)
y.familyTransferFolder = y.familyTransferFolder.Next()
dstDir = y.familyTransferFolder
// Use a temporary file name
srcName := stream.GetName()
stream = &WrapFileStreamer{
FileStreamer: stream,
Name: fmt.Sprintf("0%s.transfer", uuid.NewString()),
}
// Upload via the family cloud
isFamily = true
overwrite = false
defer func() {
if newObj != nil {
// Batch tasks occasionally fail to delete
y.cleanFamilyTransferFile()
// 转存家庭云文件到个人云
err = y.SaveFamilyFileToPersonCloud(context.TODO(), y.FamilyID, newObj, transferDstDir, true)
task := BatchTaskInfo{
FileId: newObj.GetID(),
FileName: newObj.GetName(),
IsFolder: BoolToNumber(newObj.IsDir()),
// Delete the source file from the family cloud
go y.Delete(context.TODO(), y.FamilyID, newObj)
// Batch tasks occasionally fail to delete
go y.cleanFamilyTransferFile()
// Return the error if the transfer failed
if err != nil {
return
}
// Delete the source file
if resp, err := y.CreateBatchTask("DELETE", y.FamilyID, "", nil, task); err == nil {
y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
// Delete permanently
if resp, err := y.CreateBatchTask("CLEAR_RECYCLE", y.FamilyID, "", nil, task); err == nil {
y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
// Locate the transferred file
var file *Cloud189File
file, err = y.findFileByName(context.TODO(), newObj.GetName(), transferDstDir.GetID(), false)
if err != nil {
if err == errs.ObjectNotFound {
err = fmt.Errorf("unknown error: No transfer file obtained %s", newObj.GetName())
}
return
}
newObj = nil
// Rename the transferred file back
newObj, err = y.Rename(context.TODO(), file, srcName)
if err != nil {
// Delete the transferred file if renaming failed
_ = y.Delete(context.TODO(), "", file)
}
return
}
}()
}

View File

@@ -18,6 +18,7 @@ import (
"strings"
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils/random"
)
@@ -208,3 +209,12 @@ func IF[V any](o bool, t V, f V) V {
}
return f
}
type WrapFileStreamer struct {
model.FileStreamer
Name string
}
func (w *WrapFileStreamer) GetName() string {
return w.Name
}
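WrapFileStreamer is a one-field decorator: it overrides only GetName and delegates everything else to the embedded model.FileStreamer, letting the family-transfer path in Put push the stream under a throwaway name and rename it back afterwards. Usage, as in Put above:

// Upload under a temporary, collision-free name.
stream = &WrapFileStreamer{
	FileStreamer: stream,
	Name:         fmt.Sprintf("0%s.transfer", uuid.NewString()),
}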

View File

@@ -2,30 +2,32 @@ package _189pc
import (
"bytes"
"container/ring"
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"fmt"
"io"
"math"
"net/http"
"net/http/cookiejar"
"net/url"
"os"
"regexp"
"sort"
"strconv"
"strings"
"time"
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/errgroup"
"github.com/alist-org/alist/v3/pkg/utils"
@@ -57,11 +59,11 @@ const (
func (y *Cloud189PC) SignatureHeader(url, method, params string, isFamily bool) map[string]string {
dateOfGmt := getHttpDateStr()
sessionKey := y.tokenInfo.SessionKey
sessionSecret := y.tokenInfo.SessionSecret
sessionKey := y.getTokenInfo().SessionKey
sessionSecret := y.getTokenInfo().SessionSecret
if isFamily {
sessionKey = y.tokenInfo.FamilySessionKey
sessionSecret = y.tokenInfo.FamilySessionSecret
sessionKey = y.getTokenInfo().FamilySessionKey
sessionSecret = y.getTokenInfo().FamilySessionSecret
}
header := map[string]string{
@@ -74,9 +76,9 @@ func (y *Cloud189PC) SignatureHeader(url, method, params string, isFamily bool)
}
func (y *Cloud189PC) EncryptParams(params Params, isFamily bool) string {
sessionSecret := y.tokenInfo.SessionSecret
sessionSecret := y.getTokenInfo().SessionSecret
if isFamily {
sessionSecret = y.tokenInfo.FamilySessionSecret
sessionSecret = y.getTokenInfo().FamilySessionSecret
}
if params != nil {
return AesECBEncrypt(params.Encode(), sessionSecret[:16])
@@ -85,7 +87,7 @@ func (y *Cloud189PC) EncryptParams(params Params, isFamily bool) string {
}
func (y *Cloud189PC) request(url, method string, callback base.ReqCallback, params Params, resp interface{}, isFamily ...bool) ([]byte, error) {
req := y.client.R().SetQueryParams(clientSuffix())
req := y.getClient().R().SetQueryParams(clientSuffix())
// Set the params
paramsData := y.EncryptParams(params, isBool(isFamily...))
@@ -174,8 +176,8 @@ func (y *Cloud189PC) put(ctx context.Context, url string, headers map[string]str
}
var erron RespErr
jsoniter.Unmarshal(body, &erron)
xml.Unmarshal(body, &erron)
_ = jsoniter.Unmarshal(body, &erron)
_ = xml.Unmarshal(body, &erron)
if erron.HasError() {
return nil, &erron
}
@@ -185,39 +187,9 @@ func (y *Cloud189PC) put(ctx context.Context, url string, headers map[string]str
return body, nil
}
func (y *Cloud189PC) getFiles(ctx context.Context, fileId string, isFamily bool) ([]model.Obj, error) {
fullUrl := API_URL
if isFamily {
fullUrl += "/family/file"
}
fullUrl += "/listFiles.action"
res := make([]model.Obj, 0, 130)
res := make([]model.Obj, 0, 100)
for pageNum := 1; ; pageNum++ {
var resp Cloud189FilesResp
_, err := y.get(fullUrl, func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(map[string]string{
"folderId": fileId,
"fileType": "0",
"mediaAttr": "0",
"iconOption": "5",
"pageNum": fmt.Sprint(pageNum),
"pageSize": "130",
})
if isFamily {
r.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"orderBy": toFamilyOrderBy(y.OrderBy),
"descending": toDesc(y.OrderDirection),
})
} else {
r.SetQueryParams(map[string]string{
"recursive": "0",
"orderBy": y.OrderBy,
"descending": toDesc(y.OrderDirection),
})
}
}, &resp, isFamily)
resp, err := y.getFilesWithPage(ctx, fileId, isFamily, pageNum, 1000, y.OrderBy, y.OrderDirection)
if err != nil {
return nil, err
}
@@ -236,6 +208,63 @@ func (y *Cloud189PC) getFiles(ctx context.Context, fileId string, isFamily bool)
return res, nil
}
func (y *Cloud189PC) getFilesWithPage(ctx context.Context, fileId string, isFamily bool, pageNum int, pageSize int, orderBy string, orderDirection string) (*Cloud189FilesResp, error) {
fullUrl := API_URL
if isFamily {
fullUrl += "/family/file"
}
fullUrl += "/listFiles.action"
var resp Cloud189FilesResp
_, err := y.get(fullUrl, func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(map[string]string{
"folderId": fileId,
"fileType": "0",
"mediaAttr": "0",
"iconOption": "5",
"pageNum": fmt.Sprint(pageNum),
"pageSize": fmt.Sprint(pageSize),
})
if isFamily {
r.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"orderBy": toFamilyOrderBy(orderBy),
"descending": toDesc(orderDirection),
})
} else {
r.SetQueryParams(map[string]string{
"recursive": "0",
"orderBy": orderBy,
"descending": toDesc(orderDirection),
})
}
}, &resp, isFamily)
if err != nil {
return nil, err
}
return &resp, nil
}
func (y *Cloud189PC) findFileByName(ctx context.Context, searchName string, folderId string, isFamily bool) (*Cloud189File, error) {
for pageNum := 1; ; pageNum++ {
resp, err := y.getFilesWithPage(ctx, folderId, isFamily, pageNum, 10, "filename", "asc")
if err != nil {
return nil, err
}
// Break out once everything has been fetched
if resp.FileListAO.Count == 0 {
return nil, errs.ObjectNotFound
}
for i := 0; i < len(resp.FileListAO.FileList); i++ {
file := resp.FileListAO.FileList[i]
if file.Name == searchName {
return &file, nil
}
}
}
}
func (y *Cloud189PC) login() (err error) {
// Initialize the parameters required for login
if y.loginParam == nil {
@@ -295,7 +324,7 @@ func (y *Cloud189PC) login() (err error) {
_, err = y.client.R().
SetResult(&tokenInfo).SetError(&erron).
SetQueryParams(clientSuffix()).
SetQueryParam("redirectURL", url.QueryEscape(loginresp.ToUrl)).
SetQueryParam("redirectURL", loginresp.ToUrl).
Post(API_URL + "/getSessionForPC.action")
if err != nil {
return
@@ -403,6 +432,9 @@ func (y *Cloud189PC) initLoginParam() error {
// Refresh the session
func (y *Cloud189PC) refreshSession() (err error) {
if y.ref != nil {
return y.ref.refreshSession()
}
var erron RespErr
var userSessionResp UserSessionResp
_, err = y.client.R().
@@ -441,12 +473,8 @@ func (y *Cloud189PC) refreshSession() (err error) {
// Regular upload
// Zero-byte files cannot be uploaded
func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
var sliceSize = partSize(file.GetSize())
count := int(math.Ceil(float64(file.GetSize()) / float64(sliceSize)))
lastPartSize := file.GetSize() % sliceSize
if file.GetSize() > 0 && lastPartSize == 0 {
lastPartSize = sliceSize
}
size := file.GetSize()
sliceSize := partSize(size)
params := Params{
"parentFolderId": dstDir.GetID(),
@@ -478,24 +506,32 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
fileMd5 := md5.New()
silceMd5 := md5.New()
count := int(size / sliceSize)
lastPartSize := size % sliceSize
if lastPartSize > 0 {
count++
} else {
lastPartSize = sliceSize
}
fileMd5 := utils.MD5.NewFunc()
silceMd5 := utils.MD5.NewFunc()
silceMd5Hexs := make([]string, 0, count)
teeReader := io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5))
byteSize := sliceSize
for i := 1; i <= count; i++ {
if utils.IsCanceled(upCtx) {
break
}
byteData := make([]byte, sliceSize)
if i == count {
byteData = byteData[:lastPartSize]
byteSize = lastPartSize
}
byteData := make([]byte, byteSize)
// read one chunk
silceMd5.Reset()
if _, err := io.ReadFull(io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5)), byteData); err != io.EOF && err != nil {
if _, err := io.ReadFull(teeReader, byteData); err != io.EOF && err != nil {
sem.Release(1)
return nil, err
}
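The count/lastPartSize rewrite above trades math.Ceil on float64 for pure integer arithmetic, which stays exact for arbitrarily large sizes. A small self-contained check of that arithmetic (assuming size > 0, as the driver guarantees for multipart uploads):

package main

import "fmt"

// chunkLayout mirrors the integer-only slice math used above.
func chunkLayout(size, sliceSize int64) (count, lastPartSize int64) {
    count = size / sliceSize
    lastPartSize = size % sliceSize
    if lastPartSize > 0 {
        count++
    } else {
        lastPartSize = sliceSize
    }
    return count, lastPartSize
}

func main() {
    for _, size := range []int64{1, 10, 20, 21} {
        count, last := chunkLayout(size, 10)
        fmt.Printf("size=%d -> %d part(s), last part %d byte(s)\n", size, count, last)
    }
}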
@@ -505,6 +541,10 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
partInfo := fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
uploadUrls, err := y.GetMultiUploadUrls(ctx, isFamily, initMultiUpload.Data.UploadFileID, partInfo)
if err != nil {
return err
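The semaphore added here caps in-flight slice uploads at three: each worker must Acquire before fetching its upload URL and Releases on every exit path. A runnable sketch of the same gating with golang.org/x/sync/semaphore (the slice upload is reduced to a print):

package main

import (
    "context"
    "fmt"
    "sync"

    "golang.org/x/sync/semaphore"
)

func main() {
    ctx := context.Background()
    sem := semaphore.NewWeighted(3) // at most 3 slices in flight
    var wg sync.WaitGroup
    for i := 1; i <= 8; i++ {
        i := i
        wg.Add(1)
        go func() {
            defer wg.Done()
            if err := sem.Acquire(ctx, 1); err != nil {
                return // context cancelled before a slot freed up
            }
            defer sem.Release(1)
            fmt.Println("uploading slice", i)
        }()
    }
    wg.Wait()
}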
@@ -512,7 +552,8 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
// step.4 upload the slice
uploadUrl := uploadUrls[0]
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, bytes.NewReader(byteData), isFamily)
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false,
driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)), isFamily)
if err != nil {
return err
}
@@ -569,24 +610,43 @@ func (y *Cloud189PC) RapidUpload(ctx context.Context, dstDir model.Obj, stream m
// Fast upload
func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return nil, err
var (
cache = file.GetFile()
tmpF *os.File
err error
)
size := file.GetSize()
if _, ok := cache.(io.ReaderAt); !ok && size > 0 {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
defer func() {
_ = tmpF.Close()
_ = os.Remove(tmpF.Name())
}()
cache = tmpF
}
var sliceSize = partSize(file.GetSize())
count := int(math.Ceil(float64(file.GetSize()) / float64(sliceSize)))
lastSliceSize := file.GetSize() % sliceSize
if file.GetSize() > 0 && lastSliceSize == 0 {
sliceSize := partSize(size)
count := int(size / sliceSize)
lastSliceSize := size % sliceSize
if lastSliceSize > 0 {
count++
} else {
lastSliceSize = sliceSize
}
// step.1 compute the required hashes up front
byteSize := sliceSize
fileMd5 := md5.New()
silceMd5 := md5.New()
silceMd5Hexs := make([]string, 0, count)
fileMd5 := utils.MD5.NewFunc()
sliceMd5 := utils.MD5.NewFunc()
sliceMd5Hexs := make([]string, 0, count)
partInfos := make([]string, 0, count)
writers := []io.Writer{fileMd5, sliceMd5}
if tmpF != nil {
writers = append(writers, tmpF)
}
written := int64(0)
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
return nil, ctx.Err()
@@ -596,19 +656,31 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
byteSize = lastSliceSize
}
silceMd5.Reset()
if _, err := utils.CopyWithBufferN(io.MultiWriter(fileMd5, silceMd5), tempFile, byteSize); err != nil && err != io.EOF {
n, err := utils.CopyWithBufferN(io.MultiWriter(writers...), file, byteSize)
written += n
if err != nil && err != io.EOF {
return nil, err
}
md5Byte := silceMd5.Sum(nil)
silceMd5Hexs = append(silceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Byte)))
md5Byte := sliceMd5.Sum(nil)
sliceMd5Hexs = append(sliceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Byte)))
partInfos = append(partInfos, fmt.Sprint(i, "-", base64.StdEncoding.EncodeToString(md5Byte)))
sliceMd5.Reset()
}
if tmpF != nil {
if size > 0 && written != size {
return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size= %d, expect = %d ", written, size)
}
_, err = tmpF.Seek(0, io.SeekStart)
if err != nil {
return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0 ")
}
}
fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
sliceMd5Hex := fileMd5Hex
if file.GetSize() > sliceSize {
sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
if size > sliceSize {
sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(sliceMd5Hexs, "\n")))
}
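The two digests computed above follow a two-level scheme: an MD5 over the whole file, plus, when the file spans more than one slice, an MD5 over the newline-joined upper-case per-slice MD5s. A compact standalone illustration of that shape (the 4-byte slice size is only for the demo):

package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "strings"
)

// digest returns the upper-case hex MD5 of data.
func digest(data []byte) string {
    sum := md5.Sum(data)
    return strings.ToUpper(hex.EncodeToString(sum[:]))
}

func main() {
    sliceSize := 4
    data := []byte("hello world!") // 12 bytes -> 3 slices

    fileMd5Hex := digest(data)

    var sliceHexs []string
    for off := 0; off < len(data); off += sliceSize {
        end := off + sliceSize
        if end > len(data) {
            end = len(data)
        }
        sliceHexs = append(sliceHexs, digest(data[off:end]))
    }

    // Single-slice files reuse the file hash, matching the size > sliceSize branch.
    sliceMd5Hex := fileMd5Hex
    if len(data) > sliceSize {
        sliceMd5Hex = digest([]byte(strings.Join(sliceHexs, "\n")))
    }
    fmt.Println(fileMd5Hex, sliceMd5Hex)
}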
fullUrl := UPLOAD_URL
@@ -620,7 +692,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
}
// try to resume upload progress
uploadProgress, ok := base.GetUploadProgress[*UploadProgress](y, y.tokenInfo.SessionKey, fileMd5Hex)
uploadProgress, ok := base.GetUploadProgress[*UploadProgress](y, y.getTokenInfo().SessionKey, fileMd5Hex)
if !ok {
// step.2 pre-upload
params := Params{
@@ -674,7 +746,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
}
// step.4 upload the slice
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, io.NewSectionReader(tempFile, offset, byteSize), isFamily)
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, io.NewSectionReader(cache, offset, byteSize), isFamily)
if err != nil {
return err
}
@@ -687,7 +759,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
if err = threadG.Wait(); err != nil {
if errors.Is(err, context.Canceled) {
uploadProgress.UploadParts = utils.SliceFilter(uploadProgress.UploadParts, func(s string) bool { return s != "" })
base.SaveUploadProgress(y, uploadProgress, y.tokenInfo.SessionKey, fileMd5Hex)
base.SaveUploadProgress(y, uploadProgress, y.getTokenInfo().SessionKey, fileMd5Hex)
}
return nil, err
}
@@ -756,14 +828,11 @@ func (y *Cloud189PC) GetMultiUploadUrls(ctx context.Context, isFamily bool, uplo
// Legacy upload; the family cloud does not support overwrite
func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return nil, err
}
fileMd5, err := utils.HashFile(utils.MD5, tempFile)
tempFile, fileMd5, err := stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return nil, err
}
rateLimited := driver.NewLimitedUploadStream(ctx, io.NopCloser(tempFile))
// create the upload session
uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, file.GetName(), fmt.Sprint(file.GetSize()), isFamily)
@@ -790,7 +859,7 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
}
_, err := y.put(ctx, status.FileUploadUrl, header, true, io.NopCloser(tempFile), isFamily)
_, err := y.put(ctx, status.FileUploadUrl, header, true, rateLimited, isFamily)
if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
return nil, err
}
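stream.CacheFullInTempFileAndHash collapses the old two-pass flow (cache, then hash) into a single read of the stream. A sketch of what such a helper plausibly does — an assumption; the real implementation lives in internal/stream and its exact signature may differ:

package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "io"
    "os"
    "strings"
)

// cacheAndHash spools r into a temp file while feeding an MD5 hasher through
// io.TeeReader, so the source is read exactly once. Caller closes/removes f.
func cacheAndHash(r io.Reader) (f *os.File, md5Hex string, err error) {
    f, err = os.CreateTemp("", "cache-*")
    if err != nil {
        return nil, "", err
    }
    h := md5.New()
    if _, err = io.Copy(f, io.TeeReader(r, h)); err == nil {
        _, err = f.Seek(0, io.SeekStart) // rewind for the upload pass
    }
    if err != nil {
        f.Close()
        os.Remove(f.Name())
        return nil, "", err
    }
    return f, hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
    f, sum, err := cacheAndHash(strings.NewReader("payload"))
    if err != nil {
        panic(err)
    }
    defer os.Remove(f.Name())
    defer f.Close()
    fmt.Println(sum)
}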
@@ -899,8 +968,7 @@ func (y *Cloud189PC) isLogin() bool {
}
// create the family-cloud transfer folder
func (y *Cloud189PC) createFamilyTransferFolder(count int) (*ring.Ring, error) {
folders := ring.New(count)
func (y *Cloud189PC) createFamilyTransferFolder() error {
var rootFolder Cloud189Folder
_, err := y.post(API_URL+"/family/file/createFolder.action", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
@@ -909,81 +977,61 @@ func (y *Cloud189PC) createFamilyTransferFolder(count int) (*ring.Ring, error) {
})
}, &rootFolder, true)
if err != nil {
return nil, err
return err
}
folderCount := 0
// fetch existing folders
files, err := y.getFiles(context.TODO(), rootFolder.GetID(), true)
if err != nil {
return nil, err
}
for _, file := range files {
if folder, ok := file.(*Cloud189Folder); ok {
folders.Value = folder
folders = folders.Next()
folderCount++
}
}
// create new folders
for folderCount < count {
var newFolder Cloud189Folder
_, err := y.post(API_URL+"/family/file/createFolder.action", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"folderName": uuid.NewString(),
"familyId": y.FamilyID,
"parentId": rootFolder.GetID(),
})
}, &newFolder, true)
if err != nil {
return nil, err
}
folders.Value = &newFolder
folders = folders.Next()
folderCount++
}
return folders, nil
y.familyTransferFolder = &rootFolder
return nil
}
// clean up the transfer folder
func (y *Cloud189PC) cleanFamilyTransfer(ctx context.Context) error {
var tasks []BatchTaskInfo
r := y.familyTransferFolder
for p := r.Next(); p != r; p = p.Next() {
folder := p.Value.(*Cloud189Folder)
files, err := y.getFiles(ctx, folder.GetID(), true)
transferFolderId := y.familyTransferFolder.GetID()
for pageNum := 1; ; pageNum++ {
resp, err := y.getFilesWithPage(ctx, transferFolderId, true, pageNum, 100, "lastOpTime", "asc")
if err != nil {
return err
}
for _, file := range files {
// finished fetching; break out
if resp.FileListAO.Count == 0 {
break
}
var tasks []BatchTaskInfo
for i := 0; i < len(resp.FileListAO.FolderList); i++ {
folder := resp.FileListAO.FolderList[i]
tasks = append(tasks, BatchTaskInfo{
FileId: folder.GetID(),
FileName: folder.GetName(),
IsFolder: BoolToNumber(folder.IsDir()),
})
}
for i := 0; i < len(resp.FileListAO.FileList); i++ {
file := resp.FileListAO.FileList[i]
tasks = append(tasks, BatchTaskInfo{
FileId: file.GetID(),
FileName: file.GetName(),
IsFolder: BoolToNumber(file.IsDir()),
})
}
}
if len(tasks) > 0 {
// delete
resp, err := y.CreateBatchTask("DELETE", y.FamilyID, "", nil, tasks...)
if err != nil {
if len(tasks) > 0 {
// delete
resp, err := y.CreateBatchTask("DELETE", y.FamilyID, "", nil, tasks...)
if err != nil {
return err
}
err = y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
if err != nil {
return err
}
// permanently delete
resp, err = y.CreateBatchTask("CLEAR_RECYCLE", y.FamilyID, "", nil, tasks...)
if err != nil {
return err
}
err = y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
return err
}
err = y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
if err != nil {
return err
}
// permanently delete
resp, err = y.CreateBatchTask("CLEAR_RECYCLE", y.FamilyID, "", nil, tasks...)
if err != nil {
return err
}
err = y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
return err
}
return nil
}
@@ -1008,7 +1056,7 @@ func (y *Cloud189PC) getFamilyID() (string, error) {
return "", fmt.Errorf("cannot get automatically,please input family_id")
}
for _, info := range infos {
if strings.Contains(y.tokenInfo.LoginName, info.RemarkName) {
if strings.Contains(y.getTokenInfo().LoginName, info.RemarkName) {
return fmt.Sprint(info.FamilyID), nil
}
}
@@ -1060,6 +1108,34 @@ func (y *Cloud189PC) SaveFamilyFileToPersonCloud(ctx context.Context, familyId s
}
}
// Permanently delete a file
func (y *Cloud189PC) Delete(ctx context.Context, familyId string, srcObj model.Obj) error {
task := BatchTaskInfo{
FileId: srcObj.GetID(),
FileName: srcObj.GetName(),
IsFolder: BoolToNumber(srcObj.IsDir()),
}
// delete the source file
resp, err := y.CreateBatchTask("DELETE", familyId, "", nil, task)
if err != nil {
return err
}
err = y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
if err != nil {
return err
}
// purge the recycle bin
resp, err = y.CreateBatchTask("CLEAR_RECYCLE", familyId, "", nil, task)
if err != nil {
return err
}
err = y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
if err != nil {
return err
}
return nil
}
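Both cleanFamilyTransfer and Delete run the same two-phase teardown: a DELETE batch task moves objects to the recycle bin, then a CLEAR_RECYCLE task purges them, each awaited via WaitBatchTask's sleep-and-poll loop. A toy sketch of that control flow (runBatch simulates CreateBatchTask plus WaitBatchTask; the real calls hit the 189 API):

package main

import (
    "errors"
    "fmt"
    "time"
)

// runBatch pretends to start a server-side batch job and poll it to completion.
func runBatch(kind string) error {
    taskID := fmt.Sprintf("%s-task", kind) // stand-in for resp.TaskID
    for attempt := 0; ; attempt++ {
        if attempt >= 2 { // pretend the third poll reports completion
            fmt.Println(kind, taskID, "finished")
            return nil
        }
        if attempt > 10 {
            return errors.New("batch task timed out")
        }
        time.Sleep(10 * time.Millisecond) // mirrors time.Sleep(t) between polls
    }
}

func main() {
    for _, kind := range []string{"DELETE", "CLEAR_RECYCLE"} {
        if err := runBatch(kind); err != nil {
            panic(err)
        }
    }
}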
func (y *Cloud189PC) CreateBatchTask(aType string, familyID string, targetFolderId string, other map[string]string, taskInfos ...BatchTaskInfo) (*CreateBatchTaskResp, error) {
var resp CreateBatchTaskResp
_, err := y.post(API_URL+"/batch/createBatchTask.action", func(req *resty.Request) {
@@ -1142,3 +1218,17 @@ func (y *Cloud189PC) WaitBatchTask(aType string, taskID string, t time.Duration)
time.Sleep(t)
}
}
func (y *Cloud189PC) getTokenInfo() *AppSessionResp {
if y.ref != nil {
return y.ref.getTokenInfo()
}
return y.tokenInfo
}
func (y *Cloud189PC) getClient() *resty.Client {
if y.ref != nil {
return y.ref.getClient()
}
return y.client
}
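getTokenInfo and getClient complete the reference pattern used throughout this driver: whenever y.ref is set, session state (token, HTTP client, refreshSession) is borrowed from the referenced storage instead of being duplicated. The shape in isolation:

package main

import "fmt"

type account struct {
    token string
    ref   *account // when non-nil, all session state is delegated to ref
}

func (a *account) getToken() string {
    if a.ref != nil {
        return a.ref.getToken() // token refreshes on the primary propagate
    }
    return a.token
}

func main() {
    primary := &account{token: "session-A"}
    mirror := &account{ref: primary}
    fmt.Println(mirror.getToken()) // session-A
    primary.token = "session-B"    // a refresh on the primary...
    fmt.Println(mirror.getToken()) // ...is immediately visible: session-B
}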

View File

@@ -3,6 +3,7 @@ package alias
import (
"context"
"errors"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
@@ -110,14 +111,62 @@ func (d *Alias) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
for _, dst := range dsts {
link, err := d.link(ctx, dst, sub, args)
if err == nil {
if !args.Redirect && len(link.URL) > 0 {
// Normally, multi-threaded download only works for drivers that return a URL.
// Nesting an alias inside an alias lets drivers that return no URL (crypt, mega, ...) support concurrency too.
if d.DownloadConcurrency > 0 {
link.Concurrency = d.DownloadConcurrency
}
if d.DownloadPartSize > 0 {
link.PartSize = d.DownloadPartSize * utils.KB
}
}
return link, nil
}
}
return nil, errs.ObjectNotFound
}
func (d *Alias) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, parentDir, true)
if err == nil {
return fs.MakeDir(ctx, stdpath.Join(*reqPath, dirName))
}
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot make sub-dir")
}
return err
}
func (d *Alias) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be moved")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be moved to")
}
if err != nil {
return err
}
return fs.Move(ctx, *srcPath, *dstPath)
}
func (d *Alias) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
reqPath, err := d.getReqPath(ctx, srcObj)
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, srcObj, false)
if err == nil {
return fs.Rename(ctx, *reqPath, newName)
}
@@ -127,8 +176,33 @@ func (d *Alias) Rename(ctx context.Context, srcObj model.Obj, newName string) er
return err
}
func (d *Alias) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be copied")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be copied to")
}
if err != nil {
return err
}
_, err = fs.Copy(ctx, *srcPath, *dstPath)
return err
}
func (d *Alias) Remove(ctx context.Context, obj model.Obj) error {
reqPath, err := d.getReqPath(ctx, obj)
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, obj, false)
if err == nil {
return fs.Remove(ctx, *reqPath)
}
@@ -138,4 +212,110 @@ func (d *Alias) Remove(ctx context.Context, obj model.Obj) error {
return err
}
func (d *Alias) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, dstDir, true)
if err == nil {
return fs.PutDirectly(ctx, *reqPath, s)
}
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be Put")
}
return err
}
func (d *Alias) PutURL(ctx context.Context, dstDir model.Obj, name, url string) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, dstDir, true)
if err == nil {
return fs.PutURL(ctx, *reqPath, name, url)
}
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot offline download")
}
return err
}
func (d *Alias) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
meta, err := d.getArchiveMeta(ctx, dst, sub, args)
if err == nil {
return meta, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
l, err := d.listArchive(ctx, dst, sub, args)
if err == nil {
return l, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// How do we stay compatible when one of the alias's targets supports driver-side extraction and another does not?
// If the archive sits in a driver without driver extraction, GetArchiveMeta returns errs.NotImplement, the extract URL gets the /ae prefix, and Extract is never called.
// If the archive sits in a driver with driver extraction, GetArchiveMeta returns valid metadata, the extract URL gets the /ad prefix, and Extract is called.
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
link, err := d.extract(ctx, dst, sub, args)
if err == nil {
if !args.Redirect && len(link.URL) > 0 {
if d.DownloadConcurrency > 0 {
link.Concurrency = d.DownloadConcurrency
}
if d.DownloadPartSize > 0 {
link.PartSize = d.DownloadPartSize * utils.KB
}
}
return link, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be decompressed")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be decompressed to")
}
if err != nil {
return err
}
_, err = fs.ArchiveDecompress(ctx, *srcPath, *dstPath, args)
return err
}
var _ driver.Driver = (*Alias)(nil)

View File

@@ -9,15 +9,18 @@ type Addition struct {
// Usually one of two
// driver.RootPath
// define other
Paths string `json:"paths" required:"true" type:"text"`
ProtectSameName bool `json:"protect_same_name" default:"true" required:"false" help:"Protects same-name files from Delete or Rename"`
Paths string `json:"paths" required:"true" type:"text"`
ProtectSameName bool `json:"protect_same_name" default:"true" required:"false" help:"Protects same-name files from Delete or Rename"`
DownloadConcurrency int `json:"download_concurrency" default:"0" required:"false" type:"number" help:"Need to enable proxy"`
DownloadPartSize int `json:"download_part_size" default:"0" type:"number" required:"false" help:"Need to enable proxy. Unit: KB"`
Writable bool `json:"writable" type:"bool" default:"false"`
}
var config = driver.Config{
Name: "Alias",
LocalSort: true,
NoCache: true,
NoUpload: true,
NoUpload: false,
DefaultRoot: "/",
ProxyRangeOption: true,
}

View File

@@ -3,12 +3,15 @@ package alias
import (
"context"
"fmt"
"net/url"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/sign"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
@@ -62,6 +65,7 @@ func (d *Alias) get(ctx context.Context, path string, dst, sub string) (model.Ob
Size: obj.GetSize(),
Modified: obj.ModTime(),
IsFolder: obj.IsDir(),
HashInfo: obj.GetHash(),
}, nil
}
@@ -94,10 +98,15 @@ func (d *Alias) list(ctx context.Context, dst, sub string, args *fs.ListArgs) ([
func (d *Alias) link(ctx context.Context, dst, sub string, args model.LinkArgs) (*model.Link, error) {
reqPath := stdpath.Join(dst, sub)
storage, err := fs.GetStorage(reqPath, &fs.GetStoragesArgs{})
// modeled after the crypt driver
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(*Alias); !ok && !args.Redirect {
link, _, err := op.Link(ctx, storage, reqActualPath, args)
return link, err
}
_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
if err != nil {
return nil, err
@@ -114,13 +123,13 @@ func (d *Alias) link(ctx context.Context, dst, sub string, args model.LinkArgs)
}
return link, nil
}
link, _, err := fs.Link(ctx, reqPath, args)
link, _, err := op.Link(ctx, storage, reqActualPath, args)
return link, err
}
func (d *Alias) getReqPath(ctx context.Context, obj model.Obj) (*string, error) {
func (d *Alias) getReqPath(ctx context.Context, obj model.Obj, isParent bool) (*string, error) {
root, sub := d.getRootAndPath(obj.GetPath())
if sub == "" {
if sub == "" && !isParent {
return nil, errs.NotSupport
}
dsts, ok := d.pathMap[root]
@@ -149,3 +158,68 @@ func (d *Alias) getReqPath(ctx context.Context, obj model.Obj) (*string, error)
}
return reqPath, nil
}
func (d *Alias) getArchiveMeta(ctx context.Context, dst, sub string, args model.ArchiveArgs) (model.ArchiveMeta, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
return op.GetArchiveMeta(ctx, storage, reqActualPath, model.ArchiveMetaArgs{
ArchiveArgs: args,
Refresh: true,
})
}
return nil, errs.NotImplement
}
func (d *Alias) listArchive(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) ([]model.Obj, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
return op.ListArchive(ctx, storage, reqActualPath, model.ArchiveListArgs{
ArchiveInnerArgs: args,
Refresh: true,
})
}
return nil, errs.NotImplement
}
func (d *Alias) extract(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) (*model.Link, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
if _, ok := storage.(*Alias); !ok && !args.Redirect {
link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
return link, err
}
_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
if err != nil {
return nil, err
}
if common.ShouldProxy(storage, stdpath.Base(sub)) {
link := &model.Link{
URL: fmt.Sprintf("%s/ap%s?inner=%s&pass=%s&sign=%s",
common.GetApiUrl(args.HttpReq),
utils.EncodePath(reqPath, true),
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password),
sign.SignArchive(reqPath)),
}
if args.HttpReq != nil && d.ProxyRange {
link.RangeReadCloser = common.NoProxyRange
}
return link, nil
}
link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
return link, err
}
return nil, errs.NotImplement
}

View File

@@ -5,12 +5,14 @@ import (
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
@@ -34,7 +36,7 @@ func (d *AListV3) GetAddition() driver.Additional {
func (d *AListV3) Init(ctx context.Context) error {
d.Addition.Address = strings.TrimSuffix(d.Addition.Address, "/")
var resp common.Resp[MeResp]
_, err := d.request("/me", http.MethodGet, func(req *resty.Request) {
_, _, err := d.request("/me", http.MethodGet, func(req *resty.Request) {
req.SetResult(&resp)
})
if err != nil {
@@ -48,15 +50,15 @@ func (d *AListV3) Init(ctx context.Context) error {
}
}
// re-get the user info
_, err = d.request("/me", http.MethodGet, func(req *resty.Request) {
_, _, err = d.request("/me", http.MethodGet, func(req *resty.Request) {
req.SetResult(&resp)
})
if err != nil {
return err
}
if resp.Data.Role == model.GUEST {
url := d.Address + "/api/public/settings"
res, err := base.RestyClient.R().Get(url)
if utils.SliceContains(resp.Data.Role, model.GUEST) {
u := d.Address + "/api/public/settings"
res, err := base.RestyClient.R().Get(u)
if err != nil {
return err
}
@@ -74,7 +76,7 @@ func (d *AListV3) Drop(ctx context.Context) error {
func (d *AListV3) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var resp common.Resp[FsListResp]
_, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ListReq{
PageReq: model.PageReq{
Page: 1,
@@ -116,7 +118,7 @@ func (d *AListV3) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
userAgent = base.UserAgent
}
}
_, err := d.request("/fs/get", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/get", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(FsGetReq{
Path: file.GetPath(),
Password: d.MetaPassword,
@@ -131,7 +133,7 @@ func (d *AListV3) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
}
func (d *AListV3) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
_, err := d.request("/fs/mkdir", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/mkdir", http.MethodPost, func(req *resty.Request) {
req.SetBody(MkdirOrLinkReq{
Path: path.Join(parentDir.GetPath(), dirName),
})
@@ -140,7 +142,7 @@ func (d *AListV3) MakeDir(ctx context.Context, parentDir model.Obj, dirName stri
}
func (d *AListV3) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/fs/move", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/move", http.MethodPost, func(req *resty.Request) {
req.SetBody(MoveCopyReq{
SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(),
@@ -151,7 +153,7 @@ func (d *AListV3) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
}
func (d *AListV3) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
_, err := d.request("/fs/rename", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/rename", http.MethodPost, func(req *resty.Request) {
req.SetBody(RenameReq{
Path: srcObj.GetPath(),
Name: newName,
@@ -161,7 +163,7 @@ func (d *AListV3) Rename(ctx context.Context, srcObj model.Obj, newName string)
}
func (d *AListV3) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/fs/copy", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/copy", http.MethodPost, func(req *resty.Request) {
req.SetBody(MoveCopyReq{
SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(),
@@ -172,7 +174,7 @@ func (d *AListV3) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
}
func (d *AListV3) Remove(ctx context.Context, obj model.Obj) error {
_, err := d.request("/fs/remove", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/remove", http.MethodPost, func(req *resty.Request) {
req.SetBody(RemoveReq{
Dir: path.Dir(obj.GetPath()),
Names: []string{obj.GetName()},
@@ -181,16 +183,29 @@ func (d *AListV3) Remove(ctx context.Context, obj model.Obj) error {
return err
}
func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
req, err := http.NewRequestWithContext(ctx, http.MethodPut, d.Address+"/api/fs/put", stream)
func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
})
req, err := http.NewRequestWithContext(ctx, http.MethodPut, d.Address+"/api/fs/put", reader)
if err != nil {
return err
}
req.Header.Set("Authorization", d.Token)
req.Header.Set("File-Path", path.Join(dstDir.GetPath(), stream.GetName()))
req.Header.Set("File-Path", path.Join(dstDir.GetPath(), s.GetName()))
req.Header.Set("Password", d.MetaPassword)
if md5 := s.GetHash().GetHash(utils.MD5); len(md5) > 0 {
req.Header.Set("X-File-Md5", md5)
}
if sha1 := s.GetHash().GetHash(utils.SHA1); len(sha1) > 0 {
req.Header.Set("X-File-Sha1", sha1)
}
if sha256 := s.GetHash().GetHash(utils.SHA256); len(sha256) > 0 {
req.Header.Set("X-File-Sha256", sha256)
}
req.ContentLength = stream.GetSize()
req.ContentLength = s.GetSize()
// client := base.NewHttpClient()
// client.Timeout = time.Hour * 6
res, err := base.HttpClient.Do(req)
@@ -219,6 +234,127 @@ func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
return nil
}
func (d *AListV3) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveMetaResp]
_, code, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var tree []model.ObjTree
if resp.Data.Content != nil {
tree = make([]model.ObjTree, 0, len(resp.Data.Content))
for _, content := range resp.Data.Content {
tree = append(tree, &content)
}
}
return &model.ArchiveMetaInfo{
Comment: resp.Data.Comment,
Encrypted: resp.Data.Encrypted,
Tree: tree,
}, nil
}
func (d *AListV3) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveListResp]
_, code, err := d.request("/fs/archive/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveListReq{
ArchiveMetaReq: ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
},
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
InnerPath: args.InnerPath,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *AListV3) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotSupport
}
var resp common.Resp[ArchiveMetaResp]
_, _, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
})
})
if err != nil {
return nil, err
}
return &model.Link{
URL: fmt.Sprintf("%s?inner=%s&pass=%s&sign=%s",
resp.Data.RawURL,
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password),
resp.Data.Sign),
}, nil
}
func (d *AListV3) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) error {
if !d.ForwardArchiveReq {
return errs.NotImplement
}
dir, name := path.Split(srcObj.GetPath())
_, _, err := d.request("/fs/archive/decompress", http.MethodPost, func(req *resty.Request) {
req.SetBody(DecompressReq{
ArchivePass: args.Password,
CacheFull: args.CacheFull,
DstDir: dstDir.GetPath(),
InnerPath: args.InnerPath,
Name: []string{name},
PutIntoNewDir: args.PutIntoNewDir,
SrcDir: dir,
})
})
return err
}
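ArchiveDecompress relies on path.Split, which keeps the trailing slash on the directory component — exactly what the SrcDir field receives. A one-liner demonstrating the split:

package main

import (
    "fmt"
    "path"
)

func main() {
    dir, name := path.Split("/archives/backup.zip")
    fmt.Printf("dir=%q name=%q\n", dir, name) // dir="/archives/" name="backup.zip"
}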
//func (d *AList) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}

View File

@@ -7,12 +7,13 @@ import (
type Addition struct {
driver.RootPath
Address string `json:"url" required:"true"`
MetaPassword string `json:"meta_password"`
Username string `json:"username"`
Password string `json:"password"`
Token string `json:"token"`
PassUAToUpsteam bool `json:"pass_ua_to_upsteam" default:"true"`
Address string `json:"url" required:"true"`
MetaPassword string `json:"meta_password"`
Username string `json:"username"`
Password string `json:"password"`
Token string `json:"token"`
PassUAToUpsteam bool `json:"pass_ua_to_upsteam" default:"true"`
ForwardArchiveReq bool `json:"forward_archive_requests" default:"true"`
}
var config = driver.Config{

View File

@@ -1,9 +1,11 @@
package alist_v3
import (
"encoding/json"
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
)
type ListReq struct {
@@ -71,13 +73,113 @@ type LoginResp struct {
}
type MeResp struct {
Id int `json:"id"`
Username string `json:"username"`
Password string `json:"password"`
BasePath string `json:"base_path"`
Role int `json:"role"`
Disabled bool `json:"disabled"`
Permission int `json:"permission"`
SsoId string `json:"sso_id"`
Otp bool `json:"otp"`
Id int `json:"id"`
Username string `json:"username"`
Password string `json:"password"`
BasePath string `json:"base_path"`
Role IntSlice `json:"role"`
Disabled bool `json:"disabled"`
Permission int `json:"permission"`
SsoId string `json:"sso_id"`
Otp bool `json:"otp"`
}
type ArchiveMetaReq struct {
ArchivePass string `json:"archive_pass"`
Password string `json:"password"`
Path string `json:"path"`
Refresh bool `json:"refresh"`
}
type TreeResp struct {
ObjResp
Children []TreeResp `json:"children"`
hashCache *utils.HashInfo
}
func (t *TreeResp) GetSize() int64 {
return t.Size
}
func (t *TreeResp) GetName() string {
return t.Name
}
func (t *TreeResp) ModTime() time.Time {
return t.Modified
}
func (t *TreeResp) CreateTime() time.Time {
return t.Created
}
func (t *TreeResp) IsDir() bool {
return t.ObjResp.IsDir
}
func (t *TreeResp) GetHash() utils.HashInfo {
return utils.FromString(t.HashInfo)
}
func (t *TreeResp) GetID() string {
return ""
}
func (t *TreeResp) GetPath() string {
return ""
}
func (t *TreeResp) GetChildren() []model.ObjTree {
ret := make([]model.ObjTree, 0, len(t.Children))
for _, child := range t.Children {
ret = append(ret, &child)
}
return ret
}
func (t *TreeResp) Thumb() string {
return t.ObjResp.Thumb
}
type ArchiveMetaResp struct {
Comment string `json:"comment"`
Encrypted bool `json:"encrypted"`
Content []TreeResp `json:"content"`
RawURL string `json:"raw_url"`
Sign string `json:"sign"`
}
type ArchiveListReq struct {
model.PageReq
ArchiveMetaReq
InnerPath string `json:"inner_path"`
}
type ArchiveListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
}
type DecompressReq struct {
ArchivePass string `json:"archive_pass"`
CacheFull bool `json:"cache_full"`
DstDir string `json:"dst_dir"`
InnerPath string `json:"inner_path"`
Name []string `json:"name"`
PutIntoNewDir bool `json:"put_into_new_dir"`
SrcDir string `json:"src_dir"`
}
type IntSlice []int
func (s *IntSlice) UnmarshalJSON(data []byte) error {
if len(data) > 0 && data[0] == '[' {
return json.Unmarshal(data, (*[]int)(s))
}
var single int
if err := json.Unmarshal(data, &single); err != nil {
return err
}
*s = []int{single}
return nil
}
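IntSlice exists because role arrives as a bare integer from older servers and as an array from newer ones; the custom UnmarshalJSON branches on the leading '[' and normalizes both to a slice. A runnable round-trip using the same decoder:

package main

import (
    "encoding/json"
    "fmt"
)

type IntSlice []int

func (s *IntSlice) UnmarshalJSON(data []byte) error {
    if len(data) > 0 && data[0] == '[' {
        return json.Unmarshal(data, (*[]int)(s))
    }
    var single int
    if err := json.Unmarshal(data, &single); err != nil {
        return err
    }
    *s = []int{single}
    return nil
}

func main() {
    var legacy, current IntSlice
    _ = json.Unmarshal([]byte(`2`), &legacy)       // old scalar form
    _ = json.Unmarshal([]byte(`[1, 2]`), &current) // new array form
    fmt.Println(legacy, current) // [2] [1 2]
}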

View File

@@ -17,7 +17,7 @@ func (d *AListV3) login() error {
return nil
}
var resp common.Resp[LoginResp]
_, err := d.request("/auth/login", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/auth/login", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(base.Json{
"username": d.Username,
"password": d.Password,
@@ -31,7 +31,7 @@ func (d *AListV3) login() error {
return nil
}
func (d *AListV3) request(api, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
func (d *AListV3) request(api, method string, callback base.ReqCallback, retry ...bool) ([]byte, int, error) {
url := d.Address + "/api" + api
req := base.RestyClient.R()
req.SetHeader("Authorization", d.Token)
@@ -40,22 +40,26 @@ func (d *AListV3) request(api, method string, callback base.ReqCallback, retry .
}
res, err := req.Execute(method, url)
if err != nil {
return nil, err
code := 0
if res != nil {
code = res.StatusCode()
}
return nil, code, err
}
log.Debugf("[alist_v3] response body: %s", res.String())
if res.StatusCode() >= 400 {
return nil, fmt.Errorf("request failed, status: %s", res.Status())
return nil, res.StatusCode(), fmt.Errorf("request failed, status: %s", res.Status())
}
code := utils.Json.Get(res.Body(), "code").ToInt()
if code != 200 {
if (code == 401 || code == 403) && !utils.IsBool(retry...) {
err = d.login()
if err != nil {
return nil, err
return nil, code, err
}
return d.request(api, method, callback, true)
}
return nil, fmt.Errorf("request failed,code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
return nil, code, fmt.Errorf("request failed,code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
}
return res.Body(), nil
return res.Body(), 200, nil
}
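request now returns the status code alongside the body so callers such as GetArchiveMeta can map 202 to errs.WrongArchivePassword even though an error is also set. A sketch of consuming that three-value shape (request is a stub; errWrongArchivePassword stands in for errs.WrongArchivePassword):

package main

import (
    "errors"
    "fmt"
)

var errWrongArchivePassword = errors.New("wrong archive password")

// request has the same shape as the driver helper: body, status code, error.
func request(api string) ([]byte, int, error) {
    return nil, 202, errors.New("request failed, status: 202")
}

func getArchiveMeta() error {
    _, code, err := request("/fs/archive/meta")
    if code == 202 {
        // The code is inspected before err: 202 is meaningful on its own.
        return errWrongArchivePassword
    }
    return err
}

func main() {
    fmt.Println(getArchiveMeta())
}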

View File

@@ -14,13 +14,12 @@ import (
"os"
"time"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/cron"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
@@ -56,7 +55,7 @@ func (d *AliDrive) Init(ctx context.Context) error {
if err != nil {
return err
}
d.DriveId = utils.Json.Get(res, "default_drive_id").ToString()
d.DriveId = d.Addition.DeviceID
d.UserID = utils.Json.Get(res, "user_id").ToString()
d.cron = cron.NewCron(time.Hour * 2)
d.cron.Do(func() {
@@ -194,7 +193,10 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
}
if d.RapidUpload {
buf := bytes.NewBuffer(make([]byte, 0, 1024))
utils.CopyWithBufferN(buf, file, 1024)
_, err := utils.CopyWithBufferN(buf, file, 1024)
if err != nil {
return err
}
reqBody["pre_hash"] = utils.HashData(utils.SHA1, buf.Bytes())
if localFile != nil {
if _, err := localFile.Seek(0, io.SeekStart); err != nil {
@@ -286,6 +288,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
file.Reader = localFile
}
rateLimited := driver.NewLimitedUploadStream(ctx, file)
for i, partInfo := range resp.PartInfoList {
if utils.IsCanceled(ctx) {
return ctx.Err()
@@ -294,7 +297,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
if d.InternalUpload {
url = partInfo.InternalUploadUrl
}
req, err := http.NewRequest("PUT", url, io.LimitReader(file, DEFAULT))
req, err := http.NewRequest("PUT", url, io.LimitReader(rateLimited, DEFAULT))
if err != nil {
return err
}
@@ -303,7 +306,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
if err != nil {
return err
}
res.Body.Close()
_ = res.Body.Close()
if count > 0 {
up(float64(i) * 100 / float64(count))
}

View File

@@ -7,8 +7,8 @@ import (
type Addition struct {
driver.RootID
RefreshToken string `json:"refresh_token" required:"true"`
//DeviceID string `json:"device_id" required:"true"`
RefreshToken string `json:"refresh_token" required:"true"`
DeviceID string `json:"device_id" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
RapidUpload bool `json:"rapid_upload"`

View File

@@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"net/http"
"path/filepath"
"time"
"github.com/Xhofe/rateg"
@@ -14,17 +15,18 @@ import (
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
)
type AliyundriveOpen struct {
model.Storage
Addition
base string
DriveId string
limitList func(ctx context.Context, data base.Json) (*Files, error)
limitLink func(ctx context.Context, file model.Obj) (*model.Link, error)
ref *AliyundriveOpen
}
func (d *AliyundriveOpen) Config() driver.Config {
@@ -58,10 +60,32 @@ func (d *AliyundriveOpen) Init(ctx context.Context) error {
return nil
}
func (d *AliyundriveOpen) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*AliyundriveOpen)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *AliyundriveOpen) Drop(ctx context.Context) error {
d.ref = nil
return nil
}
// GetRoot implements the driver.GetRooter interface to properly set up the root object
func (d *AliyundriveOpen) GetRoot(ctx context.Context) (model.Obj, error) {
return &model.Object{
ID: d.RootFolderID,
Path: "/",
Name: "root",
Size: 0,
Modified: d.Modified,
IsFolder: true,
}, nil
}
func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if d.limitList == nil {
return nil, fmt.Errorf("driver not init")
@@ -70,9 +94,17 @@ func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.Li
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return fileToObj(src), nil
objs, err := utils.SliceConvert(files, func(src File) (model.Obj, error) {
obj := fileToObj(src)
// Set the correct path for the object
if dir.GetPath() != "" {
obj.Path = filepath.Join(dir.GetPath(), obj.GetName())
}
return obj, nil
})
return objs, err
}
func (d *AliyundriveOpen) link(ctx context.Context, file model.Obj) (*model.Link, error) {
@@ -122,7 +154,16 @@ func (d *AliyundriveOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirN
if err != nil {
return nil, err
}
return fileToObj(newDir), nil
obj := fileToObj(newDir)
// Set the correct Path for the returned directory object
if parentDir.GetPath() != "" {
obj.Path = filepath.Join(parentDir.GetPath(), dirName)
} else {
obj.Path = "/" + dirName
}
return obj, nil
}
func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
@@ -132,20 +173,24 @@ func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (m
"drive_id": d.DriveId,
"file_id": srcObj.GetID(),
"to_parent_file_id": dstDir.GetID(),
"check_name_mode": "refuse", // optional:ignore,auto_rename,refuse
"check_name_mode": "ignore", // optional:ignore,auto_rename,refuse
//"new_name": "newName", // The new name to use when a file of the same name exists
}).SetResult(&resp)
})
if err != nil {
return nil, err
}
if resp.Exist {
return nil, errors.New("existence of files with the same name")
}
if srcObj, ok := srcObj.(*model.ObjThumb); ok {
srcObj.ID = resp.FileID
srcObj.Modified = time.Now()
srcObj.Path = filepath.Join(dstDir.GetPath(), srcObj.GetName())
// Check for duplicate files in the destination directory
if err := d.removeDuplicateFiles(ctx, dstDir.GetPath(), srcObj.GetName(), srcObj.GetID()); err != nil {
// Only log a warning instead of returning an error since the move operation has already completed successfully
log.Warnf("Failed to remove duplicate files after move: %v", err)
}
return srcObj, nil
}
return nil, nil
@@ -163,19 +208,47 @@ func (d *AliyundriveOpen) Rename(ctx context.Context, srcObj model.Obj, newName
if err != nil {
return nil, err
}
return fileToObj(newFile), nil
// Check for duplicate files in the parent directory
parentPath := filepath.Dir(srcObj.GetPath())
if err := d.removeDuplicateFiles(ctx, parentPath, newName, newFile.FileId); err != nil {
// Only log a warning instead of returning an error since the rename operation has already completed successfully
log.Warnf("Failed to remove duplicate files after rename: %v", err)
}
obj := fileToObj(newFile)
// Set the correct Path for the renamed object
if parentPath != "" && parentPath != "." {
obj.Path = filepath.Join(parentPath, newName)
} else {
obj.Path = "/" + newName
}
return obj, nil
}
func (d *AliyundriveOpen) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
var resp MoveOrCopyResp
_, err := d.request("/adrive/v1.0/openFile/copy", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": srcObj.GetID(),
"to_parent_file_id": dstDir.GetID(),
"auto_rename": true,
})
"auto_rename": false,
}).SetResult(&resp)
})
return err
if err != nil {
return err
}
// Check for duplicate files in the destination directory
if err := d.removeDuplicateFiles(ctx, dstDir.GetPath(), srcObj.GetName(), resp.FileID); err != nil {
// Only log a warning instead of returning an error since the copy operation has already completed successfully
log.Warnf("Failed to remove duplicate files after copy: %v", err)
}
return nil
}
func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
@@ -193,7 +266,18 @@ func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
}
func (d *AliyundriveOpen) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
return d.upload(ctx, dstDir, stream, up)
obj, err := d.upload(ctx, dstDir, stream, up)
// Set the correct Path for the returned file object
if obj != nil && obj.GetPath() == "" {
if dstDir.GetPath() != "" {
if objWithPath, ok := obj.(model.SetPath); ok {
objWithPath.SetPath(filepath.Join(dstDir.GetPath(), obj.GetName()))
}
}
}
return obj, err
}
func (d *AliyundriveOpen) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
@@ -225,3 +309,4 @@ var _ driver.MkdirResult = (*AliyundriveOpen)(nil)
var _ driver.MoveResult = (*AliyundriveOpen)(nil)
var _ driver.RenameResult = (*AliyundriveOpen)(nil)
var _ driver.PutResult = (*AliyundriveOpen)(nil)
var _ driver.GetRooter = (*AliyundriveOpen)(nil)

View File

@@ -11,7 +11,7 @@ type Addition struct {
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
OauthTokenURL string `json:"oauth_token_url" default:"https://api.nn.ci/alist/ali_open/token"`
OauthTokenURL string `json:"oauth_token_url" default:"https://api.alistgo.com/alist/ali_open/token"`
ClientID string `json:"client_id" required:"false" help:"Keep it empty if you don't have one"`
ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
RemoveWay string `json:"remove_way" required:"true" type:"select" options:"trash,delete"`
@@ -32,11 +32,10 @@ var config = driver.Config{
DefaultRoot: "root",
NoOverwriteUpload: true,
}
var API_URL = "https://openapi.alipan.com"
func init() {
op.RegisterDriver(func() driver.Driver {
return &AliyundriveOpen{
base: "https://openapi.alipan.com",
}
return &AliyundriveOpen{}
})
}

View File

@@ -1,7 +1,6 @@
package aliyundrive_open
import (
"bytes"
"context"
"encoding/base64"
"fmt"
@@ -15,6 +14,7 @@ import (
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/avast/retry-go"
@@ -77,7 +77,7 @@ func (d *AliyundriveOpen) uploadPart(ctx context.Context, r io.Reader, partInfo
if err != nil {
return err
}
res.Body.Close()
_ = res.Body.Close()
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusConflict {
return fmt.Errorf("upload status: %d", res.StatusCode)
}
@@ -126,21 +126,24 @@ func getProofRange(input string, size int64) (*ProofRange, error) {
}
func (d *AliyundriveOpen) calProofCode(stream model.FileStreamer) (string, error) {
proofRange, err := getProofRange(d.AccessToken, stream.GetSize())
proofRange, err := getProofRange(d.getAccessToken(), stream.GetSize())
if err != nil {
return "", err
}
length := proofRange.End - proofRange.Start
buf := bytes.NewBuffer(make([]byte, 0, length))
reader, err := stream.RangeRead(http_range.Range{Start: proofRange.Start, Length: length})
if err != nil {
return "", err
}
_, err = utils.CopyWithBufferN(buf, reader, length)
buf := make([]byte, length)
n, err := io.ReadFull(reader, buf)
if err == io.ErrUnexpectedEOF {
return "", fmt.Errorf("can't read data, expected=%d, got=%d", len(buf), n)
}
if err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
return base64.StdEncoding.EncodeToString(buf), nil
}
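The switch from a growable bytes.Buffer to a fixed buffer with io.ReadFull makes a short read an explicit error instead of a silently truncated proof code. The failure mode in isolation:

package main

import (
    "fmt"
    "io"
    "strings"
)

func main() {
    r := strings.NewReader("abc") // pretend the range reader came up short
    buf := make([]byte, 8)        // proofRange said to expect 8 bytes
    n, err := io.ReadFull(r, buf)
    if err == io.ErrUnexpectedEOF {
        fmt.Printf("can't read data, expected=%d, got=%d\n", len(buf), n)
        return
    }
    if err != nil {
        panic(err)
    }
    fmt.Println("read", n, "bytes")
}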
func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
@@ -183,25 +186,18 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
_, err, e := d.requestReturnErrResp("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
req.SetBody(createData).SetResult(&createResp)
})
var tmpF model.File
if err != nil {
if e.Code != "PreHashMatched" || !rapidUpload {
return nil, err
}
log.Debugf("[aliyundrive_open] pre_hash matched, start rapid upload")
hi := stream.GetHash()
hash := hi.GetHash(utils.SHA1)
if len(hash) <= 0 {
tmpF, err = stream.CacheFullInTempFile()
hash := stream.GetHash().GetHash(utils.SHA1)
if len(hash) != utils.SHA1.Width {
_, hash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA1)
if err != nil {
return nil, err
}
hash, err = utils.HashFile(utils.SHA1, tmpF)
if err != nil {
return nil, err
}
}
delete(createData, "pre_hash")
@@ -251,8 +247,9 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
rd = utils.NewMultiReadable(srd)
}
err = retry.Do(func() error {
rd.Reset()
return d.uploadPart(ctx, rd, createResp.PartInfoList[i])
_ = rd.Reset()
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
return d.uploadPart(ctx, rateLimitedRd, createResp.PartInfoList[i])
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),

View File

@@ -10,6 +10,7 @@ import (
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
@@ -19,7 +20,7 @@ import (
// do others that not defined in Driver interface
func (d *AliyundriveOpen) _refreshToken() (string, string, error) {
url := d.base + "/oauth/access_token"
url := API_URL + "/oauth/access_token"
if d.OauthTokenURL != "" && d.ClientID == "" {
url = d.OauthTokenURL
}
@@ -74,6 +75,9 @@ func getSub(token string) (string, error) {
}
func (d *AliyundriveOpen) refreshToken() error {
if d.ref != nil {
return d.ref.refreshToken()
}
refresh, access, err := d._refreshToken()
for i := 0; i < 3; i++ {
if err == nil {
@@ -100,7 +104,7 @@ func (d *AliyundriveOpen) request(uri, method string, callback base.ReqCallback,
func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base.ReqCallback, retry ...bool) ([]byte, error, *ErrResp) {
req := base.RestyClient.R()
// TODO check whether access_token is expired
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
req.SetHeader("Authorization", "Bearer "+d.getAccessToken())
if method == http.MethodPost {
req.SetHeader("Content-Type", "application/json")
}
@@ -109,7 +113,7 @@ func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base
}
var e ErrResp
req.SetError(&e)
res, err := req.Execute(method, d.base+uri)
res, err := req.Execute(method, API_URL+uri)
if err != nil {
if res != nil {
log.Errorf("[aliyundrive_open] request error: %s", res.String())
@@ -118,7 +122,7 @@ func (d *AliyundriveOpen) requestReturnErrResp(uri, method string, callback base
}
isRetry := len(retry) > 0 && retry[0]
if e.Code != "" {
if !isRetry && (utils.SliceContains([]string{"AccessTokenInvalid", "AccessTokenExpired", "I400JD"}, e.Code) || d.AccessToken == "") {
if !isRetry && (utils.SliceContains([]string{"AccessTokenInvalid", "AccessTokenExpired", "I400JD"}, e.Code) || d.getAccessToken() == "") {
err = d.refreshToken()
if err != nil {
return nil, err, nil
@@ -176,3 +180,43 @@ func getNowTime() (time.Time, string) {
nowTimeStr := nowTime.Format("2006-01-02T15:04:05.000Z")
return nowTime, nowTimeStr
}
func (d *AliyundriveOpen) getAccessToken() string {
if d.ref != nil {
return d.ref.getAccessToken()
}
return d.AccessToken
}
// Remove duplicate files with the same name in the given directory path,
// preserving the file with the given skipID if provided
func (d *AliyundriveOpen) removeDuplicateFiles(ctx context.Context, parentPath string, fileName string, skipID string) error {
// Handle empty path (root directory) case
if parentPath == "" {
parentPath = "/"
}
// List all files in the parent directory
files, err := op.List(ctx, d, parentPath, model.ListArgs{})
if err != nil {
return err
}
// Find all files with the same name
var duplicates []model.Obj
for _, file := range files {
if file.GetName() == fileName && file.GetID() != skipID {
duplicates = append(duplicates, file)
}
}
// Remove all duplicates files, except the file with the given ID
for _, file := range duplicates {
err := d.Remove(ctx, file)
if err != nil {
return err
}
}
return nil
}
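The selection step of removeDuplicateFiles — same display name, different ID than the survivor — is easy to test in isolation (obj is a hypothetical stand-in for model.Obj):

package main

import "fmt"

type obj struct{ id, name string }

// pickDuplicates mirrors the loop above: keep skipID, flag everything else
// that shares the name.
func pickDuplicates(files []obj, name, skipID string) []obj {
    var dups []obj
    for _, f := range files {
        if f.name == name && f.id != skipID {
            dups = append(dups, f)
        }
    }
    return dups
}

func main() {
    files := []obj{{"1", "a.txt"}, {"2", "a.txt"}, {"3", "b.txt"}}
    fmt.Println(pickDuplicates(files, "a.txt", "2")) // [{1 a.txt}]
}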

View File

@@ -2,9 +2,11 @@ package drivers
import (
_ "github.com/alist-org/alist/v3/drivers/115"
_ "github.com/alist-org/alist/v3/drivers/115_open"
_ "github.com/alist-org/alist/v3/drivers/115_share"
_ "github.com/alist-org/alist/v3/drivers/123"
_ "github.com/alist-org/alist/v3/drivers/123_link"
_ "github.com/alist-org/alist/v3/drivers/123_open"
_ "github.com/alist-org/alist/v3/drivers/123_share"
_ "github.com/alist-org/alist/v3/drivers/139"
_ "github.com/alist-org/alist/v3/drivers/189"
@@ -15,15 +17,23 @@ import (
_ "github.com/alist-org/alist/v3/drivers/aliyundrive"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_open"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_share"
_ "github.com/alist-org/alist/v3/drivers/azure_blob"
_ "github.com/alist-org/alist/v3/drivers/baidu_netdisk"
_ "github.com/alist-org/alist/v3/drivers/baidu_photo"
_ "github.com/alist-org/alist/v3/drivers/baidu_share"
_ "github.com/alist-org/alist/v3/drivers/bitqiu"
_ "github.com/alist-org/alist/v3/drivers/chaoxing"
_ "github.com/alist-org/alist/v3/drivers/cloudreve"
_ "github.com/alist-org/alist/v3/drivers/cloudreve_v4"
_ "github.com/alist-org/alist/v3/drivers/crypt"
_ "github.com/alist-org/alist/v3/drivers/doubao"
_ "github.com/alist-org/alist/v3/drivers/doubao_share"
_ "github.com/alist-org/alist/v3/drivers/dropbox"
_ "github.com/alist-org/alist/v3/drivers/febbox"
_ "github.com/alist-org/alist/v3/drivers/ftp"
_ "github.com/alist-org/alist/v3/drivers/github"
_ "github.com/alist-org/alist/v3/drivers/github_releases"
_ "github.com/alist-org/alist/v3/drivers/gofile"
_ "github.com/alist-org/alist/v3/drivers/google_drive"
_ "github.com/alist-org/alist/v3/drivers/google_photo"
_ "github.com/alist-org/alist/v3/drivers/halalcloud"
@@ -33,15 +43,19 @@ import (
_ "github.com/alist-org/alist/v3/drivers/lanzou"
_ "github.com/alist-org/alist/v3/drivers/lenovonas_share"
_ "github.com/alist-org/alist/v3/drivers/local"
_ "github.com/alist-org/alist/v3/drivers/mediafire"
_ "github.com/alist-org/alist/v3/drivers/mediatrack"
_ "github.com/alist-org/alist/v3/drivers/mega"
_ "github.com/alist-org/alist/v3/drivers/misskey"
_ "github.com/alist-org/alist/v3/drivers/mopan"
_ "github.com/alist-org/alist/v3/drivers/netease_music"
_ "github.com/alist-org/alist/v3/drivers/onedrive"
_ "github.com/alist-org/alist/v3/drivers/onedrive_app"
_ "github.com/alist-org/alist/v3/drivers/onedrive_sharelink"
_ "github.com/alist-org/alist/v3/drivers/pcloud"
_ "github.com/alist-org/alist/v3/drivers/pikpak"
_ "github.com/alist-org/alist/v3/drivers/pikpak_share"
_ "github.com/alist-org/alist/v3/drivers/proton_drive"
_ "github.com/alist-org/alist/v3/drivers/quark_uc"
_ "github.com/alist-org/alist/v3/drivers/quark_uc_tv"
_ "github.com/alist-org/alist/v3/drivers/quqi"

View File

@@ -0,0 +1,313 @@
package azure_blob
import (
"context"
"fmt"
"io"
"path"
"regexp"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
)
// Azure Blob Storage based on the blob APIs
// Link: https://learn.microsoft.com/rest/api/storageservices/blob-service-rest-api
type AzureBlob struct {
model.Storage
Addition
client *azblob.Client
containerClient *container.Client
config driver.Config
}
// Config returns the driver configuration.
func (d *AzureBlob) Config() driver.Config {
return d.config
}
// GetAddition returns additional settings specific to Azure Blob Storage.
func (d *AzureBlob) GetAddition() driver.Additional {
return &d.Addition
}
// Init initializes the Azure Blob Storage client using shared key authentication.
func (d *AzureBlob) Init(ctx context.Context) error {
// Validate the endpoint URL
accountName := extractAccountName(d.Addition.Endpoint)
if !regexp.MustCompile(`^[a-z0-9]+$`).MatchString(accountName) {
return fmt.Errorf("invalid storage account name: must be chars of lowercase letters or numbers only")
}
credential, err := azblob.NewSharedKeyCredential(accountName, d.Addition.AccessKey)
if err != nil {
return fmt.Errorf("failed to create credential: %w", err)
}
// Check if Endpoint is just account name
endpoint := d.Addition.Endpoint
if accountName == endpoint {
endpoint = fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
}
// Initialize Azure Blob client with retry policy
client, err := azblob.NewClientWithSharedKeyCredential(endpoint, credential,
&azblob.ClientOptions{ClientOptions: azcore.ClientOptions{
Retry: policy.RetryOptions{
MaxRetries: MaxRetries,
RetryDelay: RetryDelay,
},
}})
if err != nil {
return fmt.Errorf("failed to create client: %w", err)
}
d.client = client
// Ensure container exists or create it
containerName := strings.Trim(d.Addition.ContainerName, "/ \\")
if containerName == "" {
return fmt.Errorf("container name cannot be empty")
}
return d.createContainerIfNotExists(ctx, containerName)
}
// Drop releases resources associated with the Azure Blob client.
func (d *AzureBlob) Drop(ctx context.Context) error {
d.client = nil
return nil
}
// List retrieves blobs and directories under the specified path.
func (d *AzureBlob) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
prefix := ensureTrailingSlash(dir.GetPath())
pager := d.containerClient.NewListBlobsHierarchyPager("/", &container.ListBlobsHierarchyOptions{
Prefix: &prefix,
})
var objs []model.Obj
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list blobs: %w", err)
}
// Process directories
for _, blobPrefix := range page.Segment.BlobPrefixes {
objs = append(objs, &model.Object{
Name: path.Base(strings.TrimSuffix(*blobPrefix.Name, "/")),
Path: *blobPrefix.Name,
Modified: *blobPrefix.Properties.LastModified,
Ctime: *blobPrefix.Properties.CreationTime,
IsFolder: true,
})
}
// Process files
for _, blob := range page.Segment.BlobItems {
if strings.HasSuffix(*blob.Name, "/") {
continue
}
objs = append(objs, &model.Object{
Name: path.Base(*blob.Name),
Path: *blob.Name,
Size: *blob.Properties.ContentLength,
Modified: *blob.Properties.LastModified,
Ctime: *blob.Properties.CreationTime,
IsFolder: false,
})
}
}
return objs, nil
}
// Link generates a temporary SAS URL for accessing a blob.
func (d *AzureBlob) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
blobClient := d.containerClient.NewBlobClient(file.GetPath())
expireDuration := time.Hour * time.Duration(d.SignURLExpire)
sasURL, err := blobClient.GetSASURL(sas.BlobPermissions{Read: true}, time.Now().Add(expireDuration), nil)
if err != nil {
return nil, fmt.Errorf("failed to generate SAS URL: %w", err)
}
return &model.Link{URL: sasURL}, nil
}
// MakeDir creates a virtual directory by uploading an empty blob as a marker.
func (d *AzureBlob) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
dirPath := path.Join(parentDir.GetPath(), dirName)
if err := d.mkDir(ctx, dirPath); err != nil {
return nil, fmt.Errorf("failed to create directory marker: %w", err)
}
return &model.Object{
Path: dirPath,
Name: dirName,
IsFolder: true,
}, nil
}
// Move relocates an object (file or directory) to a new directory.
func (d *AzureBlob) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
srcPath := srcObj.GetPath()
dstPath := path.Join(dstDir.GetPath(), srcObj.GetName())
if err := d.moveOrRename(ctx, srcPath, dstPath, srcObj.IsDir(), srcObj.GetSize()); err != nil {
return nil, fmt.Errorf("move operation failed: %w", err)
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Modified: time.Now(),
IsFolder: srcObj.IsDir(),
Size: srcObj.GetSize(),
}, nil
}
// Rename changes the name of an existing object.
func (d *AzureBlob) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
srcPath := srcObj.GetPath()
dstPath := path.Join(path.Dir(srcPath), newName)
if err := d.moveOrRename(ctx, srcPath, dstPath, srcObj.IsDir(), srcObj.GetSize()); err != nil {
return nil, fmt.Errorf("rename operation failed: %w", err)
}
return &model.Object{
Path: dstPath,
Name: newName,
Modified: time.Now(),
IsFolder: srcObj.IsDir(),
Size: srcObj.GetSize(),
}, nil
}
// Copy duplicates an object (file or directory) to a specified destination directory.
func (d *AzureBlob) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
dstPath := path.Join(dstDir.GetPath(), srcObj.GetName())
// Handle directory copying using flat listing
if srcObj.IsDir() {
srcPrefix := srcObj.GetPath()
srcPrefix = ensureTrailingSlash(srcPrefix)
// Get all blobs under the source directory
blobs, err := d.flattenListBlobs(ctx, srcPrefix)
if err != nil {
return nil, fmt.Errorf("failed to list source directory contents: %w", err)
}
// Process each blob - copy to destination
for _, blob := range blobs {
// Skip the directory marker itself
if *blob.Name == srcPrefix {
continue
}
// Calculate relative path from source
relPath := strings.TrimPrefix(*blob.Name, srcPrefix)
itemDstPath := path.Join(dstPath, relPath)
if strings.HasSuffix(itemDstPath, "/") || (blob.Metadata["hdi_isfolder"] != nil && *blob.Metadata["hdi_isfolder"] == "true") {
// Create directory marker at destination
err := d.mkDir(ctx, itemDstPath)
if err != nil {
return nil, fmt.Errorf("failed to create directory marker [%s]: %w", itemDstPath, err)
}
} else {
// Copy the blob
if err := d.copyFile(ctx, *blob.Name, itemDstPath); err != nil {
return nil, fmt.Errorf("failed to copy %s: %w", *blob.Name, err)
}
}
}
// Create directory marker at destination if needed
if len(blobs) == 0 {
err := d.mkDir(ctx, dstPath)
if err != nil {
return nil, fmt.Errorf("failed to create directory [%s]: %w", dstPath, err)
}
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Modified: time.Now(),
IsFolder: true,
}, nil
}
// Copy a single file
if err := d.copyFile(ctx, srcObj.GetPath(), dstPath); err != nil {
return nil, fmt.Errorf("failed to copy blob: %w", err)
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: time.Now(),
IsFolder: false,
}, nil
}
// Remove deletes a specified blob or recursively deletes a directory and its contents.
func (d *AzureBlob) Remove(ctx context.Context, obj model.Obj) error {
path := obj.GetPath()
// Handle recursive directory deletion
if obj.IsDir() {
return d.deleteFolder(ctx, path)
}
// Delete single file
return d.deleteFile(ctx, path, false)
}
// Put uploads a file stream to Azure Blob Storage with progress tracking.
func (d *AzureBlob) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
blobPath := path.Join(dstDir.GetPath(), stream.GetName())
blobClient := d.containerClient.NewBlockBlobClient(blobPath)
// Determine optimal upload options based on file size
options := optimizedUploadOptions(stream.GetSize())
// Track upload progress
progressTracker := &progressTracker{
total: stream.GetSize(),
updateProgress: up,
}
// Wrap stream to handle context cancellation and progress tracking
limitedStream := driver.NewLimitedUploadStream(ctx, io.TeeReader(stream, progressTracker))
// Upload the stream to Azure Blob Storage
_, err := blobClient.UploadStream(ctx, limitedStream, options)
if err != nil {
return nil, fmt.Errorf("failed to upload file: %w", err)
}
return &model.Object{
Path: blobPath,
Name: stream.GetName(),
Size: stream.GetSize(),
Modified: time.Now(),
IsFolder: false,
}, nil
}
// The following methods related to archive handling are not implemented yet.
// func (d *AzureBlob) GetArchiveMeta(...) {...}
// func (d *AzureBlob) ListArchive(...) {...}
// func (d *AzureBlob) Extract(...) {...}
// func (d *AzureBlob) ArchiveDecompress(...) {...}
// Ensure AzureBlob implements the driver.Driver interface.
var _ driver.Driver = (*AzureBlob)(nil)


@@ -0,0 +1,32 @@
package azure_blob
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
Endpoint string `json:"endpoint" required:"true" default:"https://<accountname>.blob.core.windows.net/" help:"e.g. https://accountname.blob.core.windows.net/. The full endpoint URL for Azure Storage, including the unique storage account name (3-24 characters, lowercase letters and numbers only)."`
AccessKey string `json:"access_key" required:"true" help:"The access key for Azure Storage, used for authentication. https://learn.microsoft.com/azure/storage/common/storage-account-keys-manage"`
ContainerName string `json:"container_name" required:"true" help:"The name of the container in Azure Storage (created in the Azure portal). https://learn.microsoft.com/azure/storage/blobs/blob-containers-portal"`
SignURLExpire int `json:"sign_url_expire" type:"number" default:"4" help:"The expiration time for SAS URLs, in hours."`
}
// implement GetRootId interface
func (r Addition) GetRootId() string {
return r.ContainerName
}
var config = driver.Config{
Name: "Azure Blob Storage",
LocalSort: true,
CheckStatus: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &AzureBlob{
config: config,
}
})
}


@@ -0,0 +1,20 @@
package azure_blob
import "github.com/alist-org/alist/v3/internal/driver"
// progressTracker is used to track upload progress
type progressTracker struct {
total int64
current int64
updateProgress driver.UpdateProgress
}
// Write implements io.Writer to track progress
func (pt *progressTracker) Write(p []byte) (n int, err error) {
n = len(p)
pt.current += int64(n)
if pt.updateProgress != nil && pt.total > 0 {
pt.updateProgress(float64(pt.current) * 100 / float64(pt.total))
}
return n, nil
}
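Note: progressTracker is a plain io.Writer, so the driver can feed it through io.TeeReader while uploading. A minimal wiring sketch in the same package (hypothetical helper, not part of the diff; assumes the io import):
func exampleProgressWiring(src io.Reader, size int64, up driver.UpdateProgress) io.Reader {
	pt := &progressTracker{total: size, updateProgress: up}
	// Every read from the returned reader also passes through pt.Write,
	// which is how Put feeds blobClient.UploadStream in driver.go.
	return io.TeeReader(src, pt)
}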

drivers/azure_blob/util.go

@@ -0,0 +1,401 @@
package azure_blob
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"path"
"sort"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
log "github.com/sirupsen/logrus"
)
const (
// MaxRetries defines the maximum number of retry attempts for Azure operations
MaxRetries = 3
// RetryDelay defines the base delay between retries
RetryDelay = 3 * time.Second
// MaxBatchSize defines the maximum number of operations in a single batch request
MaxBatchSize = 128
)
// extractAccountName extracts the account name from an Azure Storage endpoint
func extractAccountName(endpoint string) string {
// Strip the protocol prefix
endpoint = strings.TrimPrefix(endpoint, "https://")
endpoint = strings.TrimPrefix(endpoint, "http://")
// Take the part before the first dot (the account name)
parts := strings.Split(endpoint, ".")
if len(parts) > 0 {
// to lower case
return strings.ToLower(parts[0])
}
return ""
}
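A table-driven test sketch for the extraction logic (hypothetical; no test file is part of this diff, and the standard testing package is assumed):
func TestExtractAccountName(t *testing.T) {
	cases := map[string]string{
		"https://myaccount.blob.core.windows.net/": "myaccount",
		"http://MyAccount.blob.core.windows.net":   "myaccount", // lowercased
		"myaccount":                                "myaccount", // no dot: whole string returned
	}
	for in, want := range cases {
		if got := extractAccountName(in); got != want {
			t.Errorf("extractAccountName(%q) = %q, want %q", in, got, want)
		}
	}
}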
// isNotFoundError checks if the error is a "not found" type error
func isNotFoundError(err error) bool {
var storageErr *azcore.ResponseError
if errors.As(err, &storageErr) {
return storageErr.StatusCode == 404
}
// Fallback to string matching for backwards compatibility
return err != nil && strings.Contains(err.Error(), "BlobNotFound")
}
// flattenListBlobs - Optimize blob listing to handle pagination better
func (d *AzureBlob) flattenListBlobs(ctx context.Context, prefix string) ([]container.BlobItem, error) {
// Standardize prefix format
prefix = ensureTrailingSlash(prefix)
var blobItems []container.BlobItem
pager := d.containerClient.NewListBlobsFlatPager(&container.ListBlobsFlatOptions{
Prefix: &prefix,
Include: container.ListBlobsInclude{
Metadata: true,
},
})
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list blobs: %w", err)
}
for _, blob := range page.Segment.BlobItems {
blobItems = append(blobItems, *blob)
}
}
return blobItems, nil
}
// batchDeleteBlobs - Simplify batch deletion logic
func (d *AzureBlob) batchDeleteBlobs(ctx context.Context, blobPaths []string) error {
if len(blobPaths) == 0 {
return nil
}
// Process in batches of MaxBatchSize
for i := 0; i < len(blobPaths); i += MaxBatchSize {
end := min(i+MaxBatchSize, len(blobPaths))
currentBatch := blobPaths[i:end]
// Create batch builder
batchBuilder, err := d.containerClient.NewBatchBuilder()
if err != nil {
return fmt.Errorf("failed to create batch builder: %w", err)
}
// Add delete operations
for _, blobPath := range currentBatch {
if err := batchBuilder.Delete(blobPath, nil); err != nil {
return fmt.Errorf("failed to add delete operation for %s: %w", blobPath, err)
}
}
// Submit batch
responses, err := d.containerClient.SubmitBatch(ctx, batchBuilder, nil)
if err != nil {
return fmt.Errorf("batch delete request failed: %w", err)
}
// Check responses
for _, resp := range responses.Responses {
if resp.Error != nil && !isNotFoundError(resp.Error) {
// Get the blob name for a more helpful error message
blobName := "unknown"
if resp.BlobName != nil {
blobName = *resp.BlobName
}
return fmt.Errorf("failed to delete blob %s: %v", blobName, resp.Error)
}
}
}
return nil
}
// deleteFolder recursively deletes a directory and all its contents
func (d *AzureBlob) deleteFolder(ctx context.Context, prefix string) error {
// Ensure directory path ends with slash
prefix = ensureTrailingSlash(prefix)
// Get all blobs under the directory using flattenListBlobs
globs, err := d.flattenListBlobs(ctx, prefix)
if err != nil {
return fmt.Errorf("failed to list blobs for deletion: %w", err)
}
// If there are blobs in the directory, delete them
if len(globs) > 0 {
// Separate files from directory markers
var filePaths []string
var dirPaths []string
for _, blob := range globs {
blobName := *blob.Name
if isDirectory(blob) {
// remove trailing slash for directory names
dirPaths = append(dirPaths, strings.TrimSuffix(blobName, "/"))
} else {
filePaths = append(filePaths, blobName)
}
}
// Delete files first, then directories
if len(filePaths) > 0 {
if err := d.batchDeleteBlobs(ctx, filePaths); err != nil {
return err
}
}
if len(dirPaths) > 0 {
// Group directories by path depth
depthMap := make(map[int][]string)
for _, dir := range dirPaths {
depth := strings.Count(dir, "/") // 计算目录深度
depthMap[depth] = append(depthMap[depth], dir)
}
// Sort depths in descending order
var depths []int
for depth := range depthMap {
depths = append(depths, depth)
}
sort.Sort(sort.Reverse(sort.IntSlice(depths)))
// Batch-delete one depth level at a time, deepest first
for _, depth := range depths {
batch := depthMap[depth]
if err := d.batchDeleteBlobs(ctx, batch); err != nil {
return err
}
}
}
}
// Finally, delete the directory marker itself
return d.deleteEmptyDirectory(ctx, prefix)
}
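A worked example of the ordering above, for a hypothetical prefix photos/:
// listed blobs: photos/a/ (marker), photos/a/b/ (marker), photos/a/b/f.jpg
// filePaths -> [photos/a/b/f.jpg]               (batch-deleted first)
// dirPaths  -> [photos/a, photos/a/b]           (trailing slashes trimmed)
// depthMap  -> {1: [photos/a], 2: [photos/a/b]}
// deletion order: photos/a/b, then photos/a, then the photos/ marker itself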
// deleteFile deletes a single file or blob with better error handling
func (d *AzureBlob) deleteFile(ctx context.Context, path string, isDir bool) error {
blobClient := d.containerClient.NewBlobClient(path)
_, err := blobClient.Delete(ctx, nil)
if err != nil && !(isDir && isNotFoundError(err)) {
return err
}
return nil
}
// copyFile copies a single blob from source path to destination path
func (d *AzureBlob) copyFile(ctx context.Context, srcPath, dstPath string) error {
srcBlob := d.containerClient.NewBlobClient(srcPath)
dstBlob := d.containerClient.NewBlobClient(dstPath)
// Use configured expiration time for SAS URL
expireDuration := time.Hour * time.Duration(d.SignURLExpire)
srcURL, err := srcBlob.GetSASURL(sas.BlobPermissions{Read: true}, time.Now().Add(expireDuration), nil)
if err != nil {
return fmt.Errorf("failed to generate source SAS URL: %w", err)
}
_, err = dstBlob.StartCopyFromURL(ctx, srcURL, nil)
return err
}
// createContainerIfNotExists - Create container if not exists
// Clean up commented code
func (d *AzureBlob) createContainerIfNotExists(ctx context.Context, containerName string) error {
serviceClient := d.client.ServiceClient()
containerClient := serviceClient.NewContainerClient(containerName)
var options = service.CreateContainerOptions{}
_, err := containerClient.Create(ctx, &options)
if err != nil {
var responseErr *azcore.ResponseError
if errors.As(err, &responseErr) && responseErr.ErrorCode != "ContainerAlreadyExists" {
return fmt.Errorf("failed to create or access container [%s]: %w", containerName, err)
}
}
d.containerClient = containerClient
return nil
}
// mkDir creates a virtual directory marker by uploading an empty blob with metadata.
func (d *AzureBlob) mkDir(ctx context.Context, fullDirName string) error {
dirPath := ensureTrailingSlash(fullDirName)
blobClient := d.containerClient.NewBlockBlobClient(dirPath)
// Upload an empty blob with metadata indicating it's a directory
_, err := blobClient.Upload(ctx, struct {
*bytes.Reader
io.Closer
}{
Reader: bytes.NewReader([]byte{}),
Closer: io.NopCloser(nil),
}, &blockblob.UploadOptions{
Metadata: map[string]*string{
"hdi_isfolder": to.Ptr("true"),
},
})
return err
}
// ensureTrailingSlash ensures the provided path ends with a trailing slash.
func ensureTrailingSlash(path string) string {
if !strings.HasSuffix(path, "/") {
return path + "/"
}
return path
}
// moveOrRename moves or renames blobs or directories from source to destination.
func (d *AzureBlob) moveOrRename(ctx context.Context, srcPath, dstPath string, isDir bool, srcSize int64) error {
if isDir {
// Normalize paths for directory operations
srcPath = ensureTrailingSlash(srcPath)
dstPath = ensureTrailingSlash(dstPath)
// List all blobs under the source directory
blobs, err := d.flattenListBlobs(ctx, srcPath)
if err != nil {
return fmt.Errorf("failed to list blobs: %w", err)
}
// Iterate and copy each blob to the destination
for _, item := range blobs {
srcBlobName := *item.Name
relPath := strings.TrimPrefix(srcBlobName, srcPath)
itemDstPath := path.Join(dstPath, relPath)
if isDirectory(item) {
// Create directory marker at destination
if err := d.mkDir(ctx, itemDstPath); err != nil {
return fmt.Errorf("failed to create directory marker [%s]: %w", itemDstPath, err)
}
} else {
// Copy file blob to destination
if err := d.copyFile(ctx, srcBlobName, itemDstPath); err != nil {
return fmt.Errorf("failed to copy blob [%s]: %w", srcBlobName, err)
}
}
}
// Handle empty directories by creating a marker at destination
if len(blobs) == 0 {
if err := d.mkDir(ctx, dstPath); err != nil {
return fmt.Errorf("failed to create directory [%s]: %w", dstPath, err)
}
}
// Delete source directory and its contents
if err := d.deleteFolder(ctx, srcPath); err != nil {
log.Warnf("failed to delete source directory [%s]: %v\n, and try again", srcPath, err)
// Retry deletion once more and ignore the result
if err := d.deleteFolder(ctx, srcPath); err != nil {
log.Errorf("Retry deletion of source directory [%s] failed: %v", srcPath, err)
}
}
return nil
}
// Single file move or rename operation
if err := d.copyFile(ctx, srcPath, dstPath); err != nil {
return fmt.Errorf("failed to copy file: %w", err)
}
// Delete source file after successful copy
if err := d.deleteFile(ctx, srcPath, false); err != nil {
log.Errorf("Error deleting source file [%s]: %v", srcPath, err)
}
return nil
}
// optimizedUploadOptions returns the optimal upload options based on file size
func optimizedUploadOptions(fileSize int64) *azblob.UploadStreamOptions {
options := &azblob.UploadStreamOptions{
BlockSize: 4 * 1024 * 1024, // 4MB block size
Concurrency: 4, // Default concurrency
}
// For large files, increase block size and concurrency
if fileSize > 256*1024*1024 { // For files larger than 256MB
options.BlockSize = 8 * 1024 * 1024 // 8MB blocks
options.Concurrency = 8 // More concurrent uploads
}
// For very large files (>1GB)
if fileSize > 1024*1024*1024 {
options.BlockSize = 16 * 1024 * 1024 // 16MB blocks
options.Concurrency = 16 // Higher concurrency
}
return options
}
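Both size checks run in sequence, so the largest matching tier wins; for example:
opts := optimizedUploadOptions(2 << 30) // 2 GiB
// The >256MB branch fires first (8MB blocks, concurrency 8), then the
// >1GB branch overrides it: BlockSize == 16*1024*1024, Concurrency == 16.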
// isDirectory determines if a blob represents a directory
// Checks multiple indicators: path suffix, metadata, and content type
func isDirectory(blob container.BlobItem) bool {
// Check path suffix
if strings.HasSuffix(*blob.Name, "/") {
return true
}
// Check metadata for directory marker
if blob.Metadata != nil {
if val, ok := blob.Metadata["hdi_isfolder"]; ok && val != nil && *val == "true" {
return true
}
// Azure Storage Explorer and other tools may use different metadata keys
if val, ok := blob.Metadata["is_directory"]; ok && val != nil && strings.ToLower(*val) == "true" {
return true
}
}
// Check content type (some tools mark directories with specific content types)
if blob.Properties != nil && blob.Properties.ContentType != nil {
contentType := strings.ToLower(*blob.Properties.ContentType)
if blob.Properties.ContentLength != nil && *blob.Properties.ContentLength == 0 && (contentType == "application/directory" || contentType == "directory") {
return true
}
}
return false
}
// deleteEmptyDirectory deletes a directory only if it's empty
func (d *AzureBlob) deleteEmptyDirectory(ctx context.Context, dirPath string) error {
// Directory is empty, delete the directory marker
blobClient := d.containerClient.NewBlobClient(strings.TrimSuffix(dirPath, "/"))
_, err := blobClient.Delete(ctx, nil)
// Also try deleting with trailing slash (for different directory marker formats)
if err != nil && isNotFoundError(err) {
blobClient = d.containerClient.NewBlobClient(dirPath)
_, err = blobClient.Delete(ctx, nil)
}
// Ignore not found errors
if err != nil && isNotFoundError(err) {
log.Infof("Directory [%s] not found during deletion: %v", dirPath, err)
return nil
}
return err
}


@@ -6,13 +6,16 @@ import (
"encoding/hex"
"errors"
"io"
"math"
"net/url"
"os"
stdpath "path"
"strconv"
"time"
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
@@ -76,6 +79,8 @@ func (d *BaiduNetdisk) List(ctx context.Context, dir model.Obj, args model.ListA
func (d *BaiduNetdisk) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if d.DownloadAPI == "crack" {
return d.linkCrack(file, args)
} else if d.DownloadAPI == "crack_video" {
return d.linkCrackVideo(file, args)
}
return d.linkOfficial(file, args)
}
@@ -181,21 +186,35 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
return newObj, nil
}
tempFile, err := stream.CacheFullInTempFile()
if err != nil {
return nil, err
var (
cache = stream.GetFile()
tmpF *os.File
err error
)
if _, ok := cache.(io.ReaderAt); !ok {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
defer func() {
_ = tmpF.Close()
_ = os.Remove(tmpF.Name())
}()
cache = tmpF
}
streamSize := stream.GetSize()
sliceSize := d.getSliceSize()
count := int(math.Max(math.Ceil(float64(streamSize)/float64(sliceSize)), 1))
sliceSize := d.getSliceSize(streamSize)
count := int(streamSize / sliceSize)
lastBlockSize := streamSize % sliceSize
if streamSize > 0 && lastBlockSize == 0 {
if lastBlockSize > 0 {
count++
} else {
lastBlockSize = sliceSize
}
//cal md5 for first 256k data
const SliceSize int64 = 256 * 1024
const SliceSize int64 = 256 * utils.KB
// cal md5
blockList := make([]string, 0, count)
byteSize := sliceSize
@@ -203,6 +222,11 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
sliceMd5H := md5.New()
sliceMd5H2 := md5.New()
slicemd5H2Write := utils.LimitWriter(sliceMd5H2, SliceSize)
writers := []io.Writer{fileMd5H, sliceMd5H, slicemd5H2Write}
if tmpF != nil {
writers = append(writers, tmpF)
}
written := int64(0)
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
@@ -211,13 +235,23 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
if i == count {
byteSize = lastBlockSize
}
_, err := utils.CopyWithBufferN(io.MultiWriter(fileMd5H, sliceMd5H, slicemd5H2Write), tempFile, byteSize)
n, err := utils.CopyWithBufferN(io.MultiWriter(writers...), stream, byteSize)
written += n
if err != nil && err != io.EOF {
return nil, err
}
blockList = append(blockList, hex.EncodeToString(sliceMd5H.Sum(nil)))
sliceMd5H.Reset()
}
if tmpF != nil {
if written != streamSize {
return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size= %d, expect = %d ", written, streamSize)
}
_, err = tmpF.Seek(0, io.SeekStart)
if err != nil {
return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0 ")
}
}
contentMd5 := hex.EncodeToString(fileMd5H.Sum(nil))
sliceMd5 := hex.EncodeToString(sliceMd5H2.Sum(nil))
blockListStr, _ := utils.Json.MarshalToString(blockList)
@@ -260,9 +294,10 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
}
// step 2: upload the slices
threadG, upCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread,
retry.Attempts(3),
retry.Attempts(1),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
for i, partseq := range precreateResp.BlockList {
if utils.IsCanceled(upCtx) {
break
@@ -273,6 +308,10 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
byteSize = lastBlockSize
}
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
params := map[string]string{
"method": "upload",
"access_token": d.AccessToken,
@@ -281,7 +320,8 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
"uploadid": precreateResp.Uploadid,
"partseq": strconv.Itoa(partseq),
}
err := d.uploadSlice(ctx, params, stream.GetName(), io.NewSectionReader(tempFile, offset, byteSize))
err := d.uploadSlice(ctx, params, stream.GetName(),
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(cache, offset, byteSize)))
if err != nil {
return err
}


@@ -8,16 +8,18 @@ import (
type Addition struct {
RefreshToken string `json:"refresh_token" required:"true"`
driver.RootPath
OrderBy string `json:"order_by" type:"select" options:"name,time,size" default:"name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
DownloadAPI string `json:"download_api" type:"select" options:"official,crack" default:"official"`
ClientID string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
CustomCrackUA string `json:"custom_crack_ua" required:"true" default:"netdisk"`
AccessToken string
UploadThread string `json:"upload_thread" default:"3" help:"1<=thread<=32"`
UploadAPI string `json:"upload_api" default:"https://d.pcs.baidu.com"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
OrderBy string `json:"order_by" type:"select" options:"name,time,size" default:"name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
DownloadAPI string `json:"download_api" type:"select" options:"official,crack,crack_video" default:"official"`
ClientID string `json:"client_id" required:"true" default:"hq9yQ9w9kR4YHj1kyYafLygVocobh7Sf"`
ClientSecret string `json:"client_secret" required:"true" default:"YH2VpZcFJHYNnV6vLfHQXDBhcE7ZChyE"`
CustomCrackUA string `json:"custom_crack_ua" required:"true" default:"netdisk"`
AccessToken string
UploadThread string `json:"upload_thread" default:"3" help:"1<=thread<=32"`
UploadAPI string `json:"upload_api" default:"https://d.pcs.baidu.com"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
LowBandwithUploadMode bool `json:"low_bandwith_upload_mode" default:"false"`
OnlyListVideoFile bool `json:"only_list_video_file" default:"false"`
}
var config = driver.Config{


@@ -17,7 +17,7 @@ type TokenErrResp struct {
type File struct {
//TkbindId int `json:"tkbind_id"`
//OwnerType int `json:"owner_type"`
//Category int `json:"category"`
Category int `json:"category"`
//RealCategory string `json:"real_category"`
FsId int64 `json:"fs_id"`
//OperId int `json:"oper_id"`


@@ -79,6 +79,12 @@ func (d *BaiduNetdisk) request(furl string, method string, callback base.ReqCall
return retry.Unrecoverable(err2)
}
}
if 31023 == errno && d.DownloadAPI == "crack_video" {
result = res.Body()
return nil
}
return fmt.Errorf("req: [%s] ,errno: %d, refer to https://pan.baidu.com/union/doc/", furl, errno)
}
result = res.Body()
@@ -131,12 +137,21 @@ func (d *BaiduNetdisk) getFiles(dir string) ([]File, error) {
if len(resp.List) == 0 {
break
}
res = append(res, resp.List...)
if d.OnlyListVideoFile {
for _, file := range resp.List {
if file.Isdir == 1 || file.Category == 1 {
res = append(res, file)
}
}
} else {
res = append(res, resp.List...)
}
}
return res, nil
}
func (d *BaiduNetdisk) linkOfficial(file model.Obj, args model.LinkArgs) (*model.Link, error) {
func (d *BaiduNetdisk) linkOfficial(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
var resp DownloadResp
params := map[string]string{
"method": "filemetas",
@@ -164,7 +179,7 @@ func (d *BaiduNetdisk) linkOfficial(file model.Obj, args model.LinkArgs) (*model
}, nil
}
func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Link, error) {
func (d *BaiduNetdisk) linkCrack(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
var resp DownloadResp2
param := map[string]string{
"target": fmt.Sprintf("[\"%s\"]", file.GetPath()),
@@ -187,6 +202,34 @@ func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Li
}, nil
}
func (d *BaiduNetdisk) linkCrackVideo(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
param := map[string]string{
"type": "VideoURL",
"path": fmt.Sprintf("%s", file.GetPath()),
"fs_id": file.GetID(),
"devuid": "0%1",
"clienttype": "1",
"channel": "android_15_25010PN30C_bd-netdisk_1523a",
"nom3u8": "1",
"dlink": "1",
"media": "1",
"origin": "dlna",
}
resp, err := d.request("https://pan.baidu.com/api/mediainfo", http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(param)
}, nil)
if err != nil {
return nil, err
}
return &model.Link{
URL: utils.Json.Get(resp, "info", "dlink").ToString(),
Header: http.Header{
"User-Agent": []string{d.CustomCrackUA},
},
}, nil
}
func (d *BaiduNetdisk) manage(opera string, filelist any) ([]byte, error) {
params := map[string]string{
"method": "filemanager",
@@ -230,22 +273,72 @@ func joinTime(form map[string]string, ctime, mtime int64) {
const (
DefaultSliceSize int64 = 4 * utils.MB
VipSliceSize = 16 * utils.MB
SVipSliceSize = 32 * utils.MB
VipSliceSize int64 = 16 * utils.MB
SVipSliceSize int64 = 32 * utils.MB
MaxSliceNum = 2048 // the docs say 1024 (or nothing at all), but 2048 works in practice
SliceStep int64 = 1 * utils.MB
)
func (d *BaiduNetdisk) getSliceSize() int64 {
if d.CustomUploadPartSize != 0 {
return d.CustomUploadPartSize
}
switch d.vipType {
case 1:
return VipSliceSize
case 2:
return SVipSliceSize
default:
func (d *BaiduNetdisk) getSliceSize(filesize int64) int64 {
// Non-VIP accounts are fixed at 4MB
if d.vipType == 0 {
if d.CustomUploadPartSize != 0 {
log.Warnf("CustomUploadPartSize is not supported for non-vip user, use DefaultSliceSize")
}
if filesize > MaxSliceNum*DefaultSliceSize {
log.Warnf("File size(%d) is too large, may cause upload failure", filesize)
}
return DefaultSliceSize
}
if d.CustomUploadPartSize != 0 {
if d.CustomUploadPartSize < DefaultSliceSize {
log.Warnf("CustomUploadPartSize(%d) is less than DefaultSliceSize(%d), use DefaultSliceSize", d.CustomUploadPartSize, DefaultSliceSize)
return DefaultSliceSize
}
if d.vipType == 1 && d.CustomUploadPartSize > VipSliceSize {
log.Warnf("CustomUploadPartSize(%d) is greater than VipSliceSize(%d), use VipSliceSize", d.CustomUploadPartSize, VipSliceSize)
return VipSliceSize
}
if d.vipType == 2 && d.CustomUploadPartSize > SVipSliceSize {
log.Warnf("CustomUploadPartSize(%d) is greater than SVipSliceSize(%d), use SVipSliceSize", d.CustomUploadPartSize, SVipSliceSize)
return SVipSliceSize
}
return d.CustomUploadPartSize
}
maxSliceSize := DefaultSliceSize
switch d.vipType {
case 1:
maxSliceSize = VipSliceSize
case 2:
maxSliceSize = SVipSliceSize
}
// upload on low bandwidth
if d.LowBandwithUploadMode {
size := DefaultSliceSize
for size <= maxSliceSize {
if filesize <= MaxSliceNum*size {
return size
}
size += SliceStep
}
}
if filesize > MaxSliceNum*maxSliceSize {
log.Warnf("File size(%d) is too large, may cause upload failure", filesize)
}
return maxSliceSize
}
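Two worked examples of the slice math above (file sizes hypothetical):
// Non-VIP, 10 GiB file: 10240 MiB / 4 MiB = 2560 slices > MaxSliceNum (2048),
// so the size warning is logged but DefaultSliceSize (4 MiB) is still returned.
//
// VIP with LowBandwithUploadMode, 8.5 GiB file: 4 MiB would need 2176 slices,
// so the loop steps by SliceStep to 5 MiB (8704 / 5 -> 1741 slices <= 2048)
// and returns 5 MiB instead of jumping straight to VipSliceSize (16 MiB).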
// func encodeURIComponent(str string) string {


@@ -7,13 +7,16 @@ import (
"errors"
"fmt"
"io"
"math"
"os"
"regexp"
"strconv"
"strings"
"time"
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
@@ -27,9 +30,10 @@ type BaiduPhoto struct {
model.Storage
Addition
AccessToken string
Uk int64
root model.Obj
// AccessToken string
Uk int64
bdstoken string
root model.Obj
uploadThread int
}
@@ -48,9 +52,9 @@ func (d *BaiduPhoto) Init(ctx context.Context) error {
d.uploadThread, d.UploadThread = 3, "3"
}
if err := d.refreshToken(); err != nil {
return err
}
// if err := d.refreshToken(); err != nil {
// return err
// }
// root
if d.AlbumID != "" {
@@ -73,6 +77,10 @@ func (d *BaiduPhoto) Init(ctx context.Context) error {
if err != nil {
return err
}
d.bdstoken, err = d.getBDStoken()
if err != nil {
return err
}
d.Uk, err = strconv.ParseInt(info.YouaID, 10, 64)
return err
}
@@ -82,7 +90,7 @@ func (d *BaiduPhoto) GetRoot(ctx context.Context) (model.Obj, error) {
}
func (d *BaiduPhoto) Drop(ctx context.Context) error {
d.AccessToken = ""
// d.AccessToken = ""
d.Uk = 0
d.root = nil
return nil
@@ -140,14 +148,13 @@ func (d *BaiduPhoto) Link(ctx context.Context, file model.Obj, args model.LinkAr
// Handle shared albums
if d.Uk != file.Uk {
// There is a chance the link cannot be retrieved
return d.linkAlbum(ctx, file, args)
// return d.linkAlbum(ctx, file, args)
// The API is restricted to cookie-based access only
// f, err := d.CopyAlbumFile(ctx, file)
// if err != nil {
// return nil, err
// }
// return d.linkFile(ctx, f, args)
f, err := d.CopyAlbumFile(ctx, file)
if err != nil {
return nil, err
}
return d.linkFile(ctx, f, args)
}
return d.linkFile(ctx, &file.File, args)
}
@@ -235,11 +242,21 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
// TODO:
// No rapid-upload (instant upload) method found yet
// Requires the full-file md5, so the source must support io.Seek
tempFile, err := stream.CacheFullInTempFile()
if err != nil {
return nil, err
var (
cache = stream.GetFile()
tmpF *os.File
err error
)
if _, ok := cache.(io.ReaderAt); !ok {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
defer func() {
_ = tmpF.Close()
_ = os.Remove(tmpF.Name())
}()
cache = tmpF
}
const DEFAULT int64 = 1 << 22
@@ -247,9 +264,11 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
// Compute the data needed for the upload
streamSize := stream.GetSize()
count := int(math.Ceil(float64(streamSize) / float64(DEFAULT)))
count := int(streamSize / DEFAULT)
lastBlockSize := streamSize % DEFAULT
if lastBlockSize == 0 {
if lastBlockSize > 0 {
count++
} else {
lastBlockSize = DEFAULT
}
@@ -260,6 +279,11 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
sliceMd5H := md5.New()
sliceMd5H2 := md5.New()
slicemd5H2Write := utils.LimitWriter(sliceMd5H2, SliceSize)
writers := []io.Writer{fileMd5H, sliceMd5H, slicemd5H2Write}
if tmpF != nil {
writers = append(writers, tmpF)
}
written := int64(0)
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
return nil, ctx.Err()
@@ -267,13 +291,23 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
if i == count {
byteSize = lastBlockSize
}
_, err := utils.CopyWithBufferN(io.MultiWriter(fileMd5H, sliceMd5H, slicemd5H2Write), tempFile, byteSize)
n, err := utils.CopyWithBufferN(io.MultiWriter(writers...), stream, byteSize)
written += n
if err != nil && err != io.EOF {
return nil, err
}
sliceMD5List = append(sliceMD5List, hex.EncodeToString(sliceMd5H.Sum(nil)))
sliceMd5H.Reset()
}
if tmpF != nil {
if written != streamSize {
return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size= %d, expect = %d ", written, streamSize)
}
_, err = tmpF.Seek(0, io.SeekStart)
if err != nil {
return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0 ")
}
}
contentMd5 := hex.EncodeToString(fileMd5H.Sum(nil))
sliceMd5 := hex.EncodeToString(sliceMd5H2.Sum(nil))
blockListStr, _ := utils.Json.MarshalToString(sliceMD5List)
@@ -285,18 +319,19 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
"rtype": "1",
"ctype": "11",
"path": fmt.Sprintf("/%s", stream.GetName()),
"size": fmt.Sprint(stream.GetSize()),
"size": fmt.Sprint(streamSize),
"slice-md5": sliceMd5,
"content-md5": contentMd5,
"block_list": blockListStr,
}
// Try to resume previously saved upload progress
precreateResp, ok := base.GetUploadProgress[*PrecreateResp](d, d.AccessToken, contentMd5)
precreateResp, ok := base.GetUploadProgress[*PrecreateResp](d, strconv.FormatInt(d.Uk, 10), contentMd5)
if !ok {
_, err = d.Post(FILE_API_URL_V1+"/precreate", func(r *resty.Request) {
r.SetContext(ctx)
r.SetFormData(params)
r.SetQueryParam("bdstoken", d.bdstoken)
}, &precreateResp)
if err != nil {
return nil, err
@@ -309,6 +344,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
for i, partseq := range precreateResp.BlockList {
if utils.IsCanceled(upCtx) {
break
@@ -320,17 +356,22 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
}
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
uploadParams := map[string]string{
"method": "upload",
"path": params["path"],
"partseq": fmt.Sprint(partseq),
"uploadid": precreateResp.UploadID,
"app_id": "16051585",
}
_, err = d.Post("https://c3.pcs.baidu.com/rest/2.0/pcs/superfile2", func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(uploadParams)
r.SetFileReader("file", stream.GetName(), io.NewSectionReader(tempFile, offset, byteSize))
r.SetFileReader("file", stream.GetName(),
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(cache, offset, byteSize)))
}, nil)
if err != nil {
return err
@@ -343,7 +384,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
if err = threadG.Wait(); err != nil {
if errors.Is(err, context.Canceled) {
precreateResp.BlockList = utils.SliceFilter(precreateResp.BlockList, func(s int) bool { return s >= 0 })
base.SaveUploadProgress(d, precreateResp, d.AccessToken, contentMd5)
base.SaveUploadProgress(d, strconv.FormatInt(d.Uk, 10), contentMd5)
}
return nil, err
}
@@ -353,6 +394,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
_, err = d.Post(FILE_API_URL_V1+"/create", func(r *resty.Request) {
r.SetContext(ctx)
r.SetFormData(params)
r.SetQueryParam("bdstoken", d.bdstoken)
}, &precreateResp)
if err != nil {
return nil, err


@@ -6,13 +6,14 @@ import (
)
type Addition struct {
RefreshToken string `json:"refresh_token" required:"true"`
ShowType string `json:"show_type" type:"select" options:"root,root_only_album,root_only_file" default:"root"`
AlbumID string `json:"album_id"`
// RefreshToken string `json:"refresh_token" required:"true"`
Cookie string `json:"cookie" required:"true"`
ShowType string `json:"show_type" type:"select" options:"root,root_only_album,root_only_file" default:"root"`
AlbumID string `json:"album_id"`
//AlbumPassword string `json:"album_password"`
DeleteOrigin bool `json:"delete_origin"`
ClientID string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
DeleteOrigin bool `json:"delete_origin"`
// ClientID string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
// ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
UploadThread string `json:"upload_thread" default:"3" help:"1<=thread<=32"`
}


@@ -10,9 +10,7 @@ import (
"unicode"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
@@ -27,7 +25,8 @@ const (
func (d *BaiduPhoto) Request(client *resty.Client, furl string, method string, callback base.ReqCallback, resp interface{}) (*resty.Response, error) {
req := client.R().
SetQueryParam("access_token", d.AccessToken)
// SetQueryParam("access_token", d.AccessToken)
SetHeader("Cookie", d.Cookie)
if callback != nil {
callback(req)
}
@@ -49,10 +48,10 @@ func (d *BaiduPhoto) Request(client *resty.Client, furl string, method string, c
return nil, fmt.Errorf("no shared albums found")
case 50100:
return nil, fmt.Errorf("illegal title, only supports 50 characters")
case -6:
if err = d.refreshToken(); err != nil {
return nil, err
}
// case -6:
// if err = d.refreshToken(); err != nil {
// return nil, err
// }
default:
return nil, fmt.Errorf("errno: %d, refer to https://photo.baidu.com/union/doc", erron)
}
@@ -67,29 +66,29 @@ func (d *BaiduPhoto) Request(client *resty.Client, furl string, method string, c
// return res.Body(), nil
//}
func (d *BaiduPhoto) refreshToken() error {
u := "https://openapi.baidu.com/oauth/2.0/token"
var resp base.TokenResp
var e TokenErrResp
_, err := base.RestyClient.R().SetResult(&resp).SetError(&e).SetQueryParams(map[string]string{
"grant_type": "refresh_token",
"refresh_token": d.RefreshToken,
"client_id": d.ClientID,
"client_secret": d.ClientSecret,
}).Get(u)
if err != nil {
return err
}
if e.ErrorMsg != "" {
return &e
}
if resp.RefreshToken == "" {
return errs.EmptyToken
}
d.AccessToken, d.RefreshToken = resp.AccessToken, resp.RefreshToken
op.MustSaveDriverStorage(d)
return nil
}
// func (d *BaiduPhoto) refreshToken() error {
// u := "https://openapi.baidu.com/oauth/2.0/token"
// var resp base.TokenResp
// var e TokenErrResp
// _, err := base.RestyClient.R().SetResult(&resp).SetError(&e).SetQueryParams(map[string]string{
// "grant_type": "refresh_token",
// "refresh_token": d.RefreshToken,
// "client_id": d.ClientID,
// "client_secret": d.ClientSecret,
// }).Get(u)
// if err != nil {
// return err
// }
// if e.ErrorMsg != "" {
// return &e
// }
// if resp.RefreshToken == "" {
// return errs.EmptyToken
// }
// d.AccessToken, d.RefreshToken = resp.AccessToken, resp.RefreshToken
// op.MustSaveDriverStorage(d)
// return nil
// }
func (d *BaiduPhoto) Get(furl string, callback base.ReqCallback, resp interface{}) (*resty.Response, error) {
return d.Request(base.RestyClient, furl, http.MethodGet, callback, resp)
@@ -363,10 +362,6 @@ func (d *BaiduPhoto) linkAlbum(ctx context.Context, file *AlbumFile, args model.
location := resp.Header().Get("Location")
if err != nil {
return nil, err
}
link := &model.Link{
URL: location,
Header: http.Header{
@@ -388,36 +383,36 @@ func (d *BaiduPhoto) linkFile(ctx context.Context, file *File, args model.LinkAr
headers["X-Forwarded-For"] = args.IP
}
// var downloadUrl struct {
// Dlink string `json:"dlink"`
// }
// _, err := d.Get(FILE_API_URL_V1+"/download", func(r *resty.Request) {
// r.SetContext(ctx)
// r.SetHeaders(headers)
// r.SetQueryParams(map[string]string{
// "fsid": fmt.Sprint(file.Fsid),
// })
// }, &downloadUrl)
resp, err := d.Request(base.NoRedirectClient, FILE_API_URL_V1+"/download", http.MethodHead, func(r *resty.Request) {
var downloadUrl struct {
Dlink string `json:"dlink"`
}
_, err := d.Get(FILE_API_URL_V2+"/download", func(r *resty.Request) {
r.SetContext(ctx)
r.SetHeaders(headers)
r.SetQueryParams(map[string]string{
"fsid": fmt.Sprint(file.Fsid),
})
}, nil)
}, &downloadUrl)
// resp, err := d.Request(base.NoRedirectClient, FILE_API_URL_V1+"/download", http.MethodHead, func(r *resty.Request) {
// r.SetContext(ctx)
// r.SetHeaders(headers)
// r.SetQueryParams(map[string]string{
// "fsid": fmt.Sprint(file.Fsid),
// })
// }, nil)
if err != nil {
return nil, err
}
if resp.StatusCode() != 302 {
return nil, fmt.Errorf("not found 302 redirect")
}
// if resp.StatusCode() != 302 {
// return nil, fmt.Errorf("not found 302 redirect")
// }
location := resp.Header().Get("Location")
// location := resp.Header().Get("Location")
link := &model.Link{
URL: location,
URL: downloadUrl.Dlink,
Header: http.Header{
"User-Agent": []string{headers["User-Agent"]},
"Referer": []string{"https://photo.baidu.com/"},
@@ -481,6 +476,21 @@ func (d *BaiduPhoto) uInfo() (*UInfo, error) {
return &info, nil
}
func (d *BaiduPhoto) getBDStoken() (string, error) {
var info struct {
Result struct {
Bdstoken string `json:"bdstoken"`
Token string `json:"token"`
Uk int64 `json:"uk"`
} `json:"result"`
}
_, err := d.Get("https://pan.baidu.com/api/gettemplatevariable?fields=[%22bdstoken%22,%22token%22,%22uk%22]", nil, &info)
if err != nil {
return "", err
}
return info.Result.Bdstoken, nil
}
func DecryptMd5(encryptMd5 string) string {
if _, err := hex.DecodeString(encryptMd5); err == nil {
return encryptMd5


@@ -6,6 +6,7 @@ import (
"time"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/net"
"github.com/go-resty/resty/v2"
)
@@ -26,7 +27,7 @@ func InitClient() {
NoRedirectClient.SetHeader("user-agent", UserAgent)
RestyClient = NewRestyClient()
HttpClient = NewHttpClient()
HttpClient = net.NewHttpClient()
}
func NewRestyClient() *resty.Client {
@@ -38,13 +39,3 @@ func NewRestyClient() *resty.Client {
SetTLSClientConfig(&tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify})
return client
}
func NewHttpClient() *http.Client {
return &http.Client{
Timeout: time.Hour * 48,
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
TLSClientConfig: &tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify},
},
}
}

drivers/bitqiu/driver.go

@@ -0,0 +1,767 @@
package bitqiu
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http/cookiejar"
"path"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
const (
baseURL = "https://pan.bitqiu.com"
loginURL = baseURL + "/loginServer/login"
userInfoURL = baseURL + "/user/getInfo"
listURL = baseURL + "/apiToken/cfi/fs/resources/pages"
uploadInitializeURL = baseURL + "/apiToken/cfi/fs/upload/v2/initialize"
uploadCompleteURL = baseURL + "/apiToken/cfi/fs/upload/v2/complete"
downloadURL = baseURL + "/download/getUrl"
createDirURL = baseURL + "/resource/create"
moveResourceURL = baseURL + "/resource/remove"
renameResourceURL = baseURL + "/resource/rename"
copyResourceURL = baseURL + "/apiToken/cfi/fs/async/copy"
copyManagerURL = baseURL + "/apiToken/cfi/fs/async/manager"
deleteResourceURL = baseURL + "/resource/delete"
successCode = "10200"
uploadSuccessCode = "30010"
copySubmittedCode = "10300"
orgChannel = "default|default|default"
)
const (
copyPollInterval = time.Second
copyPollMaxAttempts = 60
chunkSize = int64(1 << 20)
)
const defaultUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36"
type BitQiu struct {
model.Storage
Addition
client *resty.Client
userID string
}
func (d *BitQiu) Config() driver.Config {
return config
}
func (d *BitQiu) GetAddition() driver.Additional {
return &d.Addition
}
func (d *BitQiu) Init(ctx context.Context) error {
if d.Addition.UserPlatform == "" {
d.Addition.UserPlatform = uuid.NewString()
op.MustSaveDriverStorage(d)
}
if d.client == nil {
jar, err := cookiejar.New(nil)
if err != nil {
return err
}
d.client = base.NewRestyClient()
d.client.SetBaseURL(baseURL)
d.client.SetCookieJar(jar)
}
d.client.SetHeader("user-agent", d.userAgent())
return d.login(ctx)
}
func (d *BitQiu) Drop(ctx context.Context) error {
d.client = nil
d.userID = ""
return nil
}
func (d *BitQiu) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
parentID := d.resolveParentID(dir)
dirPath := ""
if dir != nil {
dirPath = dir.GetPath()
}
pageSize := d.pageSize()
orderType := d.orderType()
desc := d.orderDesc()
var results []model.Obj
page := 1
for {
form := map[string]string{
"parentId": parentID,
"limit": strconv.Itoa(pageSize),
"orderType": orderType,
"desc": desc,
"model": "1",
"userId": d.userID,
"currentPage": strconv.Itoa(page),
"page": strconv.Itoa(page),
"org_channel": orgChannel,
}
var resp Response[ResourcePage]
if err := d.postForm(ctx, listURL, form, &resp); err != nil {
return nil, err
}
if resp.Code != successCode {
if resp.Code == "10401" || resp.Code == "10404" {
if err := d.login(ctx); err != nil {
return nil, err
}
continue
}
return nil, fmt.Errorf("list failed: %s", resp.Message)
}
objs, err := utils.SliceConvert(resp.Data.Data, func(item Resource) (model.Obj, error) {
return item.toObject(parentID, dirPath)
})
if err != nil {
return nil, err
}
results = append(results, objs...)
if !resp.Data.HasNext || len(resp.Data.Data) == 0 {
break
}
page++
}
return results, nil
}
func (d *BitQiu) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if file.IsDir() {
return nil, errs.NotFile
}
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
form := map[string]string{
"fileIds": file.GetID(),
"org_channel": orgChannel,
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[DownloadData]
if err := d.postForm(ctx, downloadURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode:
if resp.Data.URL == "" {
return nil, fmt.Errorf("empty download url returned")
}
return &model.Link{URL: resp.Data.URL}, nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("get link failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("get link failed: retry limit reached")
}
func (d *BitQiu) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
parentID := d.resolveParentID(parentDir)
parentPath := ""
if parentDir != nil {
parentPath = parentDir.GetPath()
}
form := map[string]string{
"parentId": parentID,
"name": dirName,
"org_channel": orgChannel,
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[CreateDirData]
if err := d.postForm(ctx, createDirURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode:
newParentID := parentID
if resp.Data.ParentID != "" {
newParentID = resp.Data.ParentID
}
name := resp.Data.Name
if name == "" {
name = dirName
}
resource := Resource{
ResourceID: resp.Data.DirID,
ResourceType: 1,
Name: name,
ParentID: newParentID,
}
obj, err := resource.toObject(newParentID, parentPath)
if err != nil {
return nil, err
}
if o, ok := obj.(*Object); ok {
o.ParentID = newParentID
}
return obj, nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("create folder failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("create folder failed: retry limit reached")
}
func (d *BitQiu) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
targetParentID := d.resolveParentID(dstDir)
form := map[string]string{
"dirIds": "",
"fileIds": "",
"parentId": targetParentID,
"org_channel": orgChannel,
}
if srcObj.IsDir() {
form["dirIds"] = srcObj.GetID()
} else {
form["fileIds"] = srcObj.GetID()
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[any]
if err := d.postForm(ctx, moveResourceURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode:
dstPath := ""
if dstDir != nil {
dstPath = dstDir.GetPath()
}
if setter, ok := srcObj.(model.SetPath); ok {
setter.SetPath(path.Join(dstPath, srcObj.GetName()))
}
if o, ok := srcObj.(*Object); ok {
o.ParentID = targetParentID
}
return srcObj, nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("move failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("move failed: retry limit reached")
}
func (d *BitQiu) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
form := map[string]string{
"resourceId": srcObj.GetID(),
"name": newName,
"type": "0",
"org_channel": orgChannel,
}
if srcObj.IsDir() {
form["type"] = "1"
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[any]
if err := d.postForm(ctx, renameResourceURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode:
return updateObjectName(srcObj, newName), nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("rename failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("rename failed: retry limit reached")
}
func (d *BitQiu) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
targetParentID := d.resolveParentID(dstDir)
form := map[string]string{
"dirIds": "",
"fileIds": "",
"parentId": targetParentID,
"org_channel": orgChannel,
}
if srcObj.IsDir() {
form["dirIds"] = srcObj.GetID()
} else {
form["fileIds"] = srcObj.GetID()
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[any]
if err := d.postForm(ctx, copyResourceURL, form, &resp); err != nil {
return nil, err
}
switch resp.Code {
case successCode, copySubmittedCode:
return d.waitForCopiedObject(ctx, srcObj, dstDir)
case "10401", "10404":
if err := d.login(ctx); err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("copy failed: %s", resp.Message)
}
}
return nil, fmt.Errorf("copy failed: retry limit reached")
}
func (d *BitQiu) Remove(ctx context.Context, obj model.Obj) error {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return err
}
}
form := map[string]string{
"dirIds": "",
"fileIds": "",
"org_channel": orgChannel,
}
if obj.IsDir() {
form["dirIds"] = obj.GetID()
} else {
form["fileIds"] = obj.GetID()
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[any]
if err := d.postForm(ctx, deleteResourceURL, form, &resp); err != nil {
return err
}
switch resp.Code {
case successCode:
return nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return err
}
default:
return fmt.Errorf("remove failed: %s", resp.Message)
}
}
return fmt.Errorf("remove failed: retry limit reached")
}
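Move, Rename, Copy, and Remove all repeat the same two-attempt, relogin-on-10401/10404 loop. A hypothetical refactoring sketch (not part of the diff); each operation would pass a closure that performs its postForm call and returns resp.Code and resp.Message:
func (d *BitQiu) withRelogin(ctx context.Context, call func() (code, msg string, err error)) error {
	for attempt := 0; attempt < 2; attempt++ {
		code, msg, err := call()
		if err != nil {
			return err
		}
		switch code {
		case successCode:
			return nil
		case "10401", "10404": // session expired: log in again and retry once
			if err := d.login(ctx); err != nil {
				return err
			}
		default:
			return fmt.Errorf("request failed: %s", msg)
		}
	}
	return fmt.Errorf("retry limit reached")
}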
func (d *BitQiu) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
if d.userID == "" {
if err := d.login(ctx); err != nil {
return nil, err
}
}
up(0)
tmpFile, md5sum, err := streamPkg.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return nil, err
}
defer tmpFile.Close()
parentID := d.resolveParentID(dstDir)
parentPath := ""
if dstDir != nil {
parentPath = dstDir.GetPath()
}
form := map[string]string{
"parentId": parentID,
"name": file.GetName(),
"size": strconv.FormatInt(file.GetSize(), 10),
"hash": md5sum,
"sampleMd5": md5sum,
"org_channel": orgChannel,
}
var resp Response[json.RawMessage]
if err = d.postForm(ctx, uploadInitializeURL, form, &resp); err != nil {
return nil, err
}
if resp.Code != uploadSuccessCode {
switch resp.Code {
case successCode:
var initData UploadInitData
if err := json.Unmarshal(resp.Data, &initData); err != nil {
return nil, fmt.Errorf("parse upload init response failed: %w", err)
}
serverCode, err := d.uploadFileInChunks(ctx, tmpFile, file.GetSize(), md5sum, initData, up)
if err != nil {
return nil, err
}
obj, err := d.completeChunkUpload(ctx, initData, parentID, parentPath, file.GetName(), file.GetSize(), md5sum, serverCode)
if err != nil {
return nil, err
}
up(100)
return obj, nil
default:
return nil, fmt.Errorf("upload failed: %s", resp.Message)
}
}
var resource Resource
if err := json.Unmarshal(resp.Data, &resource); err != nil {
return nil, fmt.Errorf("parse upload response failed: %w", err)
}
obj, err := resource.toObject(parentID, parentPath)
if err != nil {
return nil, err
}
up(100)
return obj, nil
}
func (d *BitQiu) uploadFileInChunks(ctx context.Context, tmpFile model.File, size int64, md5sum string, initData UploadInitData, up driver.UpdateProgress) (string, error) {
if d.client == nil {
return "", fmt.Errorf("client not initialized")
}
if size <= 0 {
return "", fmt.Errorf("invalid file size")
}
buf := make([]byte, chunkSize)
offset := int64(0)
var finishedFlag string
for offset < size {
chunkLen := chunkSize
remaining := size - offset
if remaining < chunkLen {
chunkLen = remaining
}
reader := io.NewSectionReader(tmpFile, offset, chunkLen)
chunkBuf := buf[:chunkLen]
if _, err := io.ReadFull(reader, chunkBuf); err != nil {
return "", fmt.Errorf("read chunk failed: %w", err)
}
headers := map[string]string{
"accept": "*/*",
"content-type": "application/octet-stream",
"appid": initData.AppID,
"token": initData.Token,
"userid": strconv.FormatInt(initData.UserID, 10),
"serialnumber": initData.SerialNumber,
"hash": md5sum,
"len": strconv.FormatInt(chunkLen, 10),
"offset": strconv.FormatInt(offset, 10),
"user-agent": d.userAgent(),
}
var chunkResp ChunkUploadResponse
req := d.client.R().
SetContext(ctx).
SetHeaders(headers).
SetBody(chunkBuf).
SetResult(&chunkResp)
if _, err := req.Post(initData.UploadURL); err != nil {
return "", err
}
if chunkResp.ErrCode != 0 {
return "", fmt.Errorf("chunk upload failed with code %d", chunkResp.ErrCode)
}
finishedFlag = chunkResp.FinishedFlag
offset += chunkLen
up(float64(offset) * 100 / float64(size))
}
if finishedFlag == "" {
return "", fmt.Errorf("upload finished without server code")
}
return finishedFlag, nil
}
func (d *BitQiu) completeChunkUpload(ctx context.Context, initData UploadInitData, parentID, parentPath, name string, size int64, md5sum, serverCode string) (model.Obj, error) {
form := map[string]string{
"currentPage": "1",
"limit": "1",
"userId": strconv.FormatInt(initData.UserID, 10),
"status": "0",
"parentId": parentID,
"name": name,
"fileUid": initData.FileUID,
"fileSid": initData.FileSID,
"size": strconv.FormatInt(size, 10),
"serverCode": serverCode,
"snapTime": "",
"hash": md5sum,
"sampleMd5": md5sum,
"org_channel": orgChannel,
}
var resp Response[Resource]
if err := d.postForm(ctx, uploadCompleteURL, form, &resp); err != nil {
return nil, err
}
if resp.Code != successCode {
return nil, fmt.Errorf("complete upload failed: %s", resp.Message)
}
return resp.Data.toObject(parentID, parentPath)
}
func (d *BitQiu) login(ctx context.Context) error {
if d.client == nil {
return fmt.Errorf("client not initialized")
}
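// The password is sent as an MD5 hex digest rather than plaintext.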
form := map[string]string{
"passport": d.Username,
"password": utils.GetMD5EncodeStr(d.Password),
"remember": "0",
"captcha": "",
"org_channel": orgChannel,
}
var resp Response[LoginData]
if err := d.postForm(ctx, loginURL, form, &resp); err != nil {
return err
}
if resp.Code != successCode {
return fmt.Errorf("login failed: %s", resp.Message)
}
d.userID = strconv.FormatInt(resp.Data.UserID, 10)
return d.ensureRootFolderID(ctx)
}
func (d *BitQiu) ensureRootFolderID(ctx context.Context) error {
rootID := d.Addition.GetRootId()
if rootID != "" && rootID != "0" {
return nil
}
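// No explicit root configured; ask the user-info endpoint for the account's root directory ID.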
form := map[string]string{
"org_channel": orgChannel,
}
var resp Response[UserInfoData]
if err := d.postForm(ctx, userInfoURL, form, &resp); err != nil {
return err
}
if resp.Code != successCode {
return fmt.Errorf("get user info failed: %s", resp.Message)
}
if resp.Data.RootDirID == "" {
return fmt.Errorf("get user info failed: empty root dir id")
}
if d.Addition.RootFolderID != resp.Data.RootDirID {
d.Addition.RootFolderID = resp.Data.RootDirID
op.MustSaveDriverStorage(d)
}
return nil
}
func (d *BitQiu) postForm(ctx context.Context, url string, form map[string]string, result interface{}) error {
if d.client == nil {
return fmt.Errorf("client not initialized")
}
req := d.client.R().
SetContext(ctx).
SetHeaders(d.commonHeaders()).
SetFormData(form)
if result != nil {
req = req.SetResult(result)
}
_, err := req.Post(url)
return err
}
func (d *BitQiu) waitForCopiedObject(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
expectedName := srcObj.GetName()
expectedIsDir := srcObj.IsDir()
var lastListErr error
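// Poll the destination listing until the copied object appears, checking the async task manager for a failed task between attempts.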
for attempt := 0; attempt < copyPollMaxAttempts; attempt++ {
if attempt > 0 {
if err := waitWithContext(ctx, copyPollInterval); err != nil {
return nil, err
}
}
if err := d.checkCopyFailure(ctx); err != nil {
return nil, err
}
obj, err := d.findObjectInDir(ctx, dstDir, expectedName, expectedIsDir)
if err != nil {
lastListErr = err
continue
}
if obj != nil {
return obj, nil
}
}
if lastListErr != nil {
return nil, lastListErr
}
return nil, fmt.Errorf("copy task timed out waiting for completion")
}
func (d *BitQiu) checkCopyFailure(ctx context.Context) error {
form := map[string]string{
"org_channel": orgChannel,
}
for attempt := 0; attempt < 2; attempt++ {
var resp Response[AsyncManagerData]
if err := d.postForm(ctx, copyManagerURL, form, &resp); err != nil {
return err
}
switch resp.Code {
case successCode:
if len(resp.Data.FailTasks) > 0 {
return fmt.Errorf("copy failed: %s", resp.Data.FailTasks[0].ErrorMessage())
}
return nil
case "10401", "10404":
if err := d.login(ctx); err != nil {
return err
}
default:
return fmt.Errorf("query copy status failed: %s", resp.Message)
}
}
return fmt.Errorf("query copy status failed: retry limit reached")
}
func (d *BitQiu) findObjectInDir(ctx context.Context, dir model.Obj, name string, isDir bool) (model.Obj, error) {
objs, err := d.List(ctx, dir, model.ListArgs{})
if err != nil {
return nil, err
}
for _, obj := range objs {
if obj.GetName() == name && obj.IsDir() == isDir {
return obj, nil
}
}
return nil, nil
}
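// waitWithContext sleeps for d but returns early with ctx.Err() if the context is cancelled.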
func waitWithContext(ctx context.Context, d time.Duration) error {
timer := time.NewTimer(d)
defer timer.Stop()
select {
case <-ctx.Done():
return ctx.Err()
case <-timer.C:
return nil
}
}
func (d *BitQiu) commonHeaders() map[string]string {
headers := map[string]string{
"accept": "application/json, text/plain, */*",
"accept-language": "en-US,en;q=0.9",
"cache-control": "no-cache",
"pragma": "no-cache",
"user-platform": d.Addition.UserPlatform,
"x-kl-saas-ajax-request": "Ajax_Request",
"x-requested-with": "XMLHttpRequest",
"referer": baseURL + "/",
"origin": baseURL,
"user-agent": d.userAgent(),
}
return headers
}
func (d *BitQiu) userAgent() string {
if ua := strings.TrimSpace(d.Addition.UserAgent); ua != "" {
return ua
}
return defaultUserAgent
}
func (d *BitQiu) resolveParentID(dir model.Obj) string {
if dir != nil && dir.GetID() != "" {
return dir.GetID()
}
if root := d.Addition.GetRootId(); root != "" {
return root
}
return config.DefaultRoot
}
func (d *BitQiu) pageSize() int {
if size, err := strconv.Atoi(d.Addition.PageSize); err == nil && size > 0 {
return size
}
return 24
}
func (d *BitQiu) orderType() string {
if d.Addition.OrderType != "" {
return d.Addition.OrderType
}
return "updateTime"
}
func (d *BitQiu) orderDesc() string {
if d.Addition.OrderDesc {
return "1"
}
return "0"
}
var _ driver.Driver = (*BitQiu)(nil)
var _ driver.PutResult = (*BitQiu)(nil)

drivers/bitqiu/meta.go (new file, 28 lines)

@@ -0,0 +1,28 @@
package bitqiu
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootID
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
UserPlatform string `json:"user_platform" help:"Optional device identifier; auto-generated if empty."`
OrderType string `json:"order_type" type:"select" options:"updateTime,createTime,name,size" default:"updateTime"`
OrderDesc bool `json:"order_desc"`
PageSize string `json:"page_size" default:"24" help:"Number of entries to request per page."`
UserAgent string `json:"user_agent" default:"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36"`
}
var config = driver.Config{
Name: "BitQiu",
DefaultRoot: "0",
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &BitQiu{}
})
}

drivers/bitqiu/types.go (new file, 107 lines)

@@ -0,0 +1,107 @@
package bitqiu
import "encoding/json"
type Response[T any] struct {
Code string `json:"code"`
Message string `json:"message"`
Data T `json:"data"`
}
type LoginData struct {
UserID int64 `json:"userId"`
}
type ResourcePage struct {
CurrentPage int `json:"currentPage"`
PageSize int `json:"pageSize"`
TotalCount int `json:"totalCount"`
TotalPageCount int `json:"totalPageCount"`
Data []Resource `json:"data"`
HasNext bool `json:"hasNext"`
}
type Resource struct {
ResourceID string `json:"resourceId"`
ResourceUID string `json:"resourceUid"`
ResourceType int `json:"resourceType"`
ParentID string `json:"parentId"`
Name string `json:"name"`
ExtName string `json:"extName"`
Size *json.Number `json:"size"`
CreateTime *string `json:"createTime"`
UpdateTime *string `json:"updateTime"`
FileMD5 string `json:"fileMd5"`
}
type DownloadData struct {
URL string `json:"url"`
MD5 string `json:"md5"`
Size int64 `json:"size"`
}
type UserInfoData struct {
RootDirID string `json:"rootDirId"`
}
type CreateDirData struct {
DirID string `json:"dirId"`
Name string `json:"name"`
ParentID string `json:"parentId"`
}
type AsyncManagerData struct {
WaitTasks []AsyncTask `json:"waitTaskList"`
RunningTasks []AsyncTask `json:"runningTaskList"`
SuccessTasks []AsyncTask `json:"successTaskList"`
FailTasks []AsyncTask `json:"failTaskList"`
TaskList []AsyncTask `json:"taskList"`
}
type AsyncTask struct {
TaskID string `json:"taskId"`
Status int `json:"status"`
ErrorMsg string `json:"errorMsg"`
Message string `json:"message"`
Result *AsyncTaskInfo `json:"result"`
TargetName string `json:"targetName"`
TargetDirID string `json:"parentId"`
}
type AsyncTaskInfo struct {
Resource Resource `json:"resource"`
DirID string `json:"dirId"`
FileID string `json:"fileId"`
Name string `json:"name"`
ParentID string `json:"parentId"`
}
func (t AsyncTask) ErrorMessage() string {
if t.ErrorMsg != "" {
return t.ErrorMsg
}
if t.Message != "" {
return t.Message
}
return "unknown error"
}
type UploadInitData struct {
Name string `json:"name"`
Size int64 `json:"size"`
Token string `json:"token"`
FileUID string `json:"fileUid"`
FileSID string `json:"fileSid"`
ParentID string `json:"parentId"`
UserID int64 `json:"userId"`
SerialNumber string `json:"serialNumber"`
UploadURL string `json:"uploadUrl"`
AppID string `json:"appId"`
}
type ChunkUploadResponse struct {
ErrCode int `json:"errCode"`
Offset int64 `json:"offset"`
Finished int `json:"finished"`
FinishedFlag string `json:"finishedFlag"`
}

drivers/bitqiu/util.go (new file, 102 lines)

@@ -0,0 +1,102 @@
package bitqiu
import (
"path"
"strings"
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
)
type Object struct {
model.Object
ParentID string
}
func (r Resource) toObject(parentID, parentPath string) (model.Obj, error) {
id := r.ResourceID
if id == "" {
id = r.ResourceUID
}
obj := &Object{
Object: model.Object{
ID: id,
Name: r.Name,
IsFolder: r.ResourceType == 1,
},
ParentID: parentID,
}
if r.Size != nil {
if size, err := (*r.Size).Int64(); err == nil {
obj.Size = size
}
}
if ct := parseBitQiuTime(r.CreateTime); !ct.IsZero() {
obj.Ctime = ct
}
if mt := parseBitQiuTime(r.UpdateTime); !mt.IsZero() {
obj.Modified = mt
}
if r.FileMD5 != "" {
obj.HashInfo = utils.NewHashInfo(utils.MD5, strings.ToLower(r.FileMD5))
}
obj.SetPath(path.Join(parentPath, obj.Name))
return obj, nil
}
func parseBitQiuTime(value *string) time.Time {
if value == nil {
return time.Time{}
}
trimmed := strings.TrimSpace(*value)
if trimmed == "" {
return time.Time{}
}
if ts, err := time.ParseInLocation("2006-01-02 15:04:05", trimmed, time.Local); err == nil {
return ts
}
return time.Time{}
}
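// updateObjectName renames obj in place when its concrete type is known; otherwise it returns a fresh model.Object copy carrying the new name and path.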
func updateObjectName(obj model.Obj, newName string) model.Obj {
newPath := path.Join(parentPathOf(obj.GetPath()), newName)
switch o := obj.(type) {
case *Object:
o.Name = newName
o.Object.Name = newName
o.SetPath(newPath)
return o
case *model.Object:
o.Name = newName
o.SetPath(newPath)
return o
}
if setter, ok := obj.(model.SetPath); ok {
setter.SetPath(newPath)
}
return &model.Object{
ID: obj.GetID(),
Path: newPath,
Name: newName,
Size: obj.GetSize(),
Modified: obj.ModTime(),
Ctime: obj.CreateTime(),
IsFolder: obj.IsDir(),
HashInfo: obj.GetHash(),
}
}
func parentPathOf(p string) string {
if p == "" {
return ""
}
dir := path.Dir(p)
if dir == "." {
return ""
}
return dir
}


@@ -215,7 +215,7 @@ func (d *ChaoXing) Remove(ctx context.Context, obj model.Obj) error {
return nil
}
func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
var resp UploadDataRsp
_, err := d.request("https://noteyd.chaoxing.com/pc/files/getUploadConfig", http.MethodGet, func(req *resty.Request) {
}, &resp)
@@ -227,11 +227,11 @@ func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
}
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
filePart, err := writer.CreateFormFile("file", stream.GetName())
filePart, err := writer.CreateFormFile("file", file.GetName())
if err != nil {
return err
}
_, err = utils.CopyWithBuffer(filePart, stream)
_, err = utils.CopyWithBuffer(filePart, file)
if err != nil {
return err
}
@@ -248,7 +248,14 @@ func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
if err != nil {
return err
}
req, err := http.NewRequest("POST", "https://pan-yz.chaoxing.com/upload", body)
r := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: &driver.SimpleReaderWithSize{
Reader: body,
Size: int64(body.Len()),
},
UpdateProgress: up,
})
req, err := http.NewRequestWithContext(ctx, "POST", "https://pan-yz.chaoxing.com/upload", r)
if err != nil {
return err
}


@@ -5,11 +5,11 @@ import (
"io"
"net/http"
"path"
"strconv"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
@@ -18,6 +18,7 @@ import (
type Cloudreve struct {
model.Storage
Addition
ref *Cloudreve
}
func (d *Cloudreve) Config() driver.Config {
@@ -37,8 +38,18 @@ func (d *Cloudreve) Init(ctx context.Context) error {
return d.login()
}
func (d *Cloudreve) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*Cloudreve)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *Cloudreve) Drop(ctx context.Context) error {
d.Cookie = ""
d.ref = nil
return nil
}
@@ -134,6 +145,8 @@ func (d *Cloudreve) Put(ctx context.Context, dstDir model.Obj, stream model.File
if io.ReadCloser(stream) == http.NoBody {
return d.create(ctx, dstDir, stream)
}
// fetch the storage policy
var r DirectoryResp
err := d.request(http.MethodGet, "/directory"+dstDir.GetPath(), nil, &r)
if err != nil {
@@ -144,8 +157,10 @@ func (d *Cloudreve) Put(ctx context.Context, dstDir model.Obj, stream model.File
"size": stream.GetSize(),
"name": stream.GetName(),
"policy_id": r.Policy.Id,
"last_modified": stream.ModTime().Unix(),
"last_modified": stream.ModTime().UnixMilli(),
}
// fetch the upload session info
var u UploadInfo
err = d.request(http.MethodPut, "/file/upload", func(req *resty.Request) {
req.SetBody(uploadBody)
@@ -153,36 +168,26 @@ func (d *Cloudreve) Put(ctx context.Context, dstDir model.Obj, stream model.File
if err != nil {
return err
}
var chunkSize = u.ChunkSize
var buf []byte
var chunk int
for {
var n int
buf = make([]byte, chunkSize)
n, err = io.ReadAtLeast(stream, buf, chunkSize)
if err != nil && err != io.ErrUnexpectedEOF {
if err == io.EOF {
return nil
}
return err
}
if n == 0 {
break
}
buf = buf[:n]
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetHeader("Content-Length", strconv.Itoa(n))
req.SetBody(buf)
}, nil)
if err != nil {
break
}
chunk++
// choose the chunked-upload method based on the storage policy type
switch r.Policy.Type {
case "onedrive":
err = d.upOneDrive(ctx, stream, u, up)
case "s3":
err = d.upS3(ctx, stream, u, up)
case "remote": // 从机存储
err = d.upRemote(ctx, stream, u, up)
case "local": // 本机存储
err = d.upLocal(ctx, stream, u, up)
default:
err = errs.NotImplement
}
return err
if err != nil {
// clean up the failed upload session
_ = d.request(http.MethodDelete, "/file/upload/"+u.SessionID, nil, nil)
return err
}
return nil
}
func (d *Cloudreve) create(ctx context.Context, dir model.Obj, file model.Obj) error {


@@ -21,9 +21,12 @@ type Policy struct {
}
type UploadInfo struct {
SessionID string `json:"sessionID"`
ChunkSize int `json:"chunkSize"`
Expires int `json:"expires"`
SessionID string `json:"sessionID"`
ChunkSize int `json:"chunkSize"`
Expires int `json:"expires"`
UploadURLs []string `json:"uploadURLs"`
Credential string `json:"credential,omitempty"` // local
CompleteURL string `json:"completeURL,omitempty"` // s3
}
type DirectoryResp struct {


@@ -1,18 +1,26 @@
package cloudreve
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/pkg/cookie"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
json "github.com/json-iterator/go"
jsoniter "github.com/json-iterator/go"
)
@@ -20,17 +28,23 @@ import (
const loginPath = "/user/session"
func (d *Cloudreve) request(method string, path string, callback base.ReqCallback, out interface{}) error {
u := d.Address + "/api/v3" + path
ua := d.CustomUA
if ua == "" {
ua = base.UserAgent
func (d *Cloudreve) getUA() string {
if d.CustomUA != "" {
return d.CustomUA
}
return base.UserAgent
}
func (d *Cloudreve) request(method string, path string, callback base.ReqCallback, out interface{}) error {
if d.ref != nil {
return d.ref.request(method, path, callback, out)
}
u := d.Address + "/api/v3" + path
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Cookie": "cloudreve-session=" + d.Cookie,
"Accept": "application/json, text/plain, */*",
"User-Agent": ua,
"User-Agent": d.getUA(),
})
var r Resp
@@ -69,11 +83,11 @@ func (d *Cloudreve) request(method string, path string, callback base.ReqCallbac
}
if out != nil && r.Data != nil {
var marshal []byte
marshal, err = json.Marshal(r.Data)
marshal, err = jsoniter.Marshal(r.Data)
if err != nil {
return err
}
err = json.Unmarshal(marshal, out)
err = jsoniter.Unmarshal(marshal, out)
if err != nil {
return err
}
@@ -93,7 +107,7 @@ func (d *Cloudreve) login() error {
if err == nil {
break
}
if err != nil && err.Error() != "CAPTCHA not match." {
if err.Error() != "CAPTCHA not match." {
break
}
}
@@ -154,15 +168,11 @@ func (d *Cloudreve) GetThumb(file Object) (model.Thumbnail, error) {
if !d.Addition.EnableThumbAndFolderSize {
return model.Thumbnail{}, nil
}
ua := d.CustomUA
if ua == "" {
ua = base.UserAgent
}
req := base.NoRedirectClient.R()
req.SetHeaders(map[string]string{
"Cookie": "cloudreve-session=" + d.Cookie,
"Accept": "image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8",
"User-Agent": ua,
"User-Agent": d.getUA(),
})
resp, err := req.Execute(http.MethodGet, d.Address+"/api/v3/file/thumb/"+file.Id)
if err != nil {
@@ -172,3 +182,281 @@ func (d *Cloudreve) GetThumb(file Object) (model.Thumbnail, error) {
Thumbnail: resp.Header().Get("Location"),
}, nil
}
func (d *Cloudreve) upLocal(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := stream.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-Local] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.SetHeader("Content-Length", strconv.FormatInt(byteSize, 10))
req.SetHeader("User-Agent", d.getUA())
req.SetBody(driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
req.AddRetryCondition(func(r *resty.Response, err error) bool {
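// Retry on transport errors, HTTP error statuses, unparseable bodies, or a non-zero API code.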
if err != nil {
return true
}
if r.IsError() {
return true
}
var retryResp Resp
jErr := base.RestyClient.JSONUnmarshal(r.Body(), &retryResp)
if jErr != nil {
return true
}
if retryResp.Code != 0 {
return true
}
return false
})
}, nil)
if err != nil {
return err
}
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
return nil
}
func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
uploadUrl := u.UploadURLs[0]
credential := u.Credential
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := stream.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk),
driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Authorization", fmt.Sprint(credential))
req.Header.Set("User-Agent", d.getUA())
err = func() error {
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != 200 {
return errors.New(res.Status)
}
body, err := io.ReadAll(res.Body)
if err != nil {
return err
}
var up Resp
err = json.Unmarshal(body, &up)
if err != nil {
return err
}
if up.Code != 0 {
return errors.New(up.Msg)
}
return nil
}()
if err == nil {
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
} else {
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors, error: %s", maxRetries, err)
}
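// Exponential backoff between attempts: 2s, 4s, 8s.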
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-Remote] server errors while uploading, retrying after %v...", backoff)
time.Sleep(backoff)
}
}
return nil
}
func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
uploadUrl := u.UploadURLs[0]
var finish int64 = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := stream.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, stream.GetSize()))
req.Header.Set("User-Agent", d.getUA())
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
switch {
case res.StatusCode >= 500 && res.StatusCode <= 504:
retryCount++
if retryCount > maxRetries {
res.Body.Close()
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-OneDrive] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
data, _ := io.ReadAll(res.Body)
res.Body.Close()
return errors.New(string(data))
default:
res.Body.Close()
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
}
}
// notify the server via callback once the upload succeeds
return d.request(http.MethodPost, "/callback/onedrive/finish/"+u.SessionID, func(req *resty.Request) {
req.SetBody("{}")
}, nil)
}
func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
var etags []string
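// Collect each part's ETag; S3 requires them to complete the multipart upload.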
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := stream.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", u.UploadURLs[chunk],
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
etag := res.Header.Get("ETag")
res.Body.Close()
switch {
case res.StatusCode != 200:
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-S3] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case etag == "":
return errors.New("faild to get ETag from header")
default:
retryCount = 0
etags = append(etags, etag)
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
}
// s3LikeFinishUpload
// https://github.com/cloudreve/frontend/blob/b485bf297974cbe4834d2e8e744ae7b7e5b2ad39/src/component/Uploader/core/api/index.ts#L204-L252
bodyBuilder := &strings.Builder{}
bodyBuilder.WriteString("<CompleteMultipartUpload>")
for i, etag := range etags {
bodyBuilder.WriteString(fmt.Sprintf(
`<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>`,
i+1, // PartNumber starts at 1
etag,
))
}
bodyBuilder.WriteString("</CompleteMultipartUpload>")
req, err := http.NewRequest(
"POST",
u.CompleteURL,
strings.NewReader(bodyBuilder.String()),
)
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/xml")
req.Header.Set("User-Agent", d.getUA())
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
body, _ := io.ReadAll(res.Body)
return fmt.Errorf("up status: %d, error: %s", res.StatusCode, string(body))
}
// notify the server via callback once the upload succeeds
err = d.request(http.MethodGet, "/callback/s3/"+u.SessionID, nil, nil)
if err != nil {
return err
}
return nil
}


@@ -0,0 +1,305 @@
package cloudreve_v4
import (
"context"
"errors"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type CloudreveV4 struct {
model.Storage
Addition
ref *CloudreveV4
}
func (d *CloudreveV4) Config() driver.Config {
if d.ref != nil {
return d.ref.Config()
}
if d.EnableVersionUpload {
config.NoOverwriteUpload = false
}
return config
}
func (d *CloudreveV4) GetAddition() driver.Additional {
return &d.Addition
}
func (d *CloudreveV4) Init(ctx context.Context) error {
// removing trailing slash
d.Address = strings.TrimSuffix(d.Address, "/")
op.MustSaveDriverStorage(d)
if d.ref != nil {
return nil
}
if d.AccessToken == "" && d.RefreshToken != "" {
return d.refreshToken()
}
if d.Username != "" {
return d.login()
}
return nil
}
func (d *CloudreveV4) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*CloudreveV4)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *CloudreveV4) Drop(ctx context.Context) error {
d.ref = nil
return nil
}
func (d *CloudreveV4) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
const pageSize int = 100
var f []File
var r FileResp
params := map[string]string{
"page_size": strconv.Itoa(pageSize),
"uri": dir.GetPath(),
"order_by": d.OrderBy,
"order_direction": d.OrderDirection,
"page": "0",
}
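// Page through /file until the server stops returning a next_page_token or sends a short page.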
for {
err := d.request(http.MethodGet, "/file", func(req *resty.Request) {
req.SetQueryParams(params)
}, &r)
if err != nil {
return nil, err
}
f = append(f, r.Files...)
if r.Pagination.NextToken == "" || len(r.Files) < pageSize {
break
}
params["next_page_token"] = r.Pagination.NextToken
}
return utils.SliceConvert(f, func(src File) (model.Obj, error) {
if d.EnableFolderSize && src.Type == 1 {
var ds FolderSummaryResp
err := d.request(http.MethodGet, "/file/info", func(req *resty.Request) {
req.SetQueryParam("uri", src.Path)
req.SetQueryParam("folder_summary", "true")
}, &ds)
if err == nil && ds.FolderSummary.Size > 0 {
src.Size = ds.FolderSummary.Size
}
}
var thumb model.Thumbnail
if d.EnableThumb && src.Type == 0 {
var t FileThumbResp
err := d.request(http.MethodGet, "/file/thumb", func(req *resty.Request) {
req.SetQueryParam("uri", src.Path)
}, &t)
if err == nil && t.URL != "" {
thumb = model.Thumbnail{
Thumbnail: t.URL,
}
}
}
return &model.ObjThumb{
Object: model.Object{
ID: src.ID,
Path: src.Path,
Name: src.Name,
Size: src.Size,
Modified: src.UpdatedAt,
Ctime: src.CreatedAt,
IsFolder: src.Type == 1,
},
Thumbnail: thumb,
}, nil
})
}
func (d *CloudreveV4) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var url FileUrlResp
err := d.request(http.MethodPost, "/file/url", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{file.GetPath()},
"download": true,
})
}, &url)
if err != nil {
return nil, err
}
if len(url.Urls) == 0 {
return nil, errors.New("server returns no url")
}
exp := time.Until(url.Expires)
return &model.Link{
URL: url.Urls[0].URL,
Expiration: &exp,
}, nil
}
func (d *CloudreveV4) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(base.Json{
"type": "folder",
"uri": parentDir.GetPath() + "/" + dirName,
"error_on_conflict": true,
})
}, nil)
}
func (d *CloudreveV4) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.request(http.MethodPost, "/file/move", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{srcObj.GetPath()},
"dst": dstDir.GetPath(),
"copy": false,
})
}, nil)
}
func (d *CloudreveV4) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(base.Json{
"new_name": newName,
"uri": srcObj.GetPath(),
})
}, nil)
}
func (d *CloudreveV4) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.request(http.MethodPost, "/file/move", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{srcObj.GetPath()},
"dst": dstDir.GetPath(),
"copy": true,
})
}, nil)
}
func (d *CloudreveV4) Remove(ctx context.Context, obj model.Obj) error {
return d.request(http.MethodDelete, "/file", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{obj.GetPath()},
"unlink": false,
"skip_soft_delete": true,
})
}, nil)
}
func (d *CloudreveV4) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
if file.GetSize() == 0 {
// create empty files through the create API to avoid a stuck upload session
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(base.Json{
"type": "file",
"uri": dstDir.GetPath() + "/" + file.GetName(),
"error_on_conflict": true,
})
}, nil)
}
var p StoragePolicy
var r FileResp
var u FileUploadResp
var err error
params := map[string]string{
"page_size": "10",
"uri": dstDir.GetPath(),
"order_by": "created_at",
"order_direction": "asc",
"page": "0",
}
err = d.request(http.MethodGet, "/file", func(req *resty.Request) {
req.SetQueryParams(params)
}, &r)
if err != nil {
return err
}
p = r.StoragePolicy
body := base.Json{
"uri": dstDir.GetPath() + "/" + file.GetName(),
"size": file.GetSize(),
"policy_id": p.ID,
"last_modified": file.ModTime().UnixMilli(),
"mime_type": "",
}
if d.EnableVersionUpload {
body["entity_type"] = "version"
}
err = d.request(http.MethodPut, "/file/upload", func(req *resty.Request) {
req.SetBody(body)
}, &u)
if err != nil {
return err
}
if u.StoragePolicy.Relay {
err = d.upLocal(ctx, file, u, up)
} else {
switch u.StoragePolicy.Type {
case "local":
err = d.upLocal(ctx, file, u, up)
case "remote":
err = d.upRemote(ctx, file, u, up)
case "onedrive":
err = d.upOneDrive(ctx, file, u, up)
case "s3":
err = d.upS3(ctx, file, u, up)
default:
return errs.NotImplement
}
}
if err != nil {
// clean up the failed upload session
_ = d.request(http.MethodDelete, "/file/upload", func(req *resty.Request) {
req.SetBody(base.Json{
"id": u.SessionID,
"uri": u.URI,
})
}, nil)
return err
}
return nil
}
func (d *CloudreveV4) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *CloudreveV4) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *CloudreveV4) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *CloudreveV4) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *CloudreveV4) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*CloudreveV4)(nil)


@@ -0,0 +1,44 @@
package cloudreve_v4
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootPath
// driver.RootID
// define other
Address string `json:"address" required:"true"`
Username string `json:"username"`
Password string `json:"password"`
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
CustomUA string `json:"custom_ua"`
EnableFolderSize bool `json:"enable_folder_size"`
EnableThumb bool `json:"enable_thumb"`
EnableVersionUpload bool `json:"enable_version_upload"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at" default:"name" required:"true"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc" required:"true"`
}
var config = driver.Config{
Name: "Cloudreve V4",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "cloudreve://my",
CheckStatus: true,
Alert: "",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &CloudreveV4{}
})
}


@@ -0,0 +1,164 @@
package cloudreve_v4
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type Object struct {
model.Object
StoragePolicy StoragePolicy
}
type Resp struct {
Code int `json:"code"`
Msg string `json:"msg"`
Data any `json:"data"`
}
type BasicConfigResp struct {
InstanceID string `json:"instance_id"`
// Title string `json:"title"`
// Themes string `json:"themes"`
// DefaultTheme string `json:"default_theme"`
User struct {
ID string `json:"id"`
// Nickname string `json:"nickname"`
// CreatedAt time.Time `json:"created_at"`
// Anonymous bool `json:"anonymous"`
Group struct {
ID string `json:"id"`
Name string `json:"name"`
Permission string `json:"permission"`
} `json:"group"`
} `json:"user"`
// Logo string `json:"logo"`
// LogoLight string `json:"logo_light"`
// CaptchaReCaptchaKey string `json:"captcha_ReCaptchaKey"`
CaptchaType string `json:"captcha_type"` // support 'normal' only
// AppPromotion bool `json:"app_promotion"`
}
type SiteLoginConfigResp struct {
LoginCaptcha bool `json:"login_captcha"`
Authn bool `json:"authn"`
}
type PrepareLoginResp struct {
WebauthnEnabled bool `json:"webauthn_enabled"`
PasswordEnabled bool `json:"password_enabled"`
}
type CaptchaResp struct {
Image string `json:"image"`
Ticket string `json:"ticket"`
}
type Token struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
AccessExpires time.Time `json:"access_expires"`
RefreshExpires time.Time `json:"refresh_expires"`
}
type TokenResponse struct {
User struct {
ID string `json:"id"`
// Email string `json:"email"`
// Nickname string `json:"nickname"`
Status string `json:"status"`
// CreatedAt time.Time `json:"created_at"`
Group struct {
ID string `json:"id"`
Name string `json:"name"`
Permission string `json:"permission"`
// DirectLinkBatchSize int `json:"direct_link_batch_size"`
// TrashRetention int `json:"trash_retention"`
} `json:"group"`
// Language string `json:"language"`
} `json:"user"`
Token Token `json:"token"`
}
type File struct {
Type int `json:"type"` // 0: file, 1: folder
ID string `json:"id"`
Name string `json:"name"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
Size int64 `json:"size"`
Metadata interface{} `json:"metadata"`
Path string `json:"path"`
Capability string `json:"capability"`
Owned bool `json:"owned"`
PrimaryEntity string `json:"primary_entity"`
}
type StoragePolicy struct {
ID string `json:"id"`
Name string `json:"name"`
Type string `json:"type"`
MaxSize int64 `json:"max_size"`
Relay bool `json:"relay,omitempty"`
}
type Pagination struct {
Page int `json:"page"`
PageSize int `json:"page_size"`
IsCursor bool `json:"is_cursor"`
NextToken string `json:"next_token,omitempty"`
}
type Props struct {
Capability string `json:"capability"`
MaxPageSize int `json:"max_page_size"`
OrderByOptions []string `json:"order_by_options"`
OrderDirectionOptions []string `json:"order_direction_options"`
}
type FileResp struct {
Files []File `json:"files"`
Parent File `json:"parent"`
Pagination Pagination `json:"pagination"`
Props Props `json:"props"`
ContextHint string `json:"context_hint"`
MixedType bool `json:"mixed_type"`
StoragePolicy StoragePolicy `json:"storage_policy"`
}
type FileUrlResp struct {
Urls []struct {
URL string `json:"url"`
} `json:"urls"`
Expires time.Time `json:"expires"`
}
type FileUploadResp struct {
// UploadID string `json:"upload_id"`
SessionID string `json:"session_id"`
ChunkSize int64 `json:"chunk_size"`
Expires int64 `json:"expires"`
StoragePolicy StoragePolicy `json:"storage_policy"`
URI string `json:"uri"`
CompleteURL string `json:"completeURL,omitempty"` // for S3-like
CallbackSecret string `json:"callback_secret,omitempty"` // for S3-like, OneDrive
UploadUrls []string `json:"upload_urls,omitempty"` // for not-local
Credential string `json:"credential,omitempty"` // for local
}
type FileThumbResp struct {
URL string `json:"url"`
Expires time.Time `json:"expires"`
}
type FolderSummaryResp struct {
File
FolderSummary struct {
Size int64 `json:"size"`
Files int64 `json:"files"`
Folders int64 `json:"folders"`
Completed bool `json:"completed"`
CalculatedAt time.Time `json:"calculated_at"`
} `json:"folder_summary"`
}


@@ -0,0 +1,476 @@
package cloudreve_v4
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
)
// helpers not defined in the Driver interface
func (d *CloudreveV4) getUA() string {
if d.CustomUA != "" {
return d.CustomUA
}
return base.UserAgent
}
func (d *CloudreveV4) request(method string, path string, callback base.ReqCallback, out any) error {
if d.ref != nil {
return d.ref.request(method, path, callback, out)
}
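// All V4 endpoints live under /api/v4 and wrap their payload in a {code, msg, data} envelope.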
u := d.Address + "/api/v4" + path
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"User-Agent": d.getUA(),
})
if d.AccessToken != "" {
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
}
var r Resp
req.SetResult(&r)
if callback != nil {
callback(req)
}
resp, err := req.Execute(method, u)
if err != nil {
return err
}
if !resp.IsSuccess() {
return errors.New(resp.String())
}
if r.Code != 0 {
if r.Code == 401 && d.RefreshToken != "" && path != "/session/token/refresh" {
// try to refresh token
err = d.refreshToken()
if err != nil {
return err
}
return d.request(method, path, callback, out)
}
return errors.New(r.Msg)
}
if out != nil && r.Data != nil {
var marshal []byte
marshal, err = json.Marshal(r.Data)
if err != nil {
return err
}
err = json.Unmarshal(marshal, out)
if err != nil {
return err
}
}
return nil
}
func (d *CloudreveV4) login() error {
var siteConfig SiteLoginConfigResp
err := d.request(http.MethodGet, "/site/config/login", nil, &siteConfig)
if err != nil {
return err
}
if !siteConfig.Authn {
return errors.New("authn not support")
}
var prepareLogin PrepareLoginResp
err = d.request(http.MethodGet, "/session/prepare?email="+d.Addition.Username, nil, &prepareLogin)
if err != nil {
return err
}
if !prepareLogin.PasswordEnabled {
return errors.New("password not enabled")
}
if prepareLogin.WebauthnEnabled {
return errors.New("webauthn not support")
}
for range 5 {
err = d.doLogin(siteConfig.LoginCaptcha)
if err == nil {
break
}
if err.Error() != "CAPTCHA not match." {
break
}
}
return err
}
func (d *CloudreveV4) doLogin(needCaptcha bool) error {
var err error
loginBody := base.Json{
"email": d.Username,
"password": d.Password,
}
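// If the site enforces a captcha, fetch it and solve it via the configured OCR endpoint before submitting credentials.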
if needCaptcha {
var config BasicConfigResp
err = d.request(http.MethodGet, "/site/config/basic", nil, &config)
if err != nil {
return err
}
if config.CaptchaType != "normal" {
return fmt.Errorf("captcha type %s not support", config.CaptchaType)
}
var captcha CaptchaResp
err = d.request(http.MethodGet, "/site/captcha", nil, &captcha)
if err != nil {
return err
}
if !strings.HasPrefix(captcha.Image, "data:image/png;base64,") {
return errors.New("can not get captcha")
}
loginBody["ticket"] = captcha.Ticket
i := strings.Index(captcha.Image, ",")
dec := base64.NewDecoder(base64.StdEncoding, strings.NewReader(captcha.Image[i+1:]))
vRes, err := base.RestyClient.R().SetMultipartField(
"image", "validateCode.png", "image/png", dec).
Post(setting.GetStr(conf.OcrApi))
if err != nil {
return err
}
if jsoniter.Get(vRes.Body(), "status").ToInt() != 200 {
return errors.New("ocr error:" + jsoniter.Get(vRes.Body(), "msg").ToString())
}
captchaCode := jsoniter.Get(vRes.Body(), "result").ToString()
if captchaCode == "" {
return errors.New("ocr error: empty result")
}
loginBody["captcha"] = captchaCode
}
var token TokenResponse
err = d.request(http.MethodPost, "/session/token", func(req *resty.Request) {
req.SetBody(loginBody)
}, &token)
if err != nil {
return err
}
d.AccessToken, d.RefreshToken = token.Token.AccessToken, token.Token.RefreshToken
op.MustSaveDriverStorage(d)
return nil
}
func (d *CloudreveV4) refreshToken() error {
var token Token
if d.RefreshToken == "" {
if d.Username != "" {
err := d.login()
if err != nil {
return fmt.Errorf("cannot login to get refresh token, error: %s", err)
}
}
return nil
}
err := d.request(http.MethodPost, "/session/token/refresh", func(req *resty.Request) {
req.SetBody(base.Json{
"refresh_token": d.RefreshToken,
})
}, &token)
if err != nil {
return err
}
d.AccessToken, d.RefreshToken = token.AccessToken, token.RefreshToken
op.MustSaveDriverStorage(d)
return nil
}
func (d *CloudreveV4) upLocal(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
if DEFAULT == 0 {
// support relay
DEFAULT = file.GetSize()
}
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-Local] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.SetHeader("Content-Length", strconv.FormatInt(byteSize, 10))
req.SetBody(driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
req.AddRetryCondition(func(r *resty.Response, err error) bool {
if err != nil {
return true
}
if r.IsError() {
return true
}
var retryResp Resp
jErr := base.RestyClient.JSONUnmarshal(r.Body(), &retryResp)
if jErr != nil {
return true
}
if retryResp.Code != 0 {
return true
}
return false
})
}, nil)
if err != nil {
return err
}
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
chunk++
}
return nil
}
func (d *CloudreveV4) upRemote(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
uploadUrl := u.UploadUrls[0]
credential := u.Credential
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk),
driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Authorization", fmt.Sprint(credential))
req.Header.Set("User-Agent", d.getUA())
err = func() error {
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != 200 {
return errors.New(res.Status)
}
body, err := io.ReadAll(res.Body)
if err != nil {
return err
}
var up Resp
err = json.Unmarshal(body, &up)
if err != nil {
return err
}
if up.Code != 0 {
return errors.New(up.Msg)
}
return nil
}()
if err == nil {
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
chunk++
} else {
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors, error: %s", maxRetries, err)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-Remote] server errors while uploading, retrying after %v...", backoff)
time.Sleep(backoff)
}
}
return nil
}
func (d *CloudreveV4) upOneDrive(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
uploadUrl := u.UploadUrls[0]
var finish int64 = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest(http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, file.GetSize()))
req.Header.Set("User-Agent", d.getUA())
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
switch {
case res.StatusCode >= 500 && res.StatusCode <= 504:
retryCount++
if retryCount > maxRetries {
res.Body.Close()
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[CloudreveV4-OneDrive] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
data, _ := io.ReadAll(res.Body)
res.Body.Close()
return errors.New(string(data))
default:
res.Body.Close()
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
}
}
// notify the server via callback once the upload succeeds
return d.request(http.MethodPost, "/callback/onedrive/"+u.SessionID+"/"+u.CallbackSecret, func(req *resty.Request) {
req.SetBody("{}")
}, nil)
}
func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
var etags []string
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest(http.MethodPut, u.UploadUrls[chunk],
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
etag := res.Header.Get("ETag")
res.Body.Close()
switch {
case res.StatusCode != 200:
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors", maxRetries)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("server error %d, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case etag == "":
return errors.New("faild to get ETag from header")
default:
retryCount = 0
etags = append(etags, etag)
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
chunk++
}
}
// s3LikeFinishUpload
bodyBuilder := &strings.Builder{}
bodyBuilder.WriteString("<CompleteMultipartUpload>")
for i, etag := range etags {
bodyBuilder.WriteString(fmt.Sprintf(
`<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>`,
i+1, // PartNumber starts at 1
etag,
))
}
bodyBuilder.WriteString("</CompleteMultipartUpload>")
req, err := http.NewRequest(
"POST",
u.CompleteURL,
strings.NewReader(bodyBuilder.String()),
)
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/xml")
req.Header.Set("User-Agent", d.getUA())
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
body, _ := io.ReadAll(res.Body)
return fmt.Errorf("up status: %d, error: %s", res.StatusCode, string(body))
}
// notify the server via callback once the upload succeeds
return d.request(http.MethodPost, "/callback/s3/"+u.SessionID+"/"+u.CallbackSecret, func(req *resty.Request) {
req.SetBody("{}")
}, nil)
}


@@ -13,6 +13,7 @@ import (
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/sign"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
@@ -160,7 +161,11 @@ func (d *Crypt) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([
// discarding hash as it's encrypted
}
if d.Thumbnail && thumb == "" {
thumb = utils.EncodePath(common.GetApiUrl(nil)+stdpath.Join("/d", args.ReqPath, ".thumbnails", name+".webp"), true)
thumbPath := stdpath.Join(args.ReqPath, ".thumbnails", name+".webp")
thumb = fmt.Sprintf("%s/d%s?sign=%s",
common.GetApiUrl(common.GetHttpReq(ctx)),
utils.EncodePath(thumbPath, true),
sign.Sign(thumbPath))
}
if !ok && !d.Thumbnail {
result = append(result, &objRes)
@@ -258,19 +263,13 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
}
rrc := remoteLink.RangeReadCloser
if len(remoteLink.URL) > 0 {
rangedRemoteLink := &model.Link{
URL: remoteLink.URL,
Header: remoteLink.Header,
}
var converted, err = stream.GetRangeReadCloserFromLink(remoteFileSize, rangedRemoteLink)
var converted, err = stream.GetRangeReadCloserFromLink(remoteFileSize, remoteLink)
if err != nil {
return nil, err
}
rrc = converted
}
if rrc != nil {
//remoteRangeReader, err :=
remoteReader, err := rrc.RangeRead(ctx, http_range.Range{Start: underlyingOffset, Length: length})
remoteClosers.AddClosers(rrc.GetClosers())
if err != nil {
@@ -283,7 +282,6 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
if err != nil {
return nil, err
}
//remoteClosers.Add(remoteLink.MFile)
// reuse the same MFile and close it at the end.
remoteClosers.Add(remoteLink.MFile)
return io.NopCloser(remoteLink.MFile), nil
@@ -302,7 +300,6 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
resultRangeReadCloser := &model.RangeReadCloser{RangeReader: resultRangeReader, Closers: remoteClosers}
resultLink := &model.Link{
Header: remoteLink.Header,
RangeReadCloser: resultRangeReadCloser,
Expiration: remoteLink.Expiration,
}

drivers/doubao/driver.go (new file, 271 lines)

@@ -0,0 +1,271 @@
package doubao
import (
"context"
"errors"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Doubao struct {
model.Storage
Addition
*UploadToken
UserId string
uploadThread int
}
func (d *Doubao) Config() driver.Config {
return config
}
func (d *Doubao) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Doubao) Init(ctx context.Context) error {
// TODO login / refresh token
//op.MustSaveDriverStorage(d)
uploadThread, err := strconv.Atoi(d.UploadThread)
if err != nil || uploadThread < 1 {
d.uploadThread, d.UploadThread = 3, "3" // Set default value
} else {
d.uploadThread = uploadThread
}
if d.UserId == "" {
userInfo, err := d.getUserInfo()
if err != nil {
return err
}
d.UserId = strconv.FormatInt(userInfo.UserID, 10)
}
if d.UploadToken == nil {
uploadToken, err := d.initUploadToken()
if err != nil {
return err
}
d.UploadToken = uploadToken
}
return nil
}
func (d *Doubao) Drop(ctx context.Context) error {
return nil
}
func (d *Doubao) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var files []model.Obj
fileList, err := d.getFiles(dir.GetID(), "")
if err != nil {
return nil, err
}
for _, child := range fileList {
files = append(files, &Object{
Object: model.Object{
ID: child.ID,
Path: child.ParentID,
Name: child.Name,
Size: child.Size,
Modified: time.Unix(child.UpdateTime, 0),
Ctime: time.Unix(child.CreateTime, 0),
IsFolder: child.NodeType == 1,
},
Key: child.Key,
NodeType: child.NodeType,
})
}
return files, nil
}
func (d *Doubao) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var downloadUrl string
if u, ok := file.(*Object); ok {
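// Pick the download endpoint according to the configured DownloadApi and, for media files, the node type.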
switch d.DownloadApi {
case "get_download_info":
var r GetDownloadInfoResp
_, err := d.request("/samantha/aispace/get_download_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"requests": []base.Json{{"node_id": file.GetID()}},
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.DownloadInfos[0].MainURL
case "get_file_url":
switch u.NodeType {
case VideoType, AudioType:
var r GetVideoFileUrlResp
_, err := d.request("/samantha/media/get_play_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"key": u.Key,
"node_id": file.GetID(),
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.OriginalMediaInfo.MainURL
default:
var r GetFileUrlResp
_, err := d.request("/alice/message/get_file_url", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{u.Key},
"type": FileNodeType[u.NodeType],
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.FileUrls[0].MainURL
}
default:
return nil, errs.NotImplement
}
// build a standard Content-Disposition header
contentDisposition := generateContentDisposition(u.Name)
return &model.Link{
URL: downloadUrl,
Header: http.Header{
"User-Agent": []string{UserAgent},
"Content-Disposition": []string{contentDisposition},
},
}, nil
}
return nil, errors.New("can't convert obj to URL")
}
func (d *Doubao) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
var r UploadNodeResp
_, err := d.request("/samantha/aispace/upload_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{
"local_id": uuid.New().String(),
"name": dirName,
"parent_id": parentDir.GetID(),
"node_type": 1,
},
},
})
}, &r)
return err
}
func (d *Doubao) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
var r UploadNodeResp
_, err := d.request("/samantha/aispace/move_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{"id": srcObj.GetID()},
},
"current_parent_id": srcObj.GetPath(),
"target_parent_id": dstDir.GetID(),
})
}, &r)
return err
}
func (d *Doubao) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
var r BaseResp
_, err := d.request("/samantha/aispace/rename_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_id": srcObj.GetID(),
"node_name": newName,
})
}, &r)
return err
}
func (d *Doubao) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO copy obj, optional
return nil, errs.NotImplement
}
func (d *Doubao) Remove(ctx context.Context, obj model.Obj) error {
var r BaseResp
_, err := d.request("/samantha/aispace/delete_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{"node_list": []base.Json{{"id": obj.GetID()}}})
}, &r)
return err
}
func (d *Doubao) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
// determine the data type from the MIME type
mimetype := file.GetMimetype()
dataType := FileDataType
switch {
case strings.HasPrefix(mimetype, "video/"):
dataType = VideoDataType
case strings.HasPrefix(mimetype, "audio/"):
dataType = VideoDataType // audio is handled the same way as video
case strings.HasPrefix(mimetype, "image/"):
dataType = ImgDataType
}
// fetch the upload configuration
uploadConfig := UploadConfig{}
if err := d.getUploadConfig(&uploadConfig, dataType, file); err != nil {
return nil, err
}
// choose the upload mode by file size
if file.GetSize() <= 1*utils.MB { // files of 1MB or less use the simple upload
return d.Upload(&uploadConfig, dstDir, file, up, dataType)
}
// larger files use multipart upload
return d.UploadByMultipart(ctx, &uploadConfig, file.GetSize(), dstDir, file, up, dataType)
}
func (d *Doubao) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *Doubao) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Doubao)(nil)
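
For reference, Put above routes uploads by the stream's MIME prefix and switches to multipart for anything over 1 MB. A minimal sketch of that routing decision, assuming only the standard library (pickDataType is a hypothetical helper, not part of the driver's API):

package main

import (
	"fmt"
	"strings"
)

// pickDataType mirrors the MIME-prefix switch in Doubao.Put
// (hypothetical helper; the driver inlines this logic).
func pickDataType(mimetype string) string {
	switch {
	case strings.HasPrefix(mimetype, "video/"), strings.HasPrefix(mimetype, "audio/"):
		return "video" // audio rides the video pipeline
	case strings.HasPrefix(mimetype, "image/"):
		return "image"
	default:
		return "file"
	}
}

func main() {
	const mb = int64(1 << 20)
	files := []struct {
		mime string
		size int64
	}{
		{"video/mp4", 300 * mb},
		{"image/png", 512 << 10},
	}
	for _, f := range files {
		mode := "multipart"
		if f.size <= 1*mb { // the 1 MB threshold used by Put
			mode = "simple"
		}
		fmt.Printf("%s -> %s upload as %q\n", f.mime, mode, pickDataType(f.mime))
	}
}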

drivers/doubao/meta.go Normal file (36 lines added)

@@ -0,0 +1,36 @@
package doubao
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
// driver.RootPath
driver.RootID
// define other
Cookie string `json:"cookie" type:"text"`
UploadThread string `json:"upload_thread" default:"3"`
DownloadApi string `json:"download_api" type:"select" options:"get_file_url,get_download_info" default:"get_file_url"`
}
var config = driver.Config{
Name: "Doubao",
LocalSort: true,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Doubao{}
})
}
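
A note on the multipart path this driver registers: UploadByMultipart (util.go below) derives its part count with ceiling division and shortens only the final part. A standalone check of that arithmetic, assuming the 5MB DefaultChunkSize from util.go:

package main

import "fmt"

const defaultChunkSize = int64(5 * 1024 * 1024) // mirrors DefaultChunkSize (5MB)

func main() {
	fileSize := int64(23*1024*1024 + 100) // arbitrary example size
	// ceiling division, exactly as UploadByMultipart computes totalParts
	totalParts := (fileSize + defaultChunkSize - 1) / defaultChunkSize
	lastPartSize := fileSize - (totalParts-1)*defaultChunkSize
	fmt.Printf("%d parts; the final part carries %d bytes\n", totalParts, lastPartSize)
}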

drivers/doubao/types.go Normal file (415 lines added)

@@ -0,0 +1,415 @@
package doubao
import (
"encoding/json"
"fmt"
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type BaseResp struct {
Code int `json:"code"`
Msg string `json:"msg"`
}
type NodeInfoResp struct {
BaseResp
Data struct {
NodeInfo File `json:"node_info"`
Children []File `json:"children"`
NextCursor string `json:"next_cursor"`
HasMore bool `json:"has_more"`
} `json:"data"`
}
type File struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"` // 0: file, 1: folder
Size int64 `json:"size"`
Source int `json:"source"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int64 `json:"create_time"`
UpdateTime int64 `json:"update_time"`
}
type GetDownloadInfoResp struct {
BaseResp
Data struct {
DownloadInfos []struct {
NodeID string `json:"node_id"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"download_infos"`
} `json:"data"`
}
type GetFileUrlResp struct {
BaseResp
Data struct {
FileUrls []struct {
URI string `json:"uri"`
MainURL string `json:"main_url"`
BackURL string `json:"back_url"`
} `json:"file_urls"`
} `json:"data"`
}
type GetVideoFileUrlResp struct {
BaseResp
Data struct {
MediaType string `json:"media_type"`
MediaInfo []struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"media_info"`
OriginalMediaInfo struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"original_media_info"`
PosterURL string `json:"poster_url"`
PlayableStatus int `json:"playable_status"`
} `json:"data"`
}
type UploadNodeResp struct {
BaseResp
Data struct {
NodeList []struct {
LocalID string `json:"local_id"`
ID string `json:"id"`
ParentID string `json:"parent_id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"` // 0: file, 1: folder
} `json:"node_list"`
} `json:"data"`
}
type Object struct {
model.Object
Key string
NodeType int
}
type UserInfoResp struct {
Data UserInfo `json:"data"`
Message string `json:"message"`
}
type AppUserInfo struct {
BuiAuditInfo string `json:"bui_audit_info"`
}
type AuditInfo struct {
}
type Details struct {
}
type BuiAuditInfo struct {
AuditInfo AuditInfo `json:"audit_info"`
IsAuditing bool `json:"is_auditing"`
AuditStatus int `json:"audit_status"`
LastUpdateTime int `json:"last_update_time"`
UnpassReason string `json:"unpass_reason"`
Details Details `json:"details"`
}
type Connects struct {
Platform string `json:"platform"`
ProfileImageURL string `json:"profile_image_url"`
ExpiredTime int `json:"expired_time"`
ExpiresIn int `json:"expires_in"`
PlatformScreenName string `json:"platform_screen_name"`
UserID int64 `json:"user_id"`
PlatformUID string `json:"platform_uid"`
SecPlatformUID string `json:"sec_platform_uid"`
PlatformAppID int `json:"platform_app_id"`
ModifyTime int `json:"modify_time"`
AccessToken string `json:"access_token"`
OpenID string `json:"open_id"`
}
type OperStaffRelationInfo struct {
HasPassword int `json:"has_password"`
Mobile string `json:"mobile"`
SecOperStaffUserID string `json:"sec_oper_staff_user_id"`
RelationMobileCountryCode int `json:"relation_mobile_country_code"`
}
type UserInfo struct {
AppID int `json:"app_id"`
AppUserInfo AppUserInfo `json:"app_user_info"`
AvatarURL string `json:"avatar_url"`
BgImgURL string `json:"bg_img_url"`
BuiAuditInfo BuiAuditInfo `json:"bui_audit_info"`
CanBeFoundByPhone int `json:"can_be_found_by_phone"`
Connects []Connects `json:"connects"`
CountryCode int `json:"country_code"`
Description string `json:"description"`
DeviceID int `json:"device_id"`
Email string `json:"email"`
EmailCollected bool `json:"email_collected"`
Gender int `json:"gender"`
HasPassword int `json:"has_password"`
HmRegion int `json:"hm_region"`
IsBlocked int `json:"is_blocked"`
IsBlocking int `json:"is_blocking"`
IsRecommendAllowed int `json:"is_recommend_allowed"`
IsVisitorAccount bool `json:"is_visitor_account"`
Mobile string `json:"mobile"`
Name string `json:"name"`
NeedCheckBindStatus bool `json:"need_check_bind_status"`
OdinUserType int `json:"odin_user_type"`
OperStaffRelationInfo OperStaffRelationInfo `json:"oper_staff_relation_info"`
PhoneCollected bool `json:"phone_collected"`
RecommendHintMessage string `json:"recommend_hint_message"`
ScreenName string `json:"screen_name"`
SecUserID string `json:"sec_user_id"`
SessionKey string `json:"session_key"`
UseHmRegion bool `json:"use_hm_region"`
UserCreateTime int `json:"user_create_time"`
UserID int64 `json:"user_id"`
UserIDStr string `json:"user_id_str"`
UserVerified bool `json:"user_verified"`
VerifiedContent string `json:"verified_content"`
}
// UploadToken holds the upload token configuration
type UploadToken struct {
Alice map[string]UploadAuthToken
Samantha MediaUploadAuthToken
}
// UploadAuthToken holds the per-type upload configuration (image/file)
type UploadAuthToken struct {
ServiceID string `json:"service_id"`
UploadPathPrefix string `json:"upload_path_prefix"`
Auth struct {
AccessKeyID string `json:"access_key_id"`
SecretAccessKey string `json:"secret_access_key"`
SessionToken string `json:"session_token"`
ExpiredTime time.Time `json:"expired_time"`
CurrentTime time.Time `json:"current_time"`
} `json:"auth"`
UploadHost string `json:"upload_host"`
}
// MediaUploadAuthToken holds the media upload configuration
type MediaUploadAuthToken struct {
StsToken struct {
AccessKeyID string `json:"access_key_id"`
SecretAccessKey string `json:"secret_access_key"`
SessionToken string `json:"session_token"`
ExpiredTime time.Time `json:"expired_time"`
CurrentTime time.Time `json:"current_time"`
} `json:"sts_token"`
UploadInfo struct {
VideoHost string `json:"video_host"`
SpaceName string `json:"space_name"`
} `json:"upload_info"`
}
type UploadAuthTokenResp struct {
BaseResp
Data UploadAuthToken `json:"data"`
}
type MediaUploadAuthTokenResp struct {
BaseResp
Data MediaUploadAuthToken `json:"data"`
}
type ResponseMetadata struct {
RequestID string `json:"RequestId"`
Action string `json:"Action"`
Version string `json:"Version"`
Service string `json:"Service"`
Region string `json:"Region"`
Error struct {
CodeN int `json:"CodeN,omitempty"`
Code string `json:"Code,omitempty"`
Message string `json:"Message,omitempty"`
} `json:"Error,omitempty"`
}
type UploadConfig struct {
UploadAddress UploadAddress `json:"UploadAddress"`
FallbackUploadAddress FallbackUploadAddress `json:"FallbackUploadAddress"`
InnerUploadAddress InnerUploadAddress `json:"InnerUploadAddress"`
RequestID string `json:"RequestId"`
SDKParam interface{} `json:"SDKParam"`
}
type UploadConfigResp struct {
ResponseMetadata `json:"ResponseMetadata"`
Result UploadConfig `json:"Result"`
}
// StoreInfo describes storage information
type StoreInfo struct {
StoreURI string `json:"StoreUri"`
Auth string `json:"Auth"`
UploadID string `json:"UploadID"`
UploadHeader map[string]interface{} `json:"UploadHeader,omitempty"`
StorageHeader map[string]interface{} `json:"StorageHeader,omitempty"`
}
// UploadAddress describes the upload address
type UploadAddress struct {
StoreInfos []StoreInfo `json:"StoreInfos"`
UploadHosts []string `json:"UploadHosts"`
UploadHeader map[string]interface{} `json:"UploadHeader"`
SessionKey string `json:"SessionKey"`
Cloud string `json:"Cloud"`
}
// FallbackUploadAddress describes the fallback upload address
type FallbackUploadAddress struct {
StoreInfos []StoreInfo `json:"StoreInfos"`
UploadHosts []string `json:"UploadHosts"`
UploadHeader map[string]interface{} `json:"UploadHeader"`
SessionKey string `json:"SessionKey"`
Cloud string `json:"Cloud"`
}
// UploadNode describes an upload node
type UploadNode struct {
Vid string `json:"Vid"`
Vids []string `json:"Vids"`
StoreInfos []StoreInfo `json:"StoreInfos"`
UploadHost string `json:"UploadHost"`
UploadHeader map[string]interface{} `json:"UploadHeader"`
Type string `json:"Type"`
Protocol string `json:"Protocol"`
SessionKey string `json:"SessionKey"`
NodeConfig struct {
UploadMode string `json:"UploadMode"`
} `json:"NodeConfig"`
Cluster string `json:"Cluster"`
}
// AdvanceOption holds advanced upload options
type AdvanceOption struct {
Parallel int `json:"Parallel"`
Stream int `json:"Stream"`
SliceSize int `json:"SliceSize"`
EncryptionKey string `json:"EncryptionKey"`
}
// InnerUploadAddress describes the internal upload address
type InnerUploadAddress struct {
UploadNodes []UploadNode `json:"UploadNodes"`
AdvanceOption AdvanceOption `json:"AdvanceOption"`
}
// UploadPart describes one uploaded part
type UploadPart struct {
UploadId string `json:"uploadid,omitempty"`
PartNumber string `json:"part_number,omitempty"`
Crc32 string `json:"crc32,omitempty"`
Etag string `json:"etag,omitempty"`
Mode string `json:"mode,omitempty"`
}
// UploadResp is the upload response body
type UploadResp struct {
Code int `json:"code"`
ApiVersion string `json:"apiversion"`
Message string `json:"message"`
Data UploadPart `json:"data"`
}
type VideoCommitUpload struct {
Vid string `json:"Vid"`
VideoMeta struct {
URI string `json:"Uri"`
Height int `json:"Height"`
Width int `json:"Width"`
OriginHeight int `json:"OriginHeight"`
OriginWidth int `json:"OriginWidth"`
Duration float64 `json:"Duration"`
Bitrate int `json:"Bitrate"`
Md5 string `json:"Md5"`
Format string `json:"Format"`
Size int `json:"Size"`
FileType string `json:"FileType"`
Codec string `json:"Codec"`
} `json:"VideoMeta"`
WorkflowInput struct {
TemplateID string `json:"TemplateId"`
} `json:"WorkflowInput"`
GetPosterMode string `json:"GetPosterMode"`
}
type VideoCommitUploadResp struct {
ResponseMetadata ResponseMetadata `json:"ResponseMetadata"`
Result struct {
RequestID string `json:"RequestId"`
Results []VideoCommitUpload `json:"Results"`
} `json:"Result"`
}
type CommonResp struct {
Code int `json:"code"`
Msg string `json:"msg,omitempty"`
Message string `json:"message,omitempty"` // message returned on errors
Data json.RawMessage `json:"data,omitempty"` // raw payload, parsed later
Error *struct {
Code int `json:"code"`
Message string `json:"message"`
Locale string `json:"locale"`
} `json:"error,omitempty"`
}
// IsSuccess reports whether the response succeeded
func (r *CommonResp) IsSuccess() bool {
return r.Code == 0
}
// GetError returns the error carried by the response
func (r *CommonResp) GetError() error {
if r.IsSuccess() {
return nil
}
// prefer the message field
errMsg := r.Message
if errMsg == "" {
errMsg = r.Msg
}
// if the error object exists and carries a detailed message, use that instead
if r.Error != nil && r.Error.Message != "" {
errMsg = r.Error.Message
}
return fmt.Errorf("[doubao] API error (code: %d): %s", r.Code, errMsg)
}
// UnmarshalData parses the data field into the given type
func (r *CommonResp) UnmarshalData(v interface{}) error {
if !r.IsSuccess() {
return r.GetError()
}
if len(r.Data) == 0 {
return nil
}
return json.Unmarshal(r.Data, v)
}
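
The request helper in util.go below decodes every response body twice: first into this CommonResp envelope, and only when code is 0 does it re-unmarshal the raw data for the caller. A trimmed, self-contained sketch of that two-phase decode (the sample JSON and the reduced struct are illustrative, not the full driver types):

package main

import (
	"encoding/json"
	"fmt"
)

// CommonResp is a trimmed copy of the envelope above, enough to
// demonstrate the IsSuccess / UnmarshalData flow.
type CommonResp struct {
	Code    int             `json:"code"`
	Msg     string          `json:"msg,omitempty"`
	Message string          `json:"message,omitempty"`
	Data    json.RawMessage `json:"data,omitempty"`
}

func (r *CommonResp) IsSuccess() bool { return r.Code == 0 }

func (r *CommonResp) UnmarshalData(v interface{}) error {
	if !r.IsSuccess() {
		return fmt.Errorf("[doubao] API error (code: %d): %s", r.Code, r.Message)
	}
	if len(r.Data) == 0 {
		return nil
	}
	return json.Unmarshal(r.Data, v)
}

func main() {
	body := []byte(`{"code":0,"data":{"node_info":{"id":"n1","name":"docs"}}}`)
	var r CommonResp
	if err := json.Unmarshal(body, &r); err != nil { // phase 1: envelope
		panic(err)
	}
	var data struct {
		NodeInfo struct {
			ID   string `json:"id"`
			Name string `json:"name"`
		} `json:"node_info"`
	}
	if err := r.UnmarshalData(&data); err != nil { // phase 2: payload
		panic(err)
	}
	fmt.Println(data.NodeInfo.ID, data.NodeInfo.Name) // n1 docs
}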

drivers/doubao/util.go Normal file (970 lines added)

@@ -0,0 +1,970 @@
package doubao
import (
"context"
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/errgroup"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
log "github.com/sirupsen/logrus"
"hash/crc32"
"io"
"math"
"math/rand"
"net/http"
"net/url"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"time"
)
const (
DirectoryType = 1
FileType = 2
LinkType = 3
ImageType = 4
PagesType = 5
VideoType = 6
AudioType = 7
MeetingMinutesType = 8
)
var FileNodeType = map[int]string{
1: "directory",
2: "file",
3: "link",
4: "image",
5: "pages",
6: "video",
7: "audio",
8: "meeting_minutes",
}
const (
BaseURL = "https://www.doubao.com"
FileDataType = "file"
ImgDataType = "image"
VideoDataType = "video"
DefaultChunkSize = int64(5 * 1024 * 1024) // 5MB
MaxRetryAttempts = 3 // maximum retry attempts
UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
Region = "cn-north-1"
UploadTimeout = 3 * time.Minute
)
// helpers not defined in the Driver interface
func (d *Doubao) request(path string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
reqUrl := BaseURL + path
req := base.RestyClient.R()
req.SetHeader("Cookie", d.Cookie)
if callback != nil {
callback(req)
}
var commonResp CommonResp
res, err := req.Execute(method, reqUrl)
log.Debugln(res.String())
if err != nil {
return nil, err
}
body := res.Body()
// parse the common response envelope first
if err = json.Unmarshal(body, &commonResp); err != nil {
return nil, err
}
// check whether the response succeeded
if !commonResp.IsSuccess() {
return body, commonResp.GetError()
}
if resp != nil {
if err = json.Unmarshal(body, resp); err != nil {
return body, err
}
}
return body, nil
}
func (d *Doubao) getFiles(dirId, cursor string) (resp []File, err error) {
var r NodeInfoResp
var body = base.Json{
"node_id": dirId,
}
// if a cursor is given, set the cursor and page size
if cursor != "" {
body["cursor"] = cursor
body["size"] = 50
} else {
body["need_full_path"] = false
}
_, err = d.request("/samantha/aispace/node_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(body)
}, &r)
if err != nil {
return nil, err
}
if r.Data.Children != nil {
resp = r.Data.Children
}
if r.Data.NextCursor != "-1" {
// recursively fetch the next page
nextFiles, err := d.getFiles(dirId, r.Data.NextCursor)
if err != nil {
return nil, err
}
resp = append(resp, nextFiles...)
}
return resp, err
}
func (d *Doubao) getUserInfo() (UserInfo, error) {
var r UserInfoResp
_, err := d.request("/passport/account/info/v2/", http.MethodGet, nil, &r)
if err != nil {
return UserInfo{}, err
}
return r.Data, err
}
// signRequest signs a request with AWS SigV4
func (d *Doubao) signRequest(req *resty.Request, method, tokenType, uploadUrl string) error {
parsedUrl, err := url.Parse(uploadUrl)
if err != nil {
return fmt.Errorf("invalid URL format: %w", err)
}
var accessKeyId, secretAccessKey, sessionToken string
var serviceName string
if tokenType == VideoDataType {
accessKeyId = d.UploadToken.Samantha.StsToken.AccessKeyID
secretAccessKey = d.UploadToken.Samantha.StsToken.SecretAccessKey
sessionToken = d.UploadToken.Samantha.StsToken.SessionToken
serviceName = "vod"
} else {
accessKeyId = d.UploadToken.Alice[tokenType].Auth.AccessKeyID
secretAccessKey = d.UploadToken.Alice[tokenType].Auth.SecretAccessKey
sessionToken = d.UploadToken.Alice[tokenType].Auth.SessionToken
serviceName = "imagex"
}
// current time in ISO8601 format
now := time.Now().UTC()
amzDate := now.Format("20060102T150405Z")
dateStamp := now.Format("20060102")
req.SetHeader("X-Amz-Date", amzDate)
if sessionToken != "" {
req.SetHeader("X-Amz-Security-Token", sessionToken)
}
// compute the SHA256 hash of the request body
var bodyHash string
if req.Body != nil {
bodyBytes, ok := req.Body.([]byte)
if !ok {
return fmt.Errorf("request body must be []byte")
}
bodyHash = hashSHA256(string(bodyBytes))
req.SetHeader("X-Amz-Content-Sha256", bodyHash)
} else {
bodyHash = hashSHA256("")
}
// build the canonical request
canonicalURI := parsedUrl.Path
if canonicalURI == "" {
canonicalURI = "/"
}
// query parameters must be sorted alphabetically
canonicalQueryString := getCanonicalQueryString(req.QueryParam)
// canonical headers
canonicalHeaders, signedHeaders := getCanonicalHeadersFromMap(req.Header)
canonicalRequest := method + "\n" +
canonicalURI + "\n" +
canonicalQueryString + "\n" +
canonicalHeaders + "\n" +
signedHeaders + "\n" +
bodyHash
algorithm := "AWS4-HMAC-SHA256"
credentialScope := fmt.Sprintf("%s/%s/%s/aws4_request", dateStamp, Region, serviceName)
stringToSign := algorithm + "\n" +
amzDate + "\n" +
credentialScope + "\n" +
hashSHA256(canonicalRequest)
// derive the signing key
signingKey := getSigningKey(secretAccessKey, dateStamp, Region, serviceName)
// compute the signature
signature := hmacSHA256Hex(signingKey, stringToSign)
// build the Authorization header
authorizationHeader := fmt.Sprintf(
"%s Credential=%s/%s, SignedHeaders=%s, Signature=%s",
algorithm,
accessKeyId,
credentialScope,
signedHeaders,
signature,
)
req.SetHeader("Authorization", authorizationHeader)
return nil
}
func (d *Doubao) requestApi(url, method, tokenType string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"user-agent": UserAgent,
})
if method == http.MethodPost {
req.SetHeader("Content-Type", "text/plain;charset=UTF-8")
}
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
// sign with the custom AWS SigV4 implementation
err := d.signRequest(req, method, tokenType, url)
if err != nil {
return nil, err
}
res, err := req.Execute(method, url)
if err != nil {
return nil, err
}
return res.Body(), nil
}
func (d *Doubao) initUploadToken() (*UploadToken, error) {
uploadToken := &UploadToken{
Alice: make(map[string]UploadAuthToken),
Samantha: MediaUploadAuthToken{},
}
fileAuthToken, err := d.getUploadAuthToken(FileDataType)
if err != nil {
return nil, err
}
imgAuthToken, err := d.getUploadAuthToken(ImgDataType)
if err != nil {
return nil, err
}
mediaAuthToken, err := d.getSamantaUploadAuthToken()
if err != nil {
return nil, err
}
uploadToken.Alice[FileDataType] = fileAuthToken
uploadToken.Alice[ImgDataType] = imgAuthToken
uploadToken.Samantha = mediaAuthToken
return uploadToken, nil
}
func (d *Doubao) getUploadAuthToken(dataType string) (ut UploadAuthToken, err error) {
var r UploadAuthTokenResp
_, err = d.request("/alice/upload/auth_token", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"scene": "bot_chat",
"data_type": dataType,
})
}, &r)
return r.Data, err
}
func (d *Doubao) getSamantaUploadAuthToken() (mt MediaUploadAuthToken, err error) {
var r MediaUploadAuthTokenResp
_, err = d.request("/samantha/media/get_upload_token", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{})
}, &r)
return r.Data, err
}
// getUploadConfig fetches the upload configuration
func (d *Doubao) getUploadConfig(upConfig *UploadConfig, dataType string, file model.FileStreamer) error {
tokenType := dataType
// builds the upload URL and parameters for the current token state
configureParams := func() (string, map[string]string) {
var uploadUrl string
var params map[string]string
// set the upload parameters according to the data type
switch dataType {
case VideoDataType:
// audio/video - use the uploadToken.Samantha configuration
uploadUrl = d.UploadToken.Samantha.UploadInfo.VideoHost
params = map[string]string{
"Action": "ApplyUploadInner",
"Version": "2020-11-19",
"SpaceName": d.UploadToken.Samantha.UploadInfo.SpaceName,
"FileType": "video",
"IsInner": "1",
"NeedFallback": "true",
"FileSize": strconv.FormatInt(file.GetSize(), 10),
"s": randomString(),
}
case ImgDataType, FileDataType:
// image or other file types - use the matching uploadToken.Alice configuration
uploadUrl = "https://" + d.UploadToken.Alice[dataType].UploadHost
params = map[string]string{
"Action": "ApplyImageUpload",
"Version": "2018-08-01",
"ServiceId": d.UploadToken.Alice[dataType].ServiceID,
"NeedFallback": "true",
"FileSize": strconv.FormatInt(file.GetSize(), 10),
"FileExtension": filepath.Ext(file.GetName()),
"s": randomString(),
}
}
return uploadUrl, params
}
// build the initial parameters
uploadUrl, params := configureParams()
tokenRefreshed := false
var configResp UploadConfigResp
err := d._retryOperation("get upload_config", func() error {
configResp = UploadConfigResp{}
_, err := d.requestApi(uploadUrl, http.MethodGet, tokenType, func(req *resty.Request) {
req.SetQueryParams(params)
}, &configResp)
if err != nil {
return err
}
if configResp.ResponseMetadata.Error.Code == "" {
*upConfig = configResp.Result
return nil
}
// 100028: the credentials have expired
if configResp.ResponseMetadata.Error.CodeN == 100028 && !tokenRefreshed {
log.Debugln("[doubao] Upload token expired, re-fetching...")
newToken, err := d.initUploadToken()
if err != nil {
return fmt.Errorf("failed to refresh token: %w", err)
}
d.UploadToken = newToken
tokenRefreshed = true
uploadUrl, params = configureParams()
return retry.Error{errors.New("token refreshed, retry needed")}
}
return fmt.Errorf("get upload_config failed: %s", configResp.ResponseMetadata.Error.Message)
})
return err
}
// uploadNode registers the uploaded file's node information
func (d *Doubao) uploadNode(uploadConfig *UploadConfig, dir model.Obj, file model.FileStreamer, dataType string) (UploadNodeResp, error) {
reqUuid := uuid.New().String()
var key string
var nodeType int
mimetype := file.GetMimetype()
switch dataType {
case VideoDataType:
key = uploadConfig.InnerUploadAddress.UploadNodes[0].Vid
if strings.HasPrefix(mimetype, "audio/") {
nodeType = AudioType // audio
} else {
nodeType = VideoType // video
}
case ImgDataType:
key = uploadConfig.InnerUploadAddress.UploadNodes[0].StoreInfos[0].StoreURI
nodeType = ImageType // image
default: // FileDataType
key = uploadConfig.InnerUploadAddress.UploadNodes[0].StoreInfos[0].StoreURI
nodeType = FileType // regular file
}
var r UploadNodeResp
_, err := d.request("/samantha/aispace/upload_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{
"local_id": reqUuid,
"parent_id": dir.GetID(),
"name": file.GetName(),
"key": key,
"node_content": base.Json{},
"node_type": nodeType,
"size": file.GetSize(),
},
},
"request_id": reqUuid,
})
}, &r)
return r, err
}
// Upload implements the simple (single-request) upload
func (d *Doubao) Upload(config *UploadConfig, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
data, err := io.ReadAll(file)
if err != nil {
return nil, err
}
// compute the CRC32
crc32Hash := crc32.NewIEEE()
crc32Hash.Write(data)
crc32Value := hex.EncodeToString(crc32Hash.Sum(nil))
// build the request path
uploadNode := config.InnerUploadAddress.UploadNodes[0]
storeInfo := uploadNode.StoreInfos[0]
uploadUrl := fmt.Sprintf("https://%s/upload/v1/%s", uploadNode.UploadHost, storeInfo.StoreURI)
uploadResp := UploadResp{}
if _, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetHeaders(map[string]string{
"Content-Type": "application/octet-stream",
"Content-Crc32": crc32Value,
"Content-Length": fmt.Sprintf("%d", len(data)),
"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
})
req.SetBody(data)
}, &uploadResp); err != nil {
return nil, err
}
if uploadResp.Code != 2000 {
return nil, fmt.Errorf("upload failed: %s", uploadResp.Message)
}
uploadNodeResp, err := d.uploadNode(config, dstDir, file, dataType)
if err != nil {
return nil, err
}
return &model.Object{
ID: uploadNodeResp.Data.NodeList[0].ID,
Name: uploadNodeResp.Data.NodeList[0].Name,
Size: file.GetSize(),
IsFolder: false,
}, nil
}
// UploadByMultipart implements multipart (chunked) upload
func (d *Doubao) UploadByMultipart(ctx context.Context, config *UploadConfig, fileSize int64, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
// build the request path
uploadNode := config.InnerUploadAddress.UploadNodes[0]
storeInfo := uploadNode.StoreInfos[0]
uploadUrl := fmt.Sprintf("https://%s/upload/v1/%s", uploadNode.UploadHost, storeInfo.StoreURI)
// initialize the multipart upload
var uploadID string
err := d._retryOperation("Initialize multipart upload", func() error {
var err error
uploadID, err = d.initMultipartUpload(config, uploadUrl, storeInfo)
return err
})
if err != nil {
return nil, fmt.Errorf("failed to initialize multipart upload: %w", err)
}
// prepare the chunking parameters
chunkSize := DefaultChunkSize
if config.InnerUploadAddress.AdvanceOption.SliceSize > 0 {
chunkSize = int64(config.InnerUploadAddress.AdvanceOption.SliceSize)
}
totalParts := (fileSize + chunkSize - 1) / chunkSize
// allocate the part records
parts := make([]UploadPart, totalParts)
// cache the file to a temporary file
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return nil, fmt.Errorf("failed to cache file: %w", err)
}
defer tempFile.Close()
up(10.0) // update progress
// set up parallel uploads
threadG, uploadCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread,
retry.Attempts(1),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
var partsMutex sync.Mutex
// upload all parts in parallel
for partIndex := int64(0); partIndex < totalParts; partIndex++ {
if utils.IsCanceled(uploadCtx) {
break
}
partIndex := partIndex
partNumber := partIndex + 1 // part numbers start at 1
threadG.Go(func(ctx context.Context) error {
// compute this part's offset and size
offset := partIndex * chunkSize
size := chunkSize
if partIndex == totalParts-1 {
size = fileSize - offset
}
limitedReader := driver.NewLimitedUploadStream(ctx, io.NewSectionReader(tempFile, offset, size))
// read the part into memory
data, err := io.ReadAll(limitedReader)
if err != nil {
return fmt.Errorf("failed to read part %d: %w", partNumber, err)
}
// compute the CRC32
crc32Value := calculateCRC32(data)
// upload the part via _retryOperation
var uploadPart UploadPart
if err = d._retryOperation(fmt.Sprintf("Upload part %d", partNumber), func() error {
var err error
uploadPart, err = d.uploadPart(config, uploadUrl, uploadID, partNumber, data, crc32Value)
return err
}); err != nil {
return fmt.Errorf("part %d upload failed: %w", partNumber, err)
}
// record the successfully uploaded part
partsMutex.Lock()
parts[partIndex] = UploadPart{
PartNumber: strconv.FormatInt(partNumber, 10),
Etag: uploadPart.Etag,
Crc32: crc32Value,
}
partsMutex.Unlock()
// update progress
progress := 10.0 + 90.0*float64(threadG.Success()+1)/float64(totalParts)
up(math.Min(progress, 95.0))
return nil
})
}
if err = threadG.Wait(); err != nil {
return nil, err
}
// finish the upload: merge the parts
if err = d._retryOperation("Complete multipart upload", func() error {
return d.completeMultipartUpload(config, uploadUrl, uploadID, parts)
}); err != nil {
return nil, fmt.Errorf("failed to complete multipart upload: %w", err)
}
// commit the upload
if err = d._retryOperation("Commit upload", func() error {
return d.commitMultipartUpload(config)
}); err != nil {
return nil, fmt.Errorf("failed to commit upload: %w", err)
}
up(98.0) // bump progress to 98%
// register the uploaded node information
var uploadNodeResp UploadNodeResp
if err = d._retryOperation("Upload node", func() error {
var err error
uploadNodeResp, err = d.uploadNode(config, dstDir, file, dataType)
return err
}); err != nil {
return nil, fmt.Errorf("failed to upload node: %w", err)
}
up(100.0) // upload complete
return &model.Object{
ID: uploadNodeResp.Data.NodeList[0].ID,
Name: uploadNodeResp.Data.NodeList[0].Name,
Size: file.GetSize(),
IsFolder: false,
}, nil
}
// uploadRequest is the shared upload request helper
func (d *Doubao) uploadRequest(uploadUrl string, method string, storeInfo StoreInfo, callback base.ReqCallback, resp interface{}) ([]byte, error) {
client := resty.New()
client.SetTransport(&http.Transport{
DisableKeepAlives: true, // disable connection reuse
ForceAttemptHTTP2: false, // force HTTP/1.1
})
client.SetTimeout(UploadTimeout)
req := client.R()
req.SetHeaders(map[string]string{
"Host": strings.Split(uploadUrl, "/")[2],
"Referer": BaseURL + "/",
"Origin": BaseURL,
"User-Agent": UserAgent,
"X-Storage-U": d.UserId,
"Authorization": storeInfo.Auth,
})
if method == http.MethodPost {
req.SetHeader("Content-Type", "text/plain;charset=UTF-8")
}
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
res, err := req.Execute(method, uploadUrl)
if err != nil && err != io.EOF {
return nil, fmt.Errorf("upload request failed: %w", err)
}
return res.Body(), nil
}
// initMultipartUpload starts a multipart upload session
func (d *Doubao) initMultipartUpload(config *UploadConfig, uploadUrl string, storeInfo StoreInfo) (uploadId string, err error) {
uploadResp := UploadResp{}
_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"uploadmode": "part",
"phase": "init",
})
}, &uploadResp)
if err != nil {
return uploadId, err
}
if uploadResp.Code != 2000 {
return uploadId, fmt.Errorf("init upload failed: %s", uploadResp.Message)
}
return uploadResp.Data.UploadId, nil
}
// uploadPart uploads a single part
func (d *Doubao) uploadPart(config *UploadConfig, uploadUrl, uploadID string, partNumber int64, data []byte, crc32Value string) (resp UploadPart, err error) {
uploadResp := UploadResp{}
storeInfo := config.InnerUploadAddress.UploadNodes[0].StoreInfos[0]
_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetHeaders(map[string]string{
"Content-Type": "application/octet-stream",
"Content-Crc32": crc32Value,
"Content-Length": fmt.Sprintf("%d", len(data)),
"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
})
req.SetQueryParams(map[string]string{
"uploadid": uploadID,
"part_number": strconv.FormatInt(partNumber, 10),
"phase": "transfer",
})
req.SetBody(data)
req.SetContentLength(true)
}, &uploadResp)
if err != nil {
return resp, err
}
if uploadResp.Code != 2000 {
return resp, fmt.Errorf("upload part failed: %s", uploadResp.Message)
} else if uploadResp.Data.Crc32 != crc32Value {
return resp, fmt.Errorf("upload part failed: crc32 mismatch, expected %s, got %s", crc32Value, uploadResp.Data.Crc32)
}
return uploadResp.Data, nil
}
// completeMultipartUpload finalizes a multipart upload
func (d *Doubao) completeMultipartUpload(config *UploadConfig, uploadUrl, uploadID string, parts []UploadPart) error {
uploadResp := UploadResp{}
storeInfo := config.InnerUploadAddress.UploadNodes[0].StoreInfos[0]
body := _convertUploadParts(parts)
err := utils.Retry(MaxRetryAttempts, time.Second, func() (err error) {
_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"uploadid": uploadID,
"phase": "finish",
"uploadmode": "part",
})
req.SetBody(body)
}, &uploadResp)
if err != nil {
return err
}
// check the response code: 2000 = success, 4024 = parts still merging
if uploadResp.Code != 2000 && uploadResp.Code != 4024 {
return fmt.Errorf("finish upload failed: %s", uploadResp.Message)
}
return err
})
if err != nil {
return fmt.Errorf("failed to complete multipart upload: %w", err)
}
return nil
}
func (d *Doubao) commitMultipartUpload(uploadConfig *UploadConfig) error {
uploadUrl := d.UploadToken.Samantha.UploadInfo.VideoHost
params := map[string]string{
"Action": "CommitUploadInner",
"Version": "2020-11-19",
"SpaceName": d.UploadToken.Samantha.UploadInfo.SpaceName,
}
tokenType := VideoDataType
videoCommitUploadResp := VideoCommitUploadResp{}
jsonBytes, err := json.Marshal(base.Json{
"SessionKey": uploadConfig.InnerUploadAddress.UploadNodes[0].SessionKey,
"Functions": []base.Json{},
})
if err != nil {
return fmt.Errorf("failed to marshal request data: %w", err)
}
_, err = d.requestApi(uploadUrl, http.MethodPost, tokenType, func(req *resty.Request) {
req.SetHeader("Content-Type", "application/json")
req.SetQueryParams(params)
req.SetBody(jsonBytes)
}, &videoCommitUploadResp)
if err != nil {
return err
}
return nil
}
// calculateCRC32 returns the CRC32 of data as a hex string
func calculateCRC32(data []byte) string {
hash := crc32.NewIEEE()
hash.Write(data)
return hex.EncodeToString(hash.Sum(nil))
}
// _retryOperation retries an operation with backoff
func (d *Doubao) _retryOperation(operation string, fn func() error) error {
return retry.Do(
fn,
retry.Attempts(MaxRetryAttempts),
retry.Delay(500*time.Millisecond),
retry.DelayType(retry.BackOffDelay),
retry.MaxJitter(200*time.Millisecond),
retry.OnRetry(func(n uint, err error) {
log.Debugf("[doubao] %s retry #%d: %v", operation, n+1, err)
}),
)
}
// _convertUploadParts serializes the part records into a string
func _convertUploadParts(parts []UploadPart) string {
if len(parts) == 0 {
return ""
}
var result strings.Builder
for i, part := range parts {
if i > 0 {
result.WriteString(",")
}
result.WriteString(fmt.Sprintf("%s:%s", part.PartNumber, part.Crc32))
}
return result.String()
}
// getCanonicalQueryString builds the canonical query string
func getCanonicalQueryString(query url.Values) string {
if len(query) == 0 {
return ""
}
keys := make([]string, 0, len(query))
for k := range query {
keys = append(keys, k)
}
sort.Strings(keys)
parts := make([]string, 0, len(keys))
for _, k := range keys {
values := query[k]
for _, v := range values {
parts = append(parts, urlEncode(k)+"="+urlEncode(v))
}
}
return strings.Join(parts, "&")
}
func urlEncode(s string) string {
s = url.QueryEscape(s)
s = strings.ReplaceAll(s, "+", "%20")
return s
}
// getCanonicalHeadersFromMap returns the canonical headers and the signed-headers list
func getCanonicalHeadersFromMap(headers map[string][]string) (string, string) {
// headers that must not be signed
unsignableHeaders := map[string]bool{
"authorization": true,
"content-type": true,
"content-length": true,
"user-agent": true,
"presigned-expires": true,
"expect": true,
"x-amzn-trace-id": true,
}
headerValues := make(map[string]string)
var signedHeadersList []string
for k, v := range headers {
if len(v) == 0 {
continue
}
lowerKey := strings.ToLower(k)
// check whether the header is signable
if strings.HasPrefix(lowerKey, "x-amz-") || !unsignableHeaders[lowerKey] {
value := strings.TrimSpace(v[0])
value = strings.Join(strings.Fields(value), " ")
headerValues[lowerKey] = value
signedHeadersList = append(signedHeadersList, lowerKey)
}
}
sort.Strings(signedHeadersList)
var canonicalHeadersStr strings.Builder
for _, key := range signedHeadersList {
canonicalHeadersStr.WriteString(key)
canonicalHeadersStr.WriteString(":")
canonicalHeadersStr.WriteString(headerValues[key])
canonicalHeadersStr.WriteString("\n")
}
signedHeaders := strings.Join(signedHeadersList, ";")
return canonicalHeadersStr.String(), signedHeaders
}
// hmacSHA256 computes an HMAC-SHA256
func hmacSHA256(key []byte, data string) []byte {
h := hmac.New(sha256.New, key)
h.Write([]byte(data))
return h.Sum(nil)
}
// hmacSHA256Hex computes an HMAC-SHA256 and returns it hex-encoded
func hmacSHA256Hex(key []byte, data string) string {
return hex.EncodeToString(hmacSHA256(key, data))
}
// hashSHA256 computes a SHA256 hash and returns it hex-encoded
func hashSHA256(data string) string {
h := sha256.New()
h.Write([]byte(data))
return hex.EncodeToString(h.Sum(nil))
}
// getSigningKey derives the AWS SigV4 signing key
func getSigningKey(secretKey, dateStamp, region, service string) []byte {
kDate := hmacSHA256([]byte("AWS4"+secretKey), dateStamp)
kRegion := hmacSHA256(kDate, region)
kService := hmacSHA256(kRegion, service)
kSigning := hmacSHA256(kService, "aws4_request")
return kSigning
}
// generateContentDisposition builds an RFC 5987-compliant Content-Disposition header
func generateContentDisposition(filename string) string {
// percent-encode the plain filename parameter
encodedName := urlEncode(filename)
// encode the filename* parameter per RFC 5987
encodedNameRFC5987 := encodeRFC5987(filename)
return fmt.Sprintf("attachment; filename=\"%s\"; filename*=utf-8''%s",
encodedName, encodedNameRFC5987)
}
// encodeRFC5987 encodes a string per RFC 5987, for non-ASCII characters in HTTP header parameters
func encodeRFC5987(s string) string {
var buf strings.Builder
for _, r := range []byte(s) {
// per RFC 5987, only letters, digits, and a few special characters may remain unencoded
if (r >= 'a' && r <= 'z') ||
(r >= 'A' && r <= 'Z') ||
(r >= '0' && r <= '9') ||
r == '-' || r == '.' || r == '_' || r == '~' {
buf.WriteByte(r)
} else {
// everything else must be percent-encoded
fmt.Fprintf(&buf, "%%%02X", r)
}
}
return buf.String()
}
func randomString() string {
const charset = "0123456789abcdefghijklmnopqrstuvwxyz"
const length = 11 // 11-character random string
var sb strings.Builder
sb.Grow(length)
for i := 0; i < length; i++ {
sb.WriteByte(charset[rand.Intn(len(charset))])
}
return sb.String()
}
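
signRequest above follows the standard AWS SigV4 recipe: canonical request, string-to-sign, then the chained HMAC key derivation in getSigningKey. A compact sketch of just the key chain and the final signature, using dummy credentials and the Region/service values from this file:

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func hmacSHA256(key []byte, data string) []byte {
	h := hmac.New(sha256.New, key)
	h.Write([]byte(data))
	return h.Sum(nil)
}

// signingKey reproduces getSigningKey's derivation chain:
// date -> region -> service -> "aws4_request".
func signingKey(secret, dateStamp, region, service string) []byte {
	kDate := hmacSHA256([]byte("AWS4"+secret), dateStamp)
	kRegion := hmacSHA256(kDate, region)
	kService := hmacSHA256(kRegion, service)
	return hmacSHA256(kService, "aws4_request")
}

func main() {
	// all inputs here are dummy values for illustration
	key := signingKey("EXAMPLESECRET", "20250101", "cn-north-1", "imagex")
	stringToSign := "AWS4-HMAC-SHA256\n20250101T000000Z\n" +
		"20250101/cn-north-1/imagex/aws4_request\n" +
		hex.EncodeToString(sha256.New().Sum(nil)) // hash of an empty canonical request
	signature := hex.EncodeToString(hmacSHA256(key, stringToSign))
	fmt.Println("signature:", signature)
}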

drivers/doubao_share/driver.go Normal file (177 lines added)

@@ -0,0 +1,177 @@
package doubao_share
import (
"context"
"errors"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/go-resty/resty/v2"
"net/http"
)
type DoubaoShare struct {
model.Storage
Addition
RootFiles []RootFileList
}
func (d *DoubaoShare) Config() driver.Config {
return config
}
func (d *DoubaoShare) GetAddition() driver.Additional {
return &d.Addition
}
func (d *DoubaoShare) Init(ctx context.Context) error {
// initialize the virtual share list
if err := d.initShareList(); err != nil {
return err
}
return nil
}
func (d *DoubaoShare) Drop(ctx context.Context) error {
return nil
}
func (d *DoubaoShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
// check whether this is the root directory
if dir.GetID() == "" && dir.GetPath() == "/" {
return d.listRootDirectory(ctx)
}
// not the root; handle the remaining cases
if fo, ok := dir.(*FileObject); ok {
if fo.ShareID == "" {
// virtual directory; list its children
return d.listVirtualDirectoryContent(dir)
} else {
// directory with a share ID; fetch the files under that share
shareId, relativePath, err := d._findShareAndPath(dir)
if err != nil {
return nil, err
}
return d.getFilesInPath(ctx, shareId, dir.GetID(), relativePath)
}
}
// fall back to the generic resolution
shareId, relativePath, err := d._findShareAndPath(dir)
if err != nil {
return nil, err
}
// fetch the files under the resolved path
return d.getFilesInPath(ctx, shareId, dir.GetID(), relativePath)
}
func (d *DoubaoShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var downloadUrl string
if u, ok := file.(*FileObject); ok {
switch u.NodeType {
case VideoType, AudioType:
var r GetVideoFileUrlResp
_, err := d.request("/samantha/media/get_play_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"key": u.Key,
"share_id": u.ShareID,
"node_id": file.GetID(),
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.OriginalMediaInfo.MainURL
default:
var r GetFileUrlResp
_, err := d.request("/alice/message/get_file_url", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{u.Key},
"type": FileNodeType[u.NodeType],
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.FileUrls[0].MainURL
}
// build a standard Content-Disposition header
contentDisposition := generateContentDisposition(u.Name)
return &model.Link{
URL: downloadUrl,
Header: http.Header{
"User-Agent": []string{UserAgent},
"Content-Disposition": []string{contentDisposition},
},
}, nil
}
return nil, errors.New("can't convert obj to URL")
}
func (d *DoubaoShare) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
// TODO create folder, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO move obj, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
// TODO rename obj, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO copy obj, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Remove(ctx context.Context, obj model.Obj) error {
// TODO remove obj, optional
return errs.NotImplement
}
func (d *DoubaoShare) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
// TODO upload file, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *DoubaoShare) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*DoubaoShare)(nil)
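
Both drivers attach the same download headers; generateContentDisposition (util.go above) pairs a percent-encoded filename parameter with an RFC 5987 filename* parameter. A quick standalone check of the output for a non-ASCII name, with the encoding logic copied from util.go:

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// urlEncode and encodeRFC5987 are copied from drivers/doubao/util.go.
func urlEncode(s string) string {
	return strings.ReplaceAll(url.QueryEscape(s), "+", "%20")
}

func encodeRFC5987(s string) string {
	var buf strings.Builder
	for _, r := range []byte(s) {
		if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') ||
			(r >= '0' && r <= '9') || r == '-' || r == '.' || r == '_' || r == '~' {
			buf.WriteByte(r)
		} else {
			fmt.Fprintf(&buf, "%%%02X", r) // percent-encode everything else
		}
	}
	return buf.String()
}

func main() {
	name := "报告 v1.pdf"
	fmt.Printf("attachment; filename=\"%s\"; filename*=utf-8''%s\n",
		urlEncode(name), encodeRFC5987(name))
}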

drivers/doubao_share/meta.go Normal file (32 lines added)

@@ -0,0 +1,32 @@
package doubao_share
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootPath
Cookie string `json:"cookie" type:"text"`
ShareIds string `json:"share_ids" type:"text" required:"true"`
}
var config = driver.Config{
Name: "DoubaoShare",
LocalSort: true,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: true,
NeedMs: false,
DefaultRoot: "/",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &DoubaoShare{}
})
}

Some files were not shown because too many files have changed in this diff.