Compare commits


127 Commits

Author SHA1 Message Date
千石
4b288a08ef fix: session invalid issue (#9301)
* feat(auth): Enhanced device login session management

- Upon login, obtain and verify `Client-Id` to ensure unique device sessions.
- If there are too many device sessions, clean up old ones according to the configured policy or return an error.
- If a device session is invalid, deregister the old token and return a 401 error.
- Added `EnsureActiveOnLogin` function to handle the creation and refresh of device sessions during login.

* feat(session): Modified session deletion logic to mark sessions as inactive.

- Changed session deletion logic to mark sessions as inactive using the `MarkInactive` method.
- Adjusted error handling to ensure an error is returned if marking fails.

* feat(session): Added device limits and eviction policies

- Added a device limit, controlling the maximum number of devices using the `MaxDevices` configuration option.
- If the number of devices exceeds the limit, the configured eviction policy is used.
- If the policy is `evict_oldest`, the oldest device is evicted.
- Otherwise, an error message indicating too many devices is returned.

* refactor(session): Filter for the user's oldest active session

- Renamed `GetOldestSession` to `GetOldestActiveSession` to more accurately reflect its functionality
- Updated the SQL query to add the `status = SessionActive` condition to retrieve only active sessions
- Updated all call sites to use the new function name, ensuring logical consistency
2025-08-29 21:20:29 +08:00
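The login-time device-limit flow this commit describes can be pictured with a short Go sketch. `MaxDevices`, the `evict_oldest` policy, `EnsureActiveOnLogin`, and the mark-inactive behavior come from the commit text; the types, storage, and signatures are assumptions made for illustration.

```go
package session

import "errors"

// Assumed configuration; the real values come from the settings the
// commit describes (MaxDevices, device eviction policy).
var (
	MaxDevices     = 5
	EvictionPolicy = "evict_oldest"
)

// Session is a hypothetical stand-in for the project's session model.
type Session struct {
	UserID   uint
	ClientID string
	Active   bool
	LastSeen int64
}

var sessions []*Session // in-memory stand-in for the sessions table

// EnsureActiveOnLogin mirrors the flow in the commit message: refresh an
// existing device session, enforce the device limit, evict the oldest
// active session under "evict_oldest", otherwise reject the login.
func EnsureActiveOnLogin(userID uint, clientID string, now int64) error {
	var oldest *Session
	active := 0
	for _, s := range sessions {
		if s.UserID != userID || !s.Active {
			continue
		}
		if s.ClientID == clientID {
			s.LastSeen = now // existing device: just refresh
			return nil
		}
		active++
		if oldest == nil || s.LastSeen < oldest.LastSeen {
			oldest = s
		}
	}
	if active >= MaxDevices {
		if EvictionPolicy != "evict_oldest" || oldest == nil {
			return errors.New("too many devices")
		}
		oldest.Active = false // MarkInactive in the real code
	}
	sessions = append(sessions, &Session{UserID: userID, ClientID: clientID, Active: true, LastSeen: now})
	return nil
}
```

The real implementation persists sessions in the database and keys them by the verified `Client-Id`; the in-memory slice here only stands in for that table.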
Sky_slience
63391a2091 fix(readme): remove outdated sponsor links from README files (#9300)
Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-08-29 14:56:54 +08:00
JoaHuang
a11e4cfb31 Merge pull request #9299 from okatu-loli/session-manage-2
fix: session login error
2025-08-29 13:45:10 +08:00
okatu-loli
9a7c82a71e feat(auth): Optimized device session handling logic
- Introduced middleware to handle device sessions
- Changed `handleSession` to `HandleSession` in multiple places in `auth.go` to maintain consistent naming
- Updated response structure to return `device_key` and `token`
2025-08-29 13:31:44 +08:00
okatu-loli
8623da5361 feat(session): Added user session limit and device eviction logic
- Renamed `CountSessionsByUser` to `CountActiveSessionsByUser` and added session status filtering
- Added user and device session limits, with policy handling when the limit is exceeded
- Introduced device eviction policy: If the maximum number of devices is exceeded, the oldest session will be evicted using the "evict_oldest" policy
- Modified `LastActive` update logic to ensure accurate session activity time
2025-08-29 11:53:55 +08:00
千石
84adba3acc feat(user): Enhanced role assignment logic (#9297)
- Imported the `utils` package
- Modified the role assignment logic to prevent assigning administrator or guest roles to users
2025-08-28 09:57:34 +08:00
千石
3bf0af1e68 fix(session): Fixed the session status update logic. (#9296)
- Removed the error returned when the session status is `SessionInactive`.
- Updated the `LastActive` field of the session to always record the current time.
2025-08-28 09:57:13 +08:00
千石
de09ba08b6 chore(deps): Update 115driver dependency to v1.1.2 (#9294)
- Upgrade `github.com/SheltonZhu/115driver` to v1.1.2 in `go.mod`
- Modify `replace` to point to `github.com/okatu-loli/115driver v1.1.2`
- Remove old version checksum from `go.sum` and add new version checksum
2025-08-27 17:46:34 +08:00
千石
c64f899a63 feat: implement session management (#9286)
* feat(auth): Added device session management

- Added the `handleSession` function to manage user device sessions and verify client identity
- Updated `auth.go` to call `handleSession` for device handling when a user logs in
- Added the `Session` model to database migrations
- Added `device.go` and `session.go` files to handle device session logic
- Updated `settings.go` to add device-related configuration items, such as the maximum number of devices, device eviction policy, and session TTL

* feat(session): Adds session management features

- Added `SessionInactive` error type in `device.go`
- Added session-related APIs in `router.go` to support listing and evicting sessions
- Added `ListSessionsByUser`, `ListSessions`, and `MarkInactive` methods in `session.go`
- Returns an appropriate error when the session state is `SessionInactive`

* feat(auth): Marks the device session as invalid.

- Import the `session` package into the `auth` module to handle device session status.
- Add a check in the login logic. If `device_key` is obtained, call `session.MarkInactive` to mark the device session as invalid.
- Store the invalid status in the context variable `session_inactive` for subsequent middleware checks.
- Add a check in the session refresh logic to abort the process if the current session has been marked invalid.

* feat(auth, session): Added device information processing and session management changes

- Updated device handling logic in `auth.go` to pass user agent and IP information
- Adjusted database queries in `session.go` to optimize session query fields and add `user_agent` and `ip` fields
- Modified the `Handle` method to add `ua` and `ip` parameters to store the user agent and IP address
- Added the `SessionResp` structure to return a session response containing `user_agent` and `ip`
- Updated the `/admin/user/create` and `/webdav` endpoints to pass the user agent and IP address to the device handler
2025-08-25 19:46:38 +08:00
千石
3319f6ea6a feat(search): Optimized search result filtering and paging logic (#9287)
- Introduced the `filteredNodes` list to optimize the node filtering process
- Filtered results based on the page limit during paging
- Modified search logic to ensure nodes are within the user's base path
- Added access permission checks for node metadata
- Adjusted paging logic to avoid redundant node retrieval
2025-08-25 19:46:24 +08:00
千石
d7723c378f chore(deps): Upgrade 115driver to v1.1.1 (#9283)
- Upgraded `github.com/SheltonZhu/115driver` from v1.0.34 to v1.1.1
- Updated the corresponding version verification information in `go.sum`
2025-08-25 19:46:10 +08:00
千石
a9fcd51bc4 fix: ensure DefaultRole stores role ID while exposing role name in APIs (#9279)
* fix(setting): ensure DefaultRole stores role ID while exposing role name in APIs

- Simplified initial settings to use `model.GUEST` as the default role ID instead of querying roles at startup.
- Updated `GetSetting`, `ListSettings` handlers to:
  - Convert stored role ID into the corresponding role name when returning data.
  - Preserve dynamic role options for selection.
- Removed unused `strings` import and role preloading logic from `InitialSettings`.
- This change avoids DB dependency during initialization while keeping consistent role display for frontend clients.

* fix(setting): ensure DefaultRole stores role ID while exposing role
name in APIs (fix/settings-get-role)

- Simplify initial settings to use `model.GUEST` as the default role ID
  instead of querying roles at startup.
- Update `GetSetting`, `ListSettings` handlers to:
  - Convert stored role ID into the corresponding role name when
    returning data.
  - Preserve dynamic role options for selection.
- Remove unused `strings` import and role preloading logic from
  `InitialSettings`.
- Avoid DB dependency during initialization while keeping consistent
  role display for frontend clients.
2025-08-19 15:01:32 +08:00
千石
74e384175b fix(lanzou): correct comment parsing logic in lanzou driver (#9278)
- Adjusted logic to skip incrementing index when exiting comments.
- Added checks to continue loop if inside a single-line or block comment.
- Prevents erroneous parsing and retains intended comment exclusion.
2025-08-19 00:53:52 +08:00
千石
eca500861a feat: add user registration endpoint and role-based default settings (#9277)
* feat(setting): add role-based default and registration settings (closed #feat/register-and-statistics)

- Added `AllowRegister` and `DefaultRole` settings to site configuration.
- Integrated dynamic role options for `DefaultRole` using `op.GetRoles`.
- Updated `setting.go` handlers to manage `DefaultRole` options dynamically.
- Modified `const.go` to include new site settings constants.
- Updated dependencies in `go.mod` and `go.sum` to support new functionality.

* feat(register-and-statistics): add user registration endpoint

- Added `POST /auth/register` endpoint to support user registration.
- Implemented registration logic in `auth.go` with dynamic role assignment.
- Integrated settings `AllowRegister` and `DefaultRole` for registration flow.
- Updated imports to include new modules: `conf`, `setting`.
- Adjusted user creation logic to use `DefaultRole` setting dynamically.

* feat(register-and-statistics): add user registration endpoint (#register-and-statistics)

- Added `POST /auth/register` endpoint to support user registration.
- Implemented registration logic in `auth.go` with dynamic role assignment.
- Integrated `AllowRegister` and `DefaultRole` settings for registration flow.
- Updated imports to include new modules: `conf`, `setting`.
- Adjusted user creation logic to use `DefaultRole` dynamically.

* feat(register-and-statistics): enhance role management logic (#register-and-statistics)

- Refactored CreateRole and UpdateRole functions to handle default role.
- Added dynamic role assignment logic in 'role.go' using conf settings.
- Improved request handling in 'handles/role.go' with structured data.
- Implemented default role logic in 'db/role.go' to update non-default roles.
- Modified 'model/role.go' to include a 'Default' field for role management.

* feat(register-and-statistics): enhance role management logic

- Refactor CreateRole and UpdateRole to handle default roles.
- Add dynamic role assignment using conf settings in 'role.go'.
- Improve request handling with structured data in 'handles/role.go'.
- Implement default role logic in 'db/role.go' for non-default roles.
- Modify 'model/role.go' to include 'Default' field for role management.

* feat(register-and-statistics): improve role handling logic

- Switch from role names to role IDs for better consistency.
- Update logic to prioritize "guest" for default role ID.
- Adjust `DefaultRole` setting to use role IDs.
- Refactor `getRoleOptions` to return role IDs as a comma-separated string.

* feat(register-and-statistics): improve role handling logic
2025-08-18 16:38:21 +08:00
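Since the commits above only name the `POST /auth/register` route and the `AllowRegister` / `DefaultRole` settings, here is a hedged Gin sketch of what such a handler could look like; `RegisterReq`, the stub setting accessors, and `createUser` are hypothetical stand-ins, not the project's code.

```go
package handles

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// RegisterReq is a hypothetical request shape.
type RegisterReq struct {
	Username string `json:"username" binding:"required"`
	Password string `json:"password" binding:"required"`
}

// Stand-ins for the real settings and user layers.
func allowRegister() bool                          { return true }
func defaultRoleID() int                           { return 2 }
func createUser(name, pass string, role int) error { return nil }

func Register(c *gin.Context) {
	if !allowRegister() {
		c.JSON(http.StatusForbidden, gin.H{"message": "registration is disabled"})
		return
	}
	var req RegisterReq
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"message": err.Error()})
		return
	}
	// Assign the configured default role dynamically, as the commits describe.
	if err := createUser(req.Username, req.Password, defaultRoleID()); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"message": err.Error()})
		return
	}
	c.JSON(http.StatusOK, gin.H{"message": "success"})
}
```

Wired up with `r.POST("/auth/register", Register)` on the Gin router.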
千石
97d4f79b96 fix: resolve webdav decode issue (#9268)
* fix: resolve webdav handshake error in permission checks

- Updated role permission logic to handle bidirectional subpaths,
  fixing handshake termination by remote host due to path mismatch.
- Refactored function naming for consistency and clarity.
- Enhanced filtering of objects based on user permissions.
- Modified `makePropstatResponse` to preserve encoded href paths.
- Added test for `makePropstatResponse` to ensure href encoding.

* Delete server/webdav/makepropstatresponse_test.go

* ci(workflow): set GOPROXY for Go builds on GitHub Actions

- Use `GOPROXY=https://proxy.golang.org,direct` to speed up module downloads
- Mitigates network flakiness (e.g., checksum DB timeouts/rate limits)
- `,direct` provides fallback for private/unproxyable modules
- No build logic changes; only affects dependency resolution across all matrix targets

---------

Co-authored-by: AlistGo <opsgit88@gmail.com>
2025-08-16 20:55:17 +08:00
千石
fcfb3369d1 fix: webdav error location (#9266)
* feat: improve WebDAV permission handling and user role fetching

- Added logic to handle root permissions in WebDAV requests.
- Improved the user role fetching mechanism.
- Enhanced path checks and permission scopes in role_perm.go.
- Set FetchRole function to avoid import cycles between modules.

* fix(webdav): resolve connection reset issue by encoding paths

- Adjust path encoding in webdav.go to prevent connection reset.
- Utilize utils.EncodePath for correct path formatting.
- Ensure proper handling of directory paths with trailing slash.

* fix(webdav): resolve connection reset issue by encoding paths

- Adjust path encoding in webdav.go to prevent connection reset.
- Utilize utils.FixAndCleanPath for correct path formatting.
- Ensure proper handling of directory paths with trailing slash.

* fix: resolve webdav handshake error in permission checks

- Updated role permission logic to handle bidirectional subpaths.
- This adjustment fixes the issue where remote host terminates the
  handshake due to improper path matching.

* fix: resolve webdav handshake error in permission checks (fix/fix-webdav-error)

- Updated role permission logic to handle bidirectional subpaths,
  fixing handshake termination by remote host due to path mismatch.
- Refactored function naming for consistency and clarity.
- Enhanced filtering of objects based on user permissions.

* fix: resolve webdav handshake error in permission checks

- Updated role permission logic to handle bidirectional subpaths,
  fixing handshake termination by remote host due to path mismatch.
- Refactored function naming for consistency and clarity.
- Enhanced filtering of objects based on user permissions.
2025-08-15 23:10:55 +08:00
千石
aea3ba1499 feat: add tag backup and fix bugs (#9265)
* feat(label): enhance label file binding and router setup (feat/add-tag-backup)

- Add `GetLabelsByFileNamesPublic` to retrieve labels using file names.
- Refactor router setup for label and file binding routes.
- Improve `toObjsResp` for efficient label retrieval by file names.
- Comment out unnecessary user ID parameter in `toObjsResp`.

* feat(label): enhance label file binding and router setup

- Add `GetLabelsByFileNamesPublic` for label retrieval by file names.
- Refactor router setup for label and file binding routes.
- Improve `toObjsResp` for efficient label retrieval by file names.
- Comment out unnecessary user ID parameter in `toObjsResp`.

* refactor(db): comment out debug print in GetLabelIds (#feat/add-tag-backup)

- Comment out debug print statement in GetLabelIds to clean up logs.
- Enhance code readability by removing unnecessary debug output.

* feat(label-file-binding): add batch creation and improve label ID handling

- Introduced `CreateLabelFileBinDingBatch` API for batch label binding.
- Added `collectLabelIDs` helper function to handle label ID parsing.
- Enhanced label ID handling to support varied delimiters and input formats.
- Refactored `CreateLabelFileBinDing` logic for improved code readability.
- Updated router to include `POST /label_file_binding/create_batch`.
2025-08-15 23:09:00 +08:00
千石
6b2d81eede feat(user): enhance path management and role handling (#9249)
- Add `GetUsersByRole` function for fetching users by role.
- Introduce `GetAllBasePathsFromRoles` to aggregate paths from roles.
- Refine path handling in `pkg/utils/path.go` for normalization.
- Comment out base path prefix updates to simplify role operations.
2025-08-06 16:31:36 +08:00
千石
85fe4e5bb3 feat(alist_v3): add IntSlice type for JSON unmarshalling (#9247)
- Add `IntSlice` type to handle both single int and array in JSON.
- Modify `MeResp` struct to use `IntSlice` for `Role` field.
- Import `encoding/json` for JSON operations.
2025-08-04 12:02:45 +08:00
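The `IntSlice` trick is a standard `UnmarshalJSON` fallback; a plausible reconstruction (not the project's verbatim code) looks like this:

```go
package alist_v3

import "encoding/json"

// IntSlice decodes either a single JSON int or an array of ints.
type IntSlice []int

func (s *IntSlice) UnmarshalJSON(data []byte) error {
	// Try the scalar form first: "role": 2
	var single int
	if err := json.Unmarshal(data, &single); err == nil {
		*s = IntSlice{single}
		return nil
	}
	// Fall back to the array form: "role": [1, 2]
	var many []int
	if err := json.Unmarshal(data, &many); err != nil {
		return err
	}
	*s = IntSlice(many)
	return nil
}
```

With a field declared as `Role IntSlice` in `MeResp`, both `{"role": 2}` and `{"role": [1, 2]}` decode successfully.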
千石
52da07e8a7 feat(123_open): add new driver support for 123 Open (#9246)
- Implement new driver for 123 Open service, enabling file operations
  such as listing, uploading, moving, and removing files.
- Introduce token management for authentication and authorization.
- Add API integration for various file operations and actions.
- Include utility functions for handling API requests and responses.
- Register the new driver in the existing drivers' list.
2025-08-04 11:56:57 +08:00
Sky_slience
46de9e9ebb fix(driver): 123 download and modify request headers on the frontend (#9236)
Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-08-03 20:00:09 +08:00
千石
ae90fb579b feat(log): enhance log formatter to respect NO_COLOR env variable (#9239)
- Adjust log formatter to disable colors when NO_COLOR or ALIST_NO_COLOR
  environment variables are set.
- Reorganize formatter settings for better readability.
2025-08-03 09:26:23 +08:00
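For reference, NO_COLOR-style detection with logrus can be done roughly as follows; the `TextFormatter` fields are real logrus options, while the function name and the exact settings chosen are assumptions.

```go
package bootstrap

import (
	"os"

	"github.com/sirupsen/logrus"
)

// newLogFormatter disables colors when NO_COLOR or ALIST_NO_COLOR is set,
// matching the behavior the commit describes.
func newLogFormatter() logrus.Formatter {
	_, noColor := os.LookupEnv("NO_COLOR")
	_, alistNoColor := os.LookupEnv("ALIST_NO_COLOR")
	disable := noColor || alistNoColor
	return &logrus.TextFormatter{
		ForceColors:     !disable,
		DisableColors:   disable,
		FullTimestamp:   true,
		TimestampFormat: "2006-01-02 15:04:05",
	}
}
```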
Sky_slience
394a18cbd9 Fix 123 download (#9235)
* fix(driver): handle additional HTTP status code 210 for URL redirection

* fix(driver): 123 download url error

---------

Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-07-30 16:55:32 +08:00
千石
280960ce3e feat(user-db): enhance user management with role-based queries (allow-edit-role-guest) (#9234)
- Add `GetUsersByRole` function to fetch users based on their roles.
- Extend `UpdateUserBasePathPrefix` to accept optional user lists.
- Ensure path cleaning in `UpdateUserBasePathPrefix` for consistency.
- Integrate guest role fetching in `auth.go` middleware.
- Utilize `GetUsersByRole` in `role.go` for base path modifications.
- Remove redundant line in `role.go` role modification logic.
2025-07-30 13:15:35 +08:00
Sky_slience
74332e91fb feat(ui): add new UI configuration option to settings (#9233)
* feat(ui): add new UI configuration option to settings

* fix(ui): disable new UI feature by default

---------

Co-authored-by: Sky_slience <Skyslience@spdzy.com>
2025-07-30 12:22:02 +08:00
Sky_slience
540d6c7064 fix(meta): update OAuth token URL and improve default client credentials (#9231) 2025-07-30 10:48:33 +08:00
千石
55b2bb6b80 feat(user-management): Enhance admin management and role handling 2025-07-29 19:45:28 +08:00
qianshi
d5df6fa4cf Merge branch 'main' into feat/allow-edit-role-guest 2025-07-29 19:13:01 +08:00
千石
3353055482 Update Dockerfile.ci (#9230)
chore(docker): Update base image from alpine:edge to alpine:3.20.7 in Dockerfile.ci
2025-07-29 18:35:47 +08:00
千石
4d7c2a09ce docs(README): Add API documentation links across multiple languages (#9225)
- Add API documentation section to `README.md` with link to Apifox
- Add API documentation section to `README_ja.md` with Japanese translation and link to Apifox
- Add API documentation section to `README_cn.md` with Chinese translation and link to Apifox
2025-07-29 09:42:34 +08:00
qianshi
5b8c26510b feat(user-management): Enhance admin management and role handling
- Add `CountEnabledAdminsExcluding` function to count enabled admins excluding a specific user.
- Implement `CountUsersByRoleAndEnabledExclude` in `internal/db/user.go` to support exclusion logic.
- Refactor role handling with switch-case for better readability in `server/handles/role.go`.
- Ensure at least one enabled admin remains when disabling an admin in `server/handles/user.go`.
- Maintain guest role name consistency when updating roles in `internal/op/role.go`.
2025-07-28 23:07:07 +08:00
千石
91cc7529a0 feat(user/role/storage): enhance user and storage operations with additional validations (#9223)
- Update `CreateUser` to adjust `BasePath` based on user roles and clean paths.
- Modify `UpdateUser` to incorporate role-based path changes.
- Add validation in `CreateStorage` and `UpdateStorage` to prevent root mount path.
- Prevent changes to admin user's role and username in user handler.
- Update `UpdateRole` to modify user base paths when role paths change, and clear user cache accordingly.
- Import `errors` package to handle error messages.
2025-07-27 22:25:45 +08:00
千石
f61d13d433 refactor(convert_role): Improve role conversion logic for legacy formats (#9219)
- Add new imports: `database/sql`, `encoding/json`, and `conf` package in `convert_role.go`.
- Simplify permission entry initialization by removing redundant struct formatting.
- Update error logging messages for better clarity.
- Replace `op.GetUsers` with direct database access for fetching user roles.
- Implement role update logic using `rawDb` and handle legacy int role conversion.
- Count the number of users whose roles are updated and log completion.
- Introduce `IsLegacyRoleDetected` function to check for legacy role formats.
- Modify `cmd/common.go` to invoke role conversion if legacy format is detected.
2025-07-26 15:20:08 +08:00
千石
00120cba27 feat: enhance permission control and label management (#9215)
* Label management

* Optimize PR checks

* feat(role): Implement role management functionality

- Add role management routes in `server/router.go` for listing, getting, creating, updating, and deleting roles
- Introduce `initRoles()` in `internal/bootstrap/data/data.go` for initializing roles during bootstrap
- Create `internal/op/role.go` to handle role operations including caching and singleflight
- Implement role handler functions in `server/handles/role.go` for API responses
- Define database operations for roles in `internal/db/role.go`
- Extend `internal/db/db.go` for role model auto-migration
- Design `internal/model/role.go` to represent role structure with ID, name, description, base path, and permissions
- Initialize default roles (`admin` and `guest`) in `internal/bootstrap/data/role.go` during startup

* refactor(user roles): Support multiple roles for users

- Change the `Role` field type from `int` to `[]int` in `drivers/alist_v3/types.go` and `drivers/quqi/types.go`.
- Update the `Role` field in `internal/model/user.go` to use a new `Roles` type with JSON and database support.
- Modify `IsGuest` and `IsAdmin` methods to check for roles using `Contains` method.
- Update `GetUserByRole` method in `internal/db/user.go` to handle multiple roles.
- Add `roles.go` to define a new `Roles` type with JSON marshalling and scanning capabilities.
- Adjust code in `server/handles/user.go` to compare roles with `utils.SliceEqual`.
- Change role initialization for users in `internal/bootstrap/data/dev.go` and `internal/bootstrap/data/user.go`.
- Update `Role` handling in `server/handles/task.go`, `server/handles/ssologin.go`, and `server/handles/ldap_login.go`.

* feat(user/role): Add path limit check for user and role permissions

- Add new permission bit for checking path limits in `user.go`
- Implement `CheckPathLimit` method in `User` struct to validate path access
- Modify `JoinPath` method in `User` to enforce path limit checks
- Update `role.go` to include path limit logic in `Role` struct
- Document new permission bit in `Role` and `User` comments for clarity

* feat(permission): Add role-based permission handling

- Introduce `role_perm.go` for managing user permissions based on roles.
- Implement `HasPermission` and `MergeRolePermissions` functions.
- Update `webdav.go` to utilize role-based permissions instead of direct user checks.
- Modify `fsup.go` to integrate `CanAccessWithRoles` function.
- Refactor `fsread.go` to use `common.HasPermission` for permission validation.
- Adjust `fsmanage.go` for role-based access control checks.
- Enhance `ftp.go` and `sftp.go` to manage FTP access via roles.
- Update `fsbatch.go` to employ `MergeRolePermissions` for batch operations.
- Replace direct user permission checks with role-based permission handling across various modules.

* refactor(user): Replace integer role values with role IDs

- Change `GetAdmin()` and `GetGuest()` functions to retrieve role by name and use role ID.
- Add patch for version `v3.45.2` to convert legacy integer roles to role IDs.
- Update `dev.go` and `user.go` to use role IDs instead of integer values for roles.
- Remove redundant code in `role.go` related to guest role creation.
- Modify `ssologin.go` and `ldap_login.go` to set user roles to nil instead of using integer roles.
- Introduce `convert_roles.go` to handle conversion of legacy roles and ensure role existence in the database.

* feat(role_perm): implement support for multiple base paths for roles

- Modify role permission checks to support multiple base paths
- Update role creation and update functions to handle multiple base paths
- Add migration script to convert old base_path to base_paths
- Define new Paths type for handling multiple paths in the model
- Adjust role model to replace BasePath with BasePaths
- Update existing patches to handle roles with multiple base paths
- Update bootstrap data to reflect the new base_paths field

* feat(role): Restrict modifications to default roles (admin and guest)

- Add validation to prevent changes to "admin" and "guest" roles in `UpdateRole` and `DeleteRole` functions.
- Introduce `ErrChangeDefaultRole` error in `internal/errs/role.go` to standardize error messaging.
- Update role-related API handlers in `server/handles/role.go` to enforce the new restriction.
- Enhance comments in `internal/bootstrap/data/role.go` to clarify the significance of default roles.
- Ensure consistent error responses for unauthorized role modifications across the application.

* 🔄 **refactor(role): Enhance role permission handling**

- Replaced `BasePaths` with `PermissionPaths` in `Role` struct for better permission granularity.
- Introduced JSON serialization for `PermissionPaths` using `RawPermission` field in `Role` struct.
- Implemented `BeforeSave` and `AfterFind` GORM hooks for handling `PermissionPaths` serialization.
- Refactored permission calculation logic in `role_perm.go` to work with `PermissionPaths`.
- Updated role creation logic to initialize `PermissionPaths` for `admin` and `guest` roles.
- Removed deprecated `CheckPathLimit` method from `Role` struct.

* fix(model/user/role): update permission settings for admin and role

- Change `RawPermission` field in `role.go` to hide JSON representation
- Update `Permission` field in `user.go` to `0xFFFF` for full access
- Modify `PermissionScopes` in `role.go` to `0xFFFF` for enhanced permissions

* 🔒 feat(role-permissions): Enhance role-based access control

- Introduce `canReadPathByRole` function in `role_perm.go` to verify path access based on user roles
- Modify `CanAccessWithRoles` to include role-based path read check
- Add `RoleNames` and `Permissions` to `UserResp` struct in `auth.go` for enhanced user role and permission details
- Implement role details aggregation in `auth.go` to populate `RoleNames` and `Permissions`
- Update `User` struct in `user.go` to include `RolesDetail` for more detailed role information
- Enhance middleware in `auth.go` to load and verify detailed role information for users
- Move `guest` user initialization logic in `user.go` to improve code organization and avoid repetition

* 🔒 fix(permissions): Add permission checks for archive operations

- Add `MergeRolePermissions` and `HasPermission` checks to validate user access for reading archives
- Ensure users have `PermReadArchives` before proceeding with `GetNearestMeta` in specific archive paths
- Implement permission checks for decompress operations, requiring `PermDecompress` for source paths
- Return `PermissionDenied` errors with 403 status if user lacks necessary permissions

* 🔒 fix(server): Add permission check for offline download

- Add permission merging logic for user roles
- Check user has permission for offline download addition
- Return error response with "permission denied" if check fails

* feat(role-permission): Implement path-based role permission checks

- Add `CheckPathLimitWithRoles` function to validate access based on `PermPathLimit` permission.
- Integrate `CheckPathLimitWithRoles` in `offline_download` to enforce path-based access control.
- Apply `CheckPathLimitWithRoles` across file system management operations (e.g., creation, movement, deletion).
- Ensure `CheckPathLimitWithRoles` is invoked for batch operations and archive-related actions.
- Update error handling to return `PermissionDenied` if the path validation fails.
- Import `errs` package in `offline_download` for consistent error responses.

* feat(role-permission): Implement path-based role permission checks

- Add `CheckPathLimitWithRoles` function to validate access based on `PermPathLimit` permission.
- Integrate `CheckPathLimitWithRoles` in `offline_download` to enforce path-based access control.
- Apply `CheckPathLimitWithRoles` across file system management operations (e.g., creation, movement, deletion).
- Ensure `CheckPathLimitWithRoles` is invoked for batch operations and archive-related actions.
- Update error handling to return `PermissionDenied` if the path validation fails.
- Import `errs` package in `offline_download` for consistent error responses.

* ♻️ refactor(access-control): Update access control logic to use role-based checks

- Remove deprecated logic from `CanAccess` function in `check.go`, replacing it with `CanAccessWithRoles` for improved role-based access control.
- Modify calls in `search.go` to use `CanAccessWithRoles` for more precise handling of permissions.
- Update `fsread.go` to utilize `CanAccessWithRoles`, ensuring accurate access validation based on user roles.
- Simplify import statements in `check.go` by removing unused packages to clean up the codebase.

* feat(fs): Improve visibility logic for hidden files

- Import `server/common` package to handle permissions more robustly
- Update `whetherHide` function to use `MergeRolePermissions` for user-specific path permissions
- Replace direct user checks with `HasPermission` for `PermSeeHides`
- Enhance logic to ensure `nil` user cases are handled explicitly

* Label management

* feat(db/auth/user): Enhance role handling and clean permission paths

- Comment out role modification checks in `server/handles/user.go` to allow flexible role changes.
- Improve permission path handling in `server/handles/auth.go` by normalizing and deduplicating paths.
- Introduce `addedPaths` map in `CurrentUser` to prevent duplicate permissions.

* feat(storage/db): Implement role permissions path prefix update

- Add `UpdateRolePermissionsPathPrefix` function in `role.go` to update role permissions paths.
- Modify `storage.go` to call the new function when the mount path is renamed.
- Introduce path cleaning and prefix matching logic for accurate path updates.
- Ensure roles are updated only if their permission scopes are modified.
- Handle potential errors with informative messages during database operations.

* feat(role-migration): Implement role conversion and introduce NEWGENERAL role

- Add `NEWGENERAL` to the roles enumeration in `user.go`
- Create new file `convert_role.go` for migrating legacy roles to new model
- Implement `ConvertLegacyRoles` function to handle role conversion with permission scopes
- Add `convert_role.go` patch to `all.go` under version `v3.46.0`

* feat(role/auth): Add role retrieval by user ID and update path prefixes

- Add `GetRolesByUserID` function for efficient role retrieval by user ID
- Implement `UpdateUserBasePathPrefix` to update user base paths
- Modify `UpdateRolePermissionsPathPrefix` to return modified role IDs
- Update `auth.go` middleware to use the new role retrieval function
- Refresh role and user caches upon path prefix updates to maintain consistency

---------

Co-authored-by: Leslie-Xy <540049476@qq.com>
2025-07-26 09:51:59 +08:00
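One recurring piece in this PR is the `Roles` type with JSON marshalling and scanning capabilities. A minimal sketch of such a type, assuming the role IDs are stored as a JSON array in a single column (the PR does not spell out the storage format), might be:

```go
package model

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
)

// Roles stores a user's role IDs; Value/Scan let GORM persist the slice
// as JSON in one column. This is a plausible shape, not the verbatim code.
type Roles []int

// Contains reports whether the given role ID is assigned.
func (r Roles) Contains(id int) bool {
	for _, v := range r {
		if v == id {
			return true
		}
	}
	return false
}

// Value implements driver.Valuer for writing the JSON column.
func (r Roles) Value() (driver.Value, error) {
	return json.Marshal(r)
}

// Scan implements sql.Scanner for reading the JSON column back.
func (r *Roles) Scan(src any) error {
	switch v := src.(type) {
	case []byte:
		return json.Unmarshal(v, r)
	case string:
		return json.Unmarshal([]byte(v), r)
	default:
		return errors.New("unsupported type for Roles")
	}
}
```

Methods like `IsAdmin` then reduce to `user.Role.Contains(adminRoleID)`, which matches the `Contains` usage described above.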
Sakana
5e15a360b7 feat(github_releases): concurrently request the GitHub API (#9211) 2025-07-24 15:30:12 +08:00
alist666
2bdc5bef9e Merge pull request #9207 from AlistGo/fix-aliyundirve
fix: update DriveId assignment to use DeviceID from Addition struct
2025-07-17 13:21:32 +08:00
AlistDev
13ea1c1405 fix: restore user-agent header in HTTP requests 2025-07-16 20:39:05 +08:00
AlistDev
fd41186679 fix: update DriveId assignment to use DeviceID from Addition struct 2025-07-14 23:04:40 +08:00
alist666
9da56bab4d Merge pull request #9171 from AlistGo/fix-189pc-login
fix: update documentation links to point to the new domain and fix 189pc getToken failure
2025-06-28 00:20:50 +08:00
alistgo
51eeb22465 fix: dead link 2025-06-27 23:58:52 +08:00
Alone
b1586612ca feat: add ghcr docker image (#8524) 2025-06-27 23:39:23 +08:00
AlistDev
7aeb0ab078 fix: update documentation links to point to the new domain and fix 189pc getToken failure 2025-06-27 16:28:09 +08:00
MadDogOwner
ffa03bfda1 feat(cloudreve_v4): add Cloudreve V4 driver (#8470 closes #8328 #8467)
* feat(cloudreve_v4): add Cloudreve V4 driver implementation

* fix(cloudreve_v4): update request handling to prevent token refresh loop

* feat(onedrive): implement retry logic for upload failures

* feat(cloudreve): implement retry logic for upload failures

* feat(cloudreve_v4): support cloud sorting

* fix(cloudreve_v4): improve token handling in Init method

* feat(cloudreve_v4): support share

* feat(cloudreve): support reference

* feat(cloudreve_v4): support version upload

* fix(cloudreve_v4): add SetBody in upLocal

* fix(cloudreve_v4): update URL structure in Link and FileUrlResp
2025-05-24 13:38:43 +08:00
Andy Hsu
630cf30af5 feat(115_open): implement rate limiting for API requests 2025-05-11 13:39:32 +08:00
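Rate limiting of driver API calls is typically done with golang.org/x/time/rate; a sketch of that pattern (the limit value and the wrapper type are assumptions, not taken from the commit) is:

```go
package driver115open

import (
	"context"

	"golang.org/x/time/rate"
)

// Client wraps API calls behind a token-bucket limiter.
type Client struct {
	limiter *rate.Limiter
}

// NewClient allows roughly one request per second with a small burst;
// the actual limit used by the driver is not stated in the commit.
func NewClient() *Client {
	return &Client{limiter: rate.NewLimiter(rate.Limit(1), 2)}
}

// request blocks until a token is available (or ctx is cancelled) and
// then runs the API call.
func (c *Client) request(ctx context.Context, do func() error) error {
	if err := c.limiter.Wait(ctx); err != nil {
		return err
	}
	return do()
}
```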
Andy Hsu
bc5117fa4f fix(115_open): add delay in MakeDir function to handle rate limiting 2025-05-02 16:53:39 +08:00
yoclo
11e7284824 fix: prevent guest user from updating profile (#8447) 2025-04-29 23:14:16 +08:00
MadDogOwner
b2b91a9281 feat(doubao): add get_download_info API and download_api option (#8428) 2025-04-27 20:00:25 +08:00
MadDogOwner
f541489d7d fix(netease_music): change ListResp size fields from string to int64 (#8417) 2025-04-27 19:59:30 +08:00
bigQY
6d9c554f6f feat: add UseLargeThumbnail for 139 (#8424) 2025-04-27 19:58:45 +08:00
Mmx
e532ab31ef fix: remove auth middleware for authn login (#8407) 2025-04-27 19:58:09 +08:00
Mmx
bf0705ec17 fix: shebang of entrypoint.sh (#8408) 2025-04-27 19:56:34 +08:00
gdm257
17b42b9fa4 fix(mega): use newest file for same filename (#8422 close #8344)
Mega supports duplicate filenames, but alist does not.
In the `List()` method, the driver returns multiple files with the same name,
which makes alist use the oldest version of a file for listing/downloading.
So it is necessary to filter out older files with the same name within a folder.
After the fix, all CRUD operations work normally.

Refs #8344
2025-04-27 19:56:04 +08:00
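The newest-file filtering described above reduces to a map keyed by filename; a small illustrative sketch, with the field names invented for the example:

```go
package mega

import "time"

// File is an illustrative stand-in for the driver's file model.
type File struct {
	Name     string
	Modified time.Time
}

// keepNewest keeps only the most recently modified entry per name,
// dropping the older duplicates that Mega allows but alist cannot show.
func keepNewest(files []File) []File {
	newest := make(map[string]File, len(files))
	for _, f := range files {
		if cur, ok := newest[f.Name]; !ok || f.Modified.After(cur.Modified) {
			newest[f.Name] = f
		}
	}
	out := make([]File, 0, len(newest))
	for _, f := range newest {
		out = append(out, f)
	}
	return out
}
```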
Sam- Pan(潘绍森)
41bdab49aa fix(139): incorrect host (#8368)
* fix: correct new personal cloud path for 139Driver

* Update drivers/139/driver.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix bug

---------

Co-authored-by: panshaosen <19802021493@139.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: j2rong4cn <253551464@qq.com>
2025-04-19 14:29:12 +08:00
Lin Tianchuan
8f89c55aca perf(local): avoid duplicate parsing of VideoThumbPos (#7812)
* feat(local): support percent for video thumbnail

The percentage determines the point in the video (as a percentage of the total duration) at which the thumbnail will be generated.

* feat(local): support both time and percent for video thumbnail

* refactor(local): avoid duplicate parsing of VideoThumbPos
2025-04-19 14:27:13 +08:00
wxnq
b449312da8 fix(docker_release): avoid duplicate occupation in docker image (#8393 close #8388)
* fix(ci): modify the method of adding permissions

* fix(build): modify the method of adding permissions (to keep up with CI)
2025-04-19 14:26:19 +08:00
MadDogOwner
52d4e8ec47 fix(lanzou): remove JavaScript comments from response data (#8386)
* feat(lanzou): add RemoveJSComment function to clean JavaScript comments from HTML

* feat(lanzou): remove comments from share page data in getFilesByShareUrl function

* fix(lanzou): optimize RemoveJSComment function to improve comment removal logic
2025-04-19 14:24:43 +08:00
New Future
28e5b5759e feat(azure_blob): implement GetRootId interface in Addition struct (#8389)
fixes failed directory fetches
2025-04-19 14:23:48 +08:00
asdfghjkl
477c43971f feat(doubao_share): support doubao_share link (#8376)
Co-authored-by: anobodys <anobodys@gmail.com>
2025-04-19 14:22:43 +08:00
Yifan Gao
0a9921fa79 fix(aliyundrive_open): resolve file duplication issues and improve path handling (#8358)
* fix(aliyundrive_open): resolve file duplication issues and improve path handling

1. Fix file duplication by implementing a new removeDuplicateFiles method that cleans up duplicate files after operations
2. Change Move operation to use "ignore" for check_name_mode instead of "refuse" to allow moves when destination has same filename
3. Set Copy operation to handle duplicates by removing them after successful copy
4. Improve path handling for all file operations (Move, Rename, Put, MakeDir) by properly maintaining the full path of objects
5. Implement GetRoot interface for proper root object initialization with correct path
6. Add proper path management in List operation to ensure objects have correct paths
7. Fix path handling in error cases and improve logging of failures

* refactor(aliyundrive_open): change error logging to warnings for duplicate file removal

Updated the Move, Rename, and Copy methods to log warnings instead of errors when duplicate file removal fails, as the primary operations have already completed successfully. This improves the clarity of logs without affecting the functionality.

* Update drivers/aliyundrive_open/util.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-19 14:22:12 +08:00
Lee CQ
88abb323cb feat(url-tree): implement the Put interface to support adding links directly to the UrlTree on the web side (#8312)
* feat(url-tree): support PUT

* feat(url-tree): when updating the UrlTree, the path and content need to be split #8303

* fix: stdpath.Join call

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Andy Hsu <i@nn.ci>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-12 17:27:56 +08:00
asdfghjkl
f0b1aeaf8d feat(doubao): support upload (#8302 close #8335)
* feat(doubao): support upload

* fix(doubao): fix file list cursor

* fix: handle strconv.Atoi err

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: anobodys <anobodys@gmail.com>
Co-authored-by: Andy Hsu <i@nn.ci>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-12 17:12:40 +08:00
Yifan Gao
c8470b9a2a fix(fs): remove old target object from cache before updating (#8352) 2025-04-12 17:09:46 +08:00
Dgs
d0ee90cd11 fix(thunder): fix login issue (#8342 close #8288) 2025-04-12 17:05:58 +08:00
Dgs
544a7ea022 fix(pikpak&pikpak_share): fix WebPackageName (#8305) 2025-04-12 17:03:58 +08:00
j2rong4cn
4f5cabc725 feat: add h2c for http server (#8294)
* feat: add h2c for http server

* chore(config): add EnableH2c option
2025-04-12 17:02:51 +08:00
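h2c (HTTP/2 over cleartext TCP) is enabled in Go by wrapping the handler with golang.org/x/net/http2/h2c; a minimal example of that wiring, with the handler and port chosen for illustration:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "proto: %s\n", r.Proto) // "HTTP/2.0" for h2c clients
	})
	srv := &http.Server{
		Addr: ":5244",
		// h2c.NewHandler accepts cleartext HTTP/2 (prior knowledge or
		// Upgrade: h2c) while still serving plain HTTP/1.1.
		Handler: h2c.NewHandler(mux, &http2.Server{}),
	}
	log.Fatal(srv.ListenAndServe())
}
```

`curl --http2-prior-knowledge http://localhost:5244/` then talks HTTP/2 without TLS, which is useful behind reverse proxies that speak h2c.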
j2rong4cn
a2f266277c fix(net): unexpected write (#8291 close #8281) 2025-04-12 17:01:52 +08:00
jerry
a4bfbf8a83 fix(ipfs): fix problems (#8252)
* fix: 🐛 (ipfs): fix the list error caused by an improper path-join function

Use more standardized path joining, fixing paths containing Chinese characters or symbols that could not be accessed.

* refactor: naming conventions

* Remove redundant conditional checks

* fix: refactor the code with the withresult method and add a get method to improve performance

* fix: allow the get method to fetch directories

Remove redundant checks

* fix: allow copy, rename, and move to overwrite

* fix: fix directories being deleted by the move method

* refactor: tidy up the code around returning Path

* fix: fix ipfs paths being inaccessible due to the get method

* fix: fix the get method's incorrect path handling

Fix the get method and remove an accidentally added directory

* fix: fix path join

use path join instead of filepath join to avoid OS-specific problems

* fix: rm filepath ref

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2025-04-12 17:01:30 +08:00
j2rong4cn
ddffacf07b perf: optimize IO read/write usage (#8243)
* perf: optimize IO read/write usage

* .

* Update drivers/139/driver.go

Co-authored-by: MadDogOwner <xiaoran@xrgzs.top>

---------

Co-authored-by: MadDogOwner <xiaoran@xrgzs.top>
2025-04-12 16:55:31 +08:00
xiaoQQya
3375c26c41 perf(quark_uc&quark_uc_tv): native proxy multithreading (#8287)
* perf(quark_uc): native proxy multithreading

* perf(quark_uc_tv): native proxy multithreading

* chore(fs): file query result add id
2025-04-03 20:50:29 +08:00
asdfghjkl
ab68faef44 fix(baidu_netdisk): add another video crack api (#8275)
Co-authored-by: anobodys <anobodys@gmail.com>
2025-04-03 20:44:49 +08:00
New Future
2e21df0661 feat(driver): add Azure Blob Storage driver (#8261)
* add azure-blob driver

* fix nested folders copy

* feat(driver): add Azure Blob Storage driver

Implements the Azure Blob Storage driver, supporting the following features:
- Initialize connections using shared key authentication
- List directories and files
- Generate temporary SAS URLs for file access
- Create directories
- Move and rename files/folders
- Copy files/folders
- Delete files/folders
- Upload files with progress tracking

This driver allows users to seamlessly access and manage data in Azure Blob Storage through the AList platform.

* feat(driver): update help doc for Azure Blob

* doc(readme): add new driver

* Update drivers/azure_blob/driver.go

fix(azure): fix name check

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update README.md

doc(readme): fix the link

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix(azure): fix log and link

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 20:43:21 +08:00
MadDogOwner
af18cb138b feat(139): add option ReportRealSize (#8244 close #8141)
* feat(139): handle family upload errors

* feat(139): add option `ReportRealSize`

* Update drivers/139/driver.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 20:41:59 +08:00
j2rong4cn
31c55a2adf fix(archive): unable to preview (#8248)
* fix(archive): unable to preview

* fix bug
2025-04-03 20:41:05 +08:00
MadDogOwner
465dd1703d feat(cloudreve): s3 policy support (#8245)
* feat(cloudreve): s3 policy support

* fix(cloudreve): correct potential off-by-one error in `etags` initialization
2025-04-03 20:40:19 +08:00
j2rong4cn
a6304285b6 fix: revert "refactor(net): pass request header" (#8269)
5be50e77d9
2025-04-03 20:35:52 +08:00
YangXu
affd0cecd1 fix(pikpak&pikpak_share): update algorithms (#8278) 2025-04-03 20:35:14 +08:00
MadDogOwner
37640221c0 fix(doubao): update file size type to int64 (#8289) 2025-04-03 20:34:27 +08:00
Andy Hsu
e4bd223d1c fix(deps): update 115-sdk-go to v0.1.5 2025-04-03 20:29:53 +08:00
jerry
0cde4e73d6 feat(ipfs): better ipfs support (#8225)
* feat: better ipfs support

fixed MFS CRUD, added IPNS support

* Update driver.go

clean up
2025-03-27 23:25:23 +08:00
Ljcbaby
7b62dcb88c fix(baidu_netdisk): duplicate retry (#8210 redo #7972, link #8180) 2025-03-27 23:22:55 +08:00
never lee
c38dc6df7c fix(115_open): support multipart upload (#8229)
Co-authored-by: neverlee <neverlea@formail.com>
2025-03-27 23:22:08 +08:00
MadDogOwner
5668e4a4ea feat(doubao): add Doubao driver (#8232 closes #8020 #8206)
* feat(doubao): implement List()

* feat(doubao): implement Link()

* feat(doubao): implement MakeDir()

* refactor(doubao): add type Object to store key

* feat(doubao): implement Move()

* feat(doubao): implement Rename()

* feat(doubao): implement Remove()
2025-03-27 23:21:42 +08:00
KirCute
1335f80362 feat(archive): support multipart archives (#8184 close #8015)
* feat(archive): multipart support & sevenzip tool

* feat(archive): rardecode tool

* feat(archive): support decompress multi-selected

* fix(archive): decompress response filter internal

* feat(archive): support multipart zip

* fix: more applicable AcceptedMultipartExtensions interface
2025-03-27 23:20:44 +08:00
KirCute
704d3854df feat(alist_v3): support forward archive requests (#8230)
* feat(alist_v3): support forward archive requests

* fix: encode all inner path
2025-03-27 23:18:34 +08:00
MadDogOwner
44cc71d354 fix(cloudreve): enable SetContentLength for uploading to local policy (#8228 close #8174)
* fix(cloudreve): return an error message on upload failure instead of deletion success

* fix(cloudreve): enable SetContentLength for uploading to local policy

* refactor(cloudreve): move local policy upload logic to utils for better error handling

* refactor(cloudreve): unified upload code style

* refactor(cloudreve): improve user agent handling
2025-03-27 23:18:15 +08:00
KirCute
9a9aee9ac6 feat(alias): support writing to non-ambiguous paths (#8216)
* feat(alias): support writing to non-ambiguous paths

* feat(alias): support extract concurrency

* fix(alias): extract URL not passing the query
2025-03-27 23:17:45 +08:00
KirCute
4fcc3a187e fix(traffic): duplicate semaphore release when uploading (#8211 close #8180) 2025-03-27 23:15:47 +08:00
Ljcbaby
10a76c701d fix(db): support postgres trust/peer mode (#8198 close #8066) 2025-03-27 23:15:04 +08:00
KirCute
6e13923225 fix(sftp-server): postgre cannot store control characters (#8188 close #8186) 2025-03-27 23:14:36 +08:00
Andy Hsu
32890da29f fix(115_open): upgrade 115-sdk-go dependency to v0.1.4 2025-03-21 19:06:09 +08:00
Andy Hsu
758554a40f fix(115_open): upgrade 115-sdk-go dependency to v0.1.3 (close #8169) 2025-03-19 21:47:42 +08:00
Andy Hsu
4563aea47e fix(115_open): rename delay to take effect (close #8156) 2025-03-18 22:25:04 +08:00
Andy Hsu
35d6f3b8fc fix(115_open): upgrade sdk (close #8151) 2025-03-18 22:21:50 +08:00
j2rong4cn
b4e6ab12d9 refactor: FilterReadMeScripts (#8154 close #8150)
* refactor: FilterReadMeScripts

* .
2025-03-18 22:02:33 +08:00
Andy Hsu
3499c4db87 feat: 115 open driver (#8139)
* wip: 115 open

* chore(go.mod): update 115-sdk-go dependency version

* feat(115_open): implement directory management and file operations

* chore(go.mod): update 115-sdk-go dependency to v0.1.1 and adjust callback handling in driver

* chore: rename driver
2025-03-17 00:52:09 +08:00
hshpy
d20f41d687 fix: missing handling of RangeReadCloser (#8146) 2025-03-16 22:14:44 +08:00
Andy Hsu
d16ba65f42 fix(lang): initialize configuration in LangCmd before generating language JSON file 2025-03-16 16:37:33 +08:00
hshpy
c82e632ee1 fix: potential XSS vulnerabilities (#7923)
* fix: potential XSS vulnerabilities

* feat: support filter and render for readme.md

* chore: set ReadMeAutoRender to true

* fix attachFileName undefined

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2025-03-15 23:28:40 +08:00
折纸飞机
04f5525f20 fix(s3): incorrectly added slash before the Bucket name (#8083 close #8001) 2025-03-15 00:21:24 +08:00
shniubobo
28b61a93fd feat(webdav): support oc:checksums (#8064 close #7472)
Ref: #7472
2025-03-15 00:21:07 +08:00
j2rong4cn
0126af4de0 fix(crypt): premature close of MFile (#8132 close #8119)
* fix(crypt): premature close of MFile

* refactor
2025-03-15 00:13:30 +08:00
MadDogOwner
7579d44517 fix(onedrive): set req.ContentLength (#8081)
* fix(onedrive): set req.ContentLength

* fix(onedrive_app): set req.ContentLength

* fix(cloudreve): set req.ContentLength
2025-03-15 00:12:37 +08:00
MadDogOwner
5dfea714d8 fix(cloudreve): use milliseconds timestamp in last_modified (#8133) 2025-03-15 00:12:15 +08:00
Ljcbaby
370a6c15a9 fix(baidu_netdisk): remove duplicate retry (#7972) 2025-03-01 19:00:36 +08:00
Ljcbaby
2570707a06 feat(baidu_netdisk): support dynamic slice size for low-bandwidth upload cases (#7965)
* Dynamic slice size

* Add rigorous test results
2025-03-01 18:46:05 +08:00
j2rong4cn
4145734c18 refactor(net): pass request header (#8031 close #8008)
* refactor(net): pass request header

* feat(proxy): add `Etag` to response header

* refactor
2025-03-01 18:35:34 +08:00
KirCute
646c7bcd21 fix(archive): use another sign for extraction (#7982) 2025-03-01 18:34:33 +08:00
KirCute
cdc41595bc feat(github): support GPG verification (#7996 close #7986)
* feat(github): support GPG verification

* chore
2025-02-24 23:12:23 +08:00
KirCute_ECT
79bef0be9e chore: fix build failed (#8005) 2025-02-16 15:11:48 +08:00
KirCute_ECT
c230f24ebe fix(archive): decode filename when decompressing zips (#7998 close #7988) 2025-02-16 12:25:01 +08:00
KirCute_ECT
30d8c20756 feat(archive): support deprioritize previewing (#7984) 2025-02-16 12:24:10 +08:00
KirCute_ECT
3b71500f23 feat(traffic): support limit task worker count & file stream rate (#7948)
* feat: set task workers num & client stream rate limit

* feat: server stream rate limit

* upgrade xhofe/tache

* .
2025-02-16 12:22:11 +08:00
foxxorcat
399336b33c fix(189pc): transfer rename (#7958)
* fix(189pc): transfer rename

* fix: OverwriteUpload

* fix: change search method

* fix

* fix
2025-02-16 12:21:34 +08:00
KirCute_ECT
36b4204623 feat(github): support github proxy (#7979 close #7963) 2025-02-16 12:21:03 +08:00
YangRucheng
f25be154c6 fix(ilanzou): add header X-Forwarded-For to solve IP ban (#7977)
* fix: warning

* feat: ip header

* fix: ip header for fs link
2025-02-16 12:20:28 +08:00
Sakana
ec3fc945a3 fix(feiji): modify the request header (#7902 close #7890) 2025-02-09 18:35:39 +08:00
MadDogOwner
3f9bed3d5f feat(bootstrap): add .url to proxy types (#7928) 2025-02-09 18:33:38 +08:00
Jealous
b9ad18bd0a feat(recursive-move): Advanced conflict policy for preventing unintentional overwriting (#7906) 2025-02-09 18:32:57 +08:00
Jealous
0219c4e15a fix(index): fix the issue where ignored paths are not updated (#7907) 2025-02-09 18:31:43 +08:00
Feng.YJ
d983a4ebcb refactor(cmd): use std runtime package to get go version info (#7964)
* refactor(cmd): use std `runtime` package to get go version info

- Remove the `GoVersion` variable.
- Remove overriding `GoVersion` by ldflags in `build.sh`.
- Get go version, OS and arch from the constants in the std `runtime` package instead of compile time.

* chore(ci): remove `GoVersion` flag from workflows

Remove GoVersion flag from beta_release.yml and build.yml workflows.

> Reduce compile-time dependencies.
2025-02-09 18:30:56 +08:00
Sakana
f795807753 feat(github_releases): support dir size for show all version (#7938)
* refactor

* Change the default RepoStructure

* feat: support using gh-proxy
2025-02-09 18:30:38 +08:00
hshpy
6164e4577b fix: missing args when using alias driver (#7941 close #7932) 2025-02-05 19:22:10 +08:00
Sakana
39bde328ee fix(lenovonas_share): the size of the directory (#7914) 2025-02-01 17:32:58 +08:00
KirCute_ECT
779c293f04 fix(driver): implement canceling and updating progress for putting for some drivers (#7847)
* fix(driver): additionally implement canceling and updating progress for putting for some drivers

* refactor: add driver archive api into template

* fix(123): use built-in MD5 to avoid caching the full file

* .

* fix build failed
2025-02-01 17:29:55 +08:00
abc1763613206
b9f397d29f fix(139): restore the Account handling, partially reverts #7850 (#7900 close #7784) 2025-01-30 11:25:41 +08:00
Jiang Xiang
d53eecc229 fix(febbox): panic due to slice out of range (#7898 close #7889) 2025-01-30 11:24:07 +08:00
Andy Hsu
f88fd83d4a feat(ci): use go-cross/cgo-actions for dev build 2025-01-28 18:57:09 +08:00
283 changed files with 15225 additions and 2433 deletions

2
.github/FUNDING.yml vendored
View File

@@ -10,4 +10,4 @@ liberapay: # Replace with a single Liberapay username
 issuehunt: # Replace with a single IssueHunt username
 otechie: # Replace with a single Otechie username
 lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
-custom: ['https://alist.nn.ci/guide/sponsor.html']
+custom: ['https://alistgo.com/guide/sponsor.html']

View File

@@ -16,14 +16,14 @@ body:
         您必须勾选以下所有内容否则您的issue可能会被直接关闭。或者您可以去[讨论区](https://github.com/alist-org/alist/discussions)
       options:
         - label: |
-            I have read the [documentation](https://alist.nn.ci).
-            我已经阅读了[文档](https://alist.nn.ci)。
+            I have read the [documentation](https://alistgo.com).
+            我已经阅读了[文档](https://alistgo.com)。
         - label: |
             I'm sure there are no duplicate issues or discussions.
             我确定没有重复的issue或讨论。
         - label: |
-            I'm sure it's due to `AList` and not something else(such as [Network](https://alist.nn.ci/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host) ,`Dependencies` or `Operational`).
-            我确定是`AList`的问题,而不是其他原因(例如[网络](https://alist.nn.ci/zh/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host)`依赖`或`操作`)。
+            I'm sure it's due to `AList` and not something else(such as [Network](https://alistgo.com/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host) ,`Dependencies` or `Operational`).
+            我确定是`AList`的问题,而不是其他原因(例如[网络](https://alistgo.com/zh/faq/howto.html#tls-handshake-timeout-read-connection-reset-by-peer-dns-lookup-failed-connect-connection-refused-client-timeout-exceeded-while-awaiting-headers-no-such-host)`依赖`或`操作`)。
         - label: |
             I'm sure this issue is not fixed in the latest version.
             我确定这个问题在最新版本中没有被修复。

View File

@@ -7,7 +7,7 @@ body:
       label: Please make sure of the following things
       description: You may select more than one, even select all.
       options:
-        - label: I have read the [documentation](https://alist.nn.ci).
+        - label: I have read the [documentation](https://alistgo.com).
         - label: I'm sure there are no duplicate issues or discussions.
         - label: I'm sure this feature is not implemented.
         - label: I'm sure it's a reasonable and popular requirement.

View File

@@ -94,7 +94,6 @@ jobs:
           out-dir: build
           x-flags: |
             github.com/alist-org/alist/v3/internal/conf.BuiltAt=$built_at
-            github.com/alist-org/alist/v3/internal/conf.GoVersion=$go_version
             github.com/alist-org/alist/v3/internal/conf.GitAuthor=Xhofe
             github.com/alist-org/alist/v3/internal/conf.GitCommit=$git_commit
             github.com/alist-org/alist/v3/internal/conf.Version=$tag
@@ -120,7 +119,7 @@ jobs:
       - name: Checkout repo
         uses: actions/checkout@v4
         with:
-          repository: alist-org/desktop-release
+          repository: AlistGo/desktop-release
           ref: main
           persist-credentials: false
           fetch-depth: 0
@@ -136,4 +135,4 @@ jobs:
         with:
           github_token: ${{ secrets.MY_TOKEN }}
           branch: main
-          repository: alist-org/desktop-release
+          repository: AlistGo/desktop-release

View File

@@ -15,14 +15,19 @@ jobs:
     strategy:
       matrix:
         platform: [ubuntu-latest]
-        go-version: [ '1.21' ]
+        target:
+          - darwin-amd64
+          - darwin-arm64
+          - windows-amd64
+          - linux-arm64-musl
+          - linux-amd64-musl
+          - windows-arm64
+          - android-arm64
     name: Build
     runs-on: ${{ matrix.platform }}
+    env:
+      GOPROXY: https://proxy.golang.org,direct
     steps:
-      - name: Setup Go
-        uses: actions/setup-go@v5
-        with:
-          go-version: ${{ matrix.go-version }}
       - name: Checkout
        uses: actions/checkout@v4
@@ -30,19 +35,29 @@ jobs:
       - uses: benjlevesque/short-sha@v3.0
         id: short-sha
-      - name: Install dependencies
-        run: |
-          sudo snap install zig --classic --beta
-          docker pull crazymax/xgo:latest
-          go install github.com/crazy-max/xgo@latest
-          sudo apt install upx
+      - name: Setup Go
+        uses: actions/setup-go@v5
+        with:
+          go-version: '1.22'
       - name: Setup web
         run: bash build.sh dev web
       - name: Build
-        run: |
-          bash build.sh dev
+        uses: go-cross/cgo-actions@v1
+        with:
+          targets: ${{ matrix.target }}
+          musl-target-format: $os-$musl-$arch
+          out-dir: build
+          x-flags: |
+            github.com/alist-org/alist/v3/internal/conf.BuiltAt=$built_at
+            github.com/alist-org/alist/v3/internal/conf.GitAuthor=Xhofe
+            github.com/alist-org/alist/v3/internal/conf.GitCommit=$git_commit
+            github.com/alist-org/alist/v3/internal/conf.Version=$tag
+            github.com/alist-org/alist/v3/internal/conf.WebVersion=dev
       - name: Upload artifact
         uses: actions/upload-artifact@v4
         with:
-          name: alist_${{ env.SHA }}
-          path: dist
+          name: alist_${{ env.SHA }}_${{ matrix.target }}
+          path: build/*

View File

@@ -72,7 +72,7 @@ jobs:
       - name: Checkout repo
         uses: actions/checkout@v4
         with:
-          repository: alist-org/desktop-release
+          repository: AlistGo/desktop-release
           ref: main
           persist-credentials: false
           fetch-depth: 0
@@ -89,4 +89,4 @@ jobs:
         with:
           github_token: ${{ secrets.MY_TOKEN }}
           branch: main
-          repository: alist-org/desktop-release
+          repository: AlistGo/desktop-release

View File

@@ -18,6 +18,7 @@ env:
   REGISTRY: 'xhofe/alist'
   REGISTRY_USERNAME: 'xhofe'
   REGISTRY_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN }}
+  GITHUB_CR_REPO: ghcr.io/${{ github.repository }}
   ARTIFACT_NAME: 'binaries_docker_release'
   RELEASE_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/386,linux/arm/v6,linux/s390x,linux/ppc64le,linux/riscv64'
   IMAGE_PUSH: ${{ github.event_name == 'push' }}
@@ -114,11 +115,21 @@ jobs:
           username: ${{ env.REGISTRY_USERNAME }}
           password: ${{ env.REGISTRY_PASSWORD }}
+      - name: Login to GHCR
+        uses: docker/login-action@v3
+        with:
+          logout: true
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
       - name: Docker meta
         id: meta
         uses: docker/metadata-action@v5
         with:
-          images: ${{ env.REGISTRY }}
+          images: |
+            ${{ env.REGISTRY }}
+            ${{ env.GITHUB_CR_REPO }}
           tags: ${{ env.IMAGE_IS_PROD == 'true' && '' || env.IMAGE_TAGS_BETA }}
           flavor: |
             ${{ env.IMAGE_IS_PROD == 'true' && 'latest=true' || '' }}

View File

@@ -32,10 +32,9 @@ RUN apk update && \
     /opt/aria2/.aria2/tracker.sh ; \
     rm -rf /var/cache/apk/*
-COPY --from=builder /app/bin/alist ./
-COPY entrypoint.sh /entrypoint.sh
-RUN chmod +x /opt/alist/alist && \
-    chmod +x /entrypoint.sh && /entrypoint.sh version
+COPY --chmod=755 --from=builder /app/bin/alist ./
+COPY --chmod=755 entrypoint.sh /entrypoint.sh
+RUN /entrypoint.sh version
 ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
 VOLUME /opt/alist/data/

View File

@@ -1,4 +1,4 @@
-FROM alpine:edge
+FROM alpine:3.20.7
 ARG TARGETPLATFORM
 ARG INSTALL_FFMPEG=false
@@ -24,10 +24,9 @@ RUN apk update && \
     /opt/aria2/.aria2/tracker.sh ; \
     rm -rf /var/cache/apk/*
-COPY /build/${TARGETPLATFORM}/alist ./
-COPY entrypoint.sh /entrypoint.sh
-RUN chmod +x /opt/alist/alist && \
-    chmod +x /entrypoint.sh && /entrypoint.sh version
+COPY --chmod=755 /build/${TARGETPLATFORM}/alist ./
+COPY --chmod=755 entrypoint.sh /entrypoint.sh
+RUN /entrypoint.sh version
 ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
 VOLUME /opt/alist/data/


@@ -1,5 +1,5 @@
<div align="center">
<a href="https://alist.nn.ci"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<a href="https://alistgo.com"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<p><em>🗂A file list program that supports multiple storages, powered by Gin and Solidjs.</em></p>
<div>
<a href="https://goreportcard.com/report/github.com/alist-org/alist/v3">
@@ -31,7 +31,7 @@
<a href="https://hub.docker.com/r/xhofe/alist">
<img src="https://img.shields.io/docker/pulls/xhofe/alist?color=%2348BB78&logo=docker&label=pulls" alt="Downloads" />
</a>
<a href="https://alist.nn.ci/guide/sponsor.html">
<a href="https://alistgo.com/guide/sponsor.html">
<img src="https://img.shields.io/badge/%24-sponsor-F87171.svg" alt="sponsor" />
</a>
</div>
@@ -77,6 +77,7 @@ English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing
- [x] [Dropbox](https://www.dropbox.com/)
- [x] [FeijiPan](https://www.feijipan.com/)
- [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] Easy to deploy and out-of-the-box
- [x] File preview (PDF, markdown, code, plain text, ...)
- [x] Image preview in gallery mode
@@ -87,7 +88,7 @@ English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing
- [x] Dark mode
- [x] I18n
- [x] Protected routes (password protection and authentication)
- [x] WebDav (see https://alist.nn.ci/guide/webdav.html for details)
- [x] WebDav (see https://alistgo.com/guide/webdav.html for details)
- [x] [Docker Deploy](https://hub.docker.com/r/xhofe/alist)
- [x] Cloudflare Workers proxy
- [x] File/Folder package download
@@ -100,6 +101,10 @@ English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing
<https://alistgo.com/>
## API Documentation (via Apifox):
<https://alist-public.apifox.cn/>
## Demo
<https://al.nn.ci>
@@ -111,13 +116,11 @@ Please go to our [discussion forum](https://github.com/alist-org/alist/discussio
## Sponsor
AList is open-source software. If you happen to like this project and want me to keep going, please consider sponsoring me or providing a single donation! Thanks for all the love and support:
https://alist.nn.ci/guide/sponsor.html
https://alistgo.com/guide/sponsor.html
### Special sponsors
- [VidHub](https://apps.apple.com/app/apple-store/id1659622164?pt=118612019&ct=alist&mt=8) - An elegant cloud video player within the Apple ecosystem. Support for iPhone, iPad, Mac, and Apple TV.
- [亚洲云](https://www.asiayun.com/aff/QQCOOQKZ) - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商 (sponsored Chinese API server)
- [找资源](http://zhaoziyuan2.cc/) - 阿里云盘资源搜索引擎
## Contributors


@@ -1,5 +1,5 @@
<div align="center">
<a href="https://alist.nn.ci"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<a href="https://alistgo.com"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<p><em>🗂一个支持多存储的文件列表程序,使用 Gin 和 Solidjs。</em></p>
<div>
<a href="https://goreportcard.com/report/github.com/alist-org/alist/v3">
@@ -31,7 +31,7 @@
<a href="https://hub.docker.com/r/xhofe/alist">
<img src="https://img.shields.io/docker/pulls/xhofe/alist?color=%2348BB78&logo=docker&label=pulls" alt="Downloads" />
</a>
<a href="https://alist.nn.ci/zh/guide/sponsor.html">
<a href="https://alistgo.com/zh/guide/sponsor.html">
<img src="https://img.shields.io/badge/%24-sponsor-F87171.svg" alt="sponsor" />
</a>
</div>
@@ -86,7 +86,7 @@
- [x] 黑暗模式
- [x] 国际化
- [x] 受保护的路由(密码保护和身份验证)
- [x] WebDav (具体见 https://alist.nn.ci/zh/guide/webdav.html)
- [x] WebDav (具体见 https://alistgo.com/zh/guide/webdav.html)
- [x] [Docker 部署](https://hub.docker.com/r/xhofe/alist)
- [x] Cloudflare workers 中转
- [x] 文件/文件夹打包下载
@@ -97,7 +97,11 @@
## 文档
<https://alist.nn.ci/zh/>
<https://alistgo.com/zh/>
## API 文档(通过 Apifox 提供)
<https://alist-public.apifox.cn/>
## Demo
@@ -109,13 +113,11 @@
## 赞助
AList 是一个开源软件,如果你碰巧喜欢这个项目并希望我继续下去,请考虑赞助我或提供一个单一的捐款,感谢所有的爱和支持:https://alist.nn.ci/zh/guide/sponsor.html
AList 是一个开源软件,如果你碰巧喜欢这个项目并希望我继续下去,请考虑赞助我或提供一个单一的捐款,感谢所有的爱和支持:https://alistgo.com/zh/guide/sponsor.html
### 特别赞助
- [VidHub](https://apps.apple.com/app/apple-store/id1659622164?pt=118612019&ct=alist&mt=8) - 苹果生态下优雅的网盘视频播放器,iPhone、iPad、Mac、Apple TV 全平台支持。
- [亚洲云](https://www.asiayun.com/aff/QQCOOQKZ) - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商 (国内API服务器赞助)
- [找资源](http://zhaoziyuan2.cc/) - 阿里云盘资源搜索引擎
## 贡献者


@@ -1,5 +1,5 @@
<div align="center">
<a href="https://alist.nn.ci"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<a href="https://alistgo.com"><img width="100px" alt="logo" src="https://cdn.jsdelivr.net/gh/alist-org/logo@main/logo.svg"/></a>
<p><em>🗂Gin と Solidjs による、複数のストレージをサポートするファイルリストプログラム。</em></p>
<div>
<a href="https://goreportcard.com/report/github.com/alist-org/alist/v3">
@@ -31,7 +31,7 @@
<a href="https://hub.docker.com/r/xhofe/alist">
<img src="https://img.shields.io/docker/pulls/xhofe/alist?color=%2348BB78&logo=docker&label=pulls" alt="Downloads" />
</a>
<a href="https://alist.nn.ci/guide/sponsor.html">
<a href="https://alistgo.com/guide/sponsor.html">
<img src="https://img.shields.io/badge/%24-sponsor-F87171.svg" alt="sponsor" />
</a>
</div>
@@ -87,7 +87,7 @@
- [x] ダークモード
- [x] 国際化
- [x] 保護されたルート (パスワード保護と認証)
- [x] WebDav (詳細は https://alist.nn.ci/guide/webdav.html を参照)
- [x] WebDav (詳細は https://alistgo.com/guide/webdav.html を参照)
- [x] [Docker デプロイ](https://hub.docker.com/r/xhofe/alist)
- [x] Cloudflare ワーカープロキシ
- [x] ファイル/フォルダパッケージのダウンロード
@@ -98,7 +98,11 @@
## ドキュメント
<https://alist.nn.ci/>
<https://alistgo.com/>
## APIドキュメント(Apifox 提供)
<https://alist-public.apifox.cn/>
## デモ
@@ -111,13 +115,11 @@
## スポンサー
AList はオープンソースのソフトウェアです。もしあなたがこのプロジェクトを気に入ってくださり、続けて欲しいと思ってくださるなら、ぜひスポンサーになってくださるか、1口でも寄付をしてくださるようご検討ください。すべての愛とサポートに感謝します:
https://alist.nn.ci/guide/sponsor.html
https://alistgo.com/guide/sponsor.html
### スペシャルスポンサー
- [VidHub](https://apps.apple.com/app/apple-store/id1659622164?pt=118612019&ct=alist&mt=8) - An elegant cloud video player within the Apple ecosystem. Support for iPhone, iPad, Mac, and Apple TV.
- [亚洲云](https://www.asiayun.com/aff/QQCOOQKZ) - 高防服务器|服务器租用|福州高防|广东电信|香港服务器|美国服务器|海外服务器 - 国内靠谱的企业级云计算服务提供商 (sponsored Chinese API server)
- [找资源](http://zhaoziyuan2.cc/) - 阿里云盘资源搜索引擎
## コントリビューター


@@ -1,6 +1,5 @@
appName="alist"
builtAt="$(date +'%F %T %z')"
goVersion=$(go version | sed 's/go version //')
gitAuthor="Xhofe <i@nn.ci>"
gitCommit=$(git log --pretty=format:"%h" -1)
@@ -22,7 +21,6 @@ echo "frontend version: $webVersion"
ldflags="\
-w -s \
-X 'github.com/alist-org/alist/v3/internal/conf.BuiltAt=$builtAt' \
-X 'github.com/alist-org/alist/v3/internal/conf.GoVersion=$goVersion' \
-X 'github.com/alist-org/alist/v3/internal/conf.GitAuthor=$gitAuthor' \
-X 'github.com/alist-org/alist/v3/internal/conf.GitCommit=$gitCommit' \
-X 'github.com/alist-org/alist/v3/internal/conf.Version=$version' \
@@ -95,7 +93,7 @@ BuildDocker() {
PrepareBuildDockerMusl() {
mkdir -p build/musl-libs
BASE="https://musl.cc/"
BASE="https://github.com/go-cross/musl-toolchain-archive/releases/latest/download/"
FILES=(x86_64-linux-musl-cross aarch64-linux-musl-cross i486-linux-musl-cross s390x-linux-musl-cross armv6-linux-musleabihf-cross armv7l-linux-musleabihf-cross riscv64-linux-musl-cross powerpc64le-linux-musl-cross)
for i in "${FILES[@]}"; do
url="${BASE}${i}.tgz"
@@ -247,7 +245,7 @@ BuildReleaseFreeBSD() {
cgo_cc="clang --target=${CGO_ARGS[$i]} --sysroot=/opt/freebsd/${os_arch}"
echo building for freebsd-${os_arch}
sudo mkdir -p "/opt/freebsd/${os_arch}"
wget -q https://download.freebsd.org/releases/${os_arch}/14.1-RELEASE/base.txz
wget -q https://download.freebsd.org/releases/${os_arch}/14.3-RELEASE/base.txz
sudo tar -xf ./base.txz -C /opt/freebsd/${os_arch}
rm base.txz
export GOOS=freebsd


@@ -1,6 +1,7 @@
package cmd
import (
"github.com/alist-org/alist/v3/internal/bootstrap/patch/v3_46_0"
"os"
"path/filepath"
"strconv"
@@ -16,7 +17,14 @@ func Init() {
bootstrap.InitConfig()
bootstrap.Log()
bootstrap.InitDB()
if v3_46_0.IsLegacyRoleDetected() {
utils.Log.Warnf("Detected legacy role format, executing ConvertLegacyRoles patch early...")
v3_46_0.ConvertLegacyRoles()
}
data.InitData()
bootstrap.InitStreamLimit()
bootstrap.InitIndex()
bootstrap.InitUpgradePatch()
}


@@ -12,6 +12,7 @@ import (
"strings"
_ "github.com/alist-org/alist/v3/drivers"
"github.com/alist-org/alist/v3/internal/bootstrap"
"github.com/alist-org/alist/v3/internal/bootstrap/data"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/op"
@@ -137,6 +138,7 @@ var LangCmd = &cobra.Command{
Use: "lang",
Short: "Generate language json file",
Run: func(cmd *cobra.Command, args []string) {
bootstrap.InitConfig()
err := os.MkdirAll("lang", 0777)
if err != nil {
utils.Log.Fatalf("failed create folder: %s", err.Error())


@@ -16,7 +16,7 @@ var RootCmd = &cobra.Command{
Short: "A file list program that supports multiple storage.",
Long: `A file list program that supports multiple storage,
built with love by Xhofe and friends in Go/Solid.js.
Complete documentation is available at https://alist.nn.ci/`,
Complete documentation is available at https://alistgo.com/`,
}
func Execute() {


@@ -4,9 +4,6 @@ import (
"context"
"errors"
"fmt"
ftpserver "github.com/KirCute/ftpserverlib-pasvportmap"
"github.com/KirCute/sftpd-alist"
"github.com/alist-org/alist/v3/internal/fs"
"net"
"net/http"
"os"
@@ -16,14 +13,19 @@ import (
"syscall"
"time"
ftpserver "github.com/KirCute/ftpserverlib-pasvportmap"
"github.com/KirCute/sftpd-alist"
"github.com/alist-org/alist/v3/cmd/flags"
"github.com/alist-org/alist/v3/internal/bootstrap"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server"
"github.com/gin-gonic/gin"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"golang.org/x/net/http2"
"golang.org/x/net/http2/h2c"
)
// ServerCmd represents the server command
@@ -47,11 +49,15 @@ the address is defined in config file`,
r := gin.New()
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
server.Init(r)
var httpHandler http.Handler = r
if conf.Conf.Scheme.EnableH2c {
httpHandler = h2c.NewHandler(r, &http2.Server{})
}
var httpSrv, httpsSrv, unixSrv *http.Server
if conf.Conf.Scheme.HttpPort != -1 {
httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort)
utils.Log.Infof("start HTTP server @ %s", httpBase)
httpSrv = &http.Server{Addr: httpBase, Handler: r}
httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler}
go func() {
err := httpSrv.ListenAndServe()
if err != nil && !errors.Is(err, http.ErrServerClosed) {
@@ -72,7 +78,7 @@ the address is defined in config file`,
}
if conf.Conf.Scheme.UnixFile != "" {
utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile)
unixSrv = &http.Server{Handler: r}
unixSrv = &http.Server{Handler: httpHandler}
go func() {
listener, err := net.Listen("unix", conf.Conf.Scheme.UnixFile)
if err != nil {
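This is the core of the H2C support: when conf.Conf.Scheme.EnableH2c is set, the gin handler is wrapped so that the cleartext HTTP and unix-socket listeners can also serve HTTP/2. A minimal standalone sketch of the same wrapping (plain net/http instead of gin; the port is arbitrary):

```go
package main

import (
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.Proto reports "HTTP/2.0" for h2c requests, "HTTP/1.1" otherwise.
		_, _ = w.Write([]byte(r.Proto + "\n"))
	})
	// h2c.NewHandler serves prior-knowledge HTTP/2 cleartext requests;
	// plain HTTP/1.1 requests pass through to the wrapped handler unchanged.
	handler := h2c.NewHandler(mux, &http2.Server{})
	_ = (&http.Server{Addr: ":8080", Handler: handler}).ListenAndServe()
}
```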


@@ -6,6 +6,7 @@ package cmd
import (
"fmt"
"os"
"runtime"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/spf13/cobra"
@@ -16,14 +17,15 @@ var VersionCmd = &cobra.Command{
Use: "version",
Short: "Show current version of AList",
Run: func(cmd *cobra.Command, args []string) {
goVersion := fmt.Sprintf("%s %s/%s", runtime.Version(), runtime.GOOS, runtime.GOARCH)
fmt.Printf(`Built At: %s
Go Version: %s
Author: %s
Commit ID: %s
Version: %s
WebVersion: %s
`,
conf.BuiltAt, conf.GoVersion, conf.GitAuthor, conf.GitCommit, conf.Version, conf.WebVersion)
`, conf.BuiltAt, goVersion, conf.GitAuthor, conf.GitCommit, conf.Version, conf.WebVersion)
os.Exit(0)
},
}


@@ -215,12 +215,12 @@ func (d *Pan115) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
var uploadResult *UploadResult
// instant transfer failed, fall back to uploading
if stream.GetSize() <= 10*utils.MB { // files smaller than 10MB are uploaded in plain mode
if uploadResult, err = d.UploadByOSS(&fastInfo.UploadOSSParams, stream, dirID); err != nil {
if uploadResult, err = d.UploadByOSS(ctx, &fastInfo.UploadOSSParams, stream, dirID, up); err != nil {
return nil, err
}
} else {
// multipart upload
if uploadResult, err = d.UploadByMultipart(&fastInfo.UploadOSSParams, stream.GetSize(), stream, dirID); err != nil {
if uploadResult, err = d.UploadByMultipart(ctx, &fastInfo.UploadOSSParams, stream.GetSize(), stream, dirID, up); err != nil {
return nil, err
}
}


@@ -2,6 +2,7 @@ package _115
import (
"bytes"
"context"
"crypto/md5"
"crypto/tls"
"encoding/hex"
@@ -13,9 +14,11 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
@@ -140,7 +143,7 @@ func (d *Pan115) DownloadWithUA(pickCode, ua string) (*driver115.DownloadInfo, e
return nil, err
}
bytes, err := crypto.Decode(string(result.EncodedData), key)
b, err := crypto.Decode(string(result.EncodedData), key)
if err != nil {
return nil, err
}
@@ -148,7 +151,7 @@ func (d *Pan115) DownloadWithUA(pickCode, ua string) (*driver115.DownloadInfo, e
downloadInfo := struct {
Url string `json:"url"`
}{}
if err := utils.Json.Unmarshal(bytes, &downloadInfo); err != nil {
if err := utils.Json.Unmarshal(b, &downloadInfo); err != nil {
return nil, err
}
@@ -271,7 +274,7 @@ func UploadDigestRange(stream model.FileStreamer, rangeSpec string) (result stri
}
// UploadByOSS uses the Aliyun SDK to upload
func (c *Pan115) UploadByOSS(params *driver115.UploadOSSParams, r io.Reader, dirID string) (*UploadResult, error) {
func (c *Pan115) UploadByOSS(ctx context.Context, params *driver115.UploadOSSParams, s model.FileStreamer, dirID string, up driver.UpdateProgress) (*UploadResult, error) {
ossToken, err := c.client.GetOSSToken()
if err != nil {
return nil, err
@@ -286,6 +289,10 @@ func (c *Pan115) UploadByOSS(params *driver115.UploadOSSParams, r io.Reader, dir
}
var bodyBytes []byte
r := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
})
if err = bucket.PutObject(params.Object, r, append(
driver115.OssOption(params, ossToken),
oss.CallbackResult(&bodyBytes),
@@ -301,7 +308,8 @@ func (c *Pan115) UploadByOSS(params *driver115.UploadOSSParams, r io.Reader, dir
}
// UploadByMultipart uploads in multipart blocks
func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize int64, stream model.FileStreamer, dirID string, opts ...driver115.UploadMultipartOption) (*UploadResult, error) {
func (d *Pan115) UploadByMultipart(ctx context.Context, params *driver115.UploadOSSParams, fileSize int64, s model.FileStreamer,
dirID string, up driver.UpdateProgress, opts ...driver115.UploadMultipartOption) (*UploadResult, error) {
var (
chunks []oss.FileChunk
parts []oss.UploadPart
@@ -313,7 +321,7 @@ func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize i
err error
)
tmpF, err := stream.CacheFullInTempFile()
tmpF, err := s.CacheFullInTempFile()
if err != nil {
return nil, err
}
@@ -372,6 +380,7 @@ func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize i
quit <- struct{}{}
}()
completedNum := atomic.Int32{}
// consumers
for i := 0; i < options.ThreadsNum; i++ {
go func(threadId int) {
@@ -384,24 +393,28 @@ func (d *Pan115) UploadByMultipart(params *driver115.UploadOSSParams, fileSize i
var part oss.UploadPart // on error, keep retrying, 3 attempts in total
for retry := 0; retry < 3; retry++ {
select {
case <-ctx.Done():
break
case <-ticker.C:
if ossToken, err = d.client.GetOSSToken(); err != nil { // re-fetch the ossToken when the ticker fires
errCh <- errors.Wrap(err, "error refreshing token")
}
default:
}
buf := make([]byte, chunk.Size)
if _, err = tmpF.ReadAt(buf, chunk.Offset); err != nil && !errors.Is(err, io.EOF) {
continue
}
if part, err = bucket.UploadPart(imur, bytes.NewBuffer(buf), chunk.Size, chunk.Number, driver115.OssOption(params, ossToken)...); err == nil {
if part, err = bucket.UploadPart(imur, driver.NewLimitedUploadStream(ctx, bytes.NewReader(buf)),
chunk.Size, chunk.Number, driver115.OssOption(params, ossToken)...); err == nil {
break
}
}
if err != nil {
errCh <- errors.Wrap(err, fmt.Sprintf("上传 %s 的第%d个分片时出现错误%v", stream.GetName(), chunk.Number, err))
errCh <- errors.Wrap(err, fmt.Sprintf("上传 %s 的第%d个分片时出现错误%v", s.GetName(), chunk.Number, err))
} else {
num := completedNum.Add(1)
up(float64(num) * 100.0 / float64(len(chunks)))
}
UploadedPartsCh <- part
}
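Note how progress is reported in the hunk above: the consumer goroutines share an atomic counter, and each finished part reports completed/total as a percentage. The same pattern in isolation (the up callback and the part bodies are placeholders):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	const totalParts = 4
	up := func(percent float64) { fmt.Printf("progress: %.0f%%\n", percent) }
	var completed atomic.Int32
	var wg sync.WaitGroup
	for i := 0; i < totalParts; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// ... upload one part here ...
			n := completed.Add(1) // atomic: safe from any worker goroutine
			up(float64(n) * 100.0 / float64(totalParts))
		}()
	}
	wg.Wait()
}
```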

drivers/115_open/driver.go (new file, 335 lines)

@@ -0,0 +1,335 @@
package _115_open
import (
"context"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/cmd/flags"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
sdk "github.com/xhofe/115-sdk-go"
"golang.org/x/time/rate"
)
type Open115 struct {
model.Storage
Addition
client *sdk.Client
limiter *rate.Limiter
}
func (d *Open115) Config() driver.Config {
return config
}
func (d *Open115) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Open115) Init(ctx context.Context) error {
d.client = sdk.New(sdk.WithRefreshToken(d.Addition.RefreshToken),
sdk.WithAccessToken(d.Addition.AccessToken),
sdk.WithOnRefreshToken(func(s1, s2 string) {
d.Addition.AccessToken = s1
d.Addition.RefreshToken = s2
op.MustSaveDriverStorage(d)
}))
if flags.Debug || flags.Dev {
d.client.SetDebug(true)
}
_, err := d.client.UserInfo(ctx)
if err != nil {
return err
}
if d.Addition.LimitRate > 0 {
d.limiter = rate.NewLimiter(rate.Limit(d.Addition.LimitRate), 1)
}
return nil
}
func (d *Open115) WaitLimit(ctx context.Context) error {
if d.limiter != nil {
return d.limiter.Wait(ctx)
}
return nil
}
func (d *Open115) Drop(ctx context.Context) error {
return nil
}
func (d *Open115) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var res []model.Obj
pageSize := int64(200)
offset := int64(0)
for {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
resp, err := d.client.GetFiles(ctx, &sdk.GetFilesReq{
CID: dir.GetID(),
Limit: pageSize,
Offset: offset,
ASC: d.Addition.OrderDirection == "asc",
O: d.Addition.OrderBy,
// Cur: 1,
ShowDir: true,
})
if err != nil {
return nil, err
}
res = append(res, utils.MustSliceConvert(resp.Data, func(src sdk.GetFilesResp_File) model.Obj {
obj := Obj(src)
return &obj
})...)
if len(res) >= int(resp.Count) {
break
}
offset += pageSize
}
return res, nil
}
func (d *Open115) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
var ua string
if args.Header != nil {
ua = args.Header.Get("User-Agent")
}
if ua == "" {
ua = base.UserAgent
}
obj, ok := file.(*Obj)
if !ok {
return nil, fmt.Errorf("can't convert obj")
}
pc := obj.Pc
resp, err := d.client.DownURL(ctx, pc, ua)
if err != nil {
return nil, err
}
u, ok := resp[obj.GetID()]
if !ok {
return nil, fmt.Errorf("can't get link")
}
return &model.Link{
URL: u.URL.URL,
Header: http.Header{
"User-Agent": []string{ua},
},
}, nil
}
func (d *Open115) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
resp, err := d.client.Mkdir(ctx, parentDir.GetID(), dirName)
if err != nil {
return nil, err
}
return &Obj{
Fid: resp.FileID,
Pid: parentDir.GetID(),
Fn: dirName,
Fc: "0",
Upt: time.Now().Unix(),
Uet: time.Now().Unix(),
UpPt: time.Now().Unix(),
}, nil
}
func (d *Open115) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
_, err := d.client.Move(ctx, &sdk.MoveReq{
FileIDs: srcObj.GetID(),
ToCid: dstDir.GetID(),
})
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Open115) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
_, err := d.client.UpdateFile(ctx, &sdk.UpdateFileReq{
FileID: srcObj.GetID(),
FileNma: newName,
})
if err != nil {
return nil, err
}
obj, ok := srcObj.(*Obj)
if ok {
obj.Fn = newName
}
return srcObj, nil
}
func (d *Open115) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
_, err := d.client.Copy(ctx, &sdk.CopyReq{
PID: dstDir.GetID(),
FileID: srcObj.GetID(),
NoDupli: "1",
})
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Open115) Remove(ctx context.Context, obj model.Obj) error {
if err := d.WaitLimit(ctx); err != nil {
return err
}
_obj, ok := obj.(*Obj)
if !ok {
return fmt.Errorf("can't convert obj")
}
_, err := d.client.DelFile(ctx, &sdk.DelFileReq{
FileIDs: _obj.GetID(),
ParentID: _obj.Pid,
})
if err != nil {
return err
}
return nil
}
func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
if err := d.WaitLimit(ctx); err != nil {
return err
}
tempF, err := file.CacheFullInTempFile()
if err != nil {
return err
}
// calculate the SHA1 of the full file
sha1, err := utils.HashReader(utils.SHA1, tempF)
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
// SHA1 of the first 128KB
sha1128k, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, 128*1024))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
// 1. Init
resp, err := d.client.UploadInit(ctx, &sdk.UploadInitReq{
FileName: file.GetName(),
FileSize: file.GetSize(),
Target: dstDir.GetID(),
FileID: strings.ToUpper(sha1),
PreID: strings.ToUpper(sha1128k),
})
if err != nil {
return err
}
if resp.Status == 2 {
return nil
}
// 2. two-way verification
if utils.SliceContains([]int{6, 7, 8}, resp.Status) {
signCheck := strings.Split(resp.SignCheck, "-") // "sign_check": "2392148-2392298" means the SHA1 of bytes 2392148 through 2392298, both endpoints inclusive
start, err := strconv.ParseInt(signCheck[0], 10, 64)
if err != nil {
return err
}
end, err := strconv.ParseInt(signCheck[1], 10, 64)
if err != nil {
return err
}
_, err = tempF.Seek(start, io.SeekStart)
if err != nil {
return err
}
signVal, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, end-start+1))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
resp, err = d.client.UploadInit(ctx, &sdk.UploadInitReq{
FileName: file.GetName(),
FileSize: file.GetSize(),
Target: dstDir.GetID(),
FileID: strings.ToUpper(sha1),
PreID: strings.ToUpper(sha1128k),
SignKey: resp.SignKey,
SignVal: strings.ToUpper(signVal),
})
if err != nil {
return err
}
if resp.Status == 2 {
return nil
}
}
// 3. get upload token
tokenResp, err := d.client.UploadGetToken(ctx)
if err != nil {
return err
}
// 4. upload
err = d.multpartUpload(ctx, tempF, file, up, tokenResp, resp)
if err != nil {
return err
}
return nil
}
// func (d *Open115) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// // TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// // TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// // TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// // TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// // a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// // return errs.NotImplement to use an internal archive tool
// return nil, errs.NotImplement
// }
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Open115)(nil)
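The least obvious step in Put above is the two-way verification: for status 6, 7 or 8, the server returns a sign_check byte range of the form "start-end" (both offsets inclusive) and expects the uppercase SHA1 of exactly that slice as SignVal. A standalone sketch of just that step, with signCheckSHA1 as an illustrative helper rather than part of the driver:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strconv"
	"strings"
)

// signCheckSHA1 hashes the inclusive byte range "start-end" of f,
// mirroring the sign_check handshake sketched above.
func signCheckSHA1(f *os.File, signCheck string) (string, error) {
	parts := strings.SplitN(signCheck, "-", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("malformed sign_check %q", signCheck)
	}
	start, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return "", err
	}
	end, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return "", err
	}
	h := sha1.New()
	// end is inclusive, so the section length is end-start+1.
	if _, err := io.Copy(h, io.NewSectionReader(f, start, end-start+1)); err != nil {
		return "", err
	}
	return strings.ToUpper(hex.EncodeToString(h.Sum(nil))), nil
}

func main() {
	f, err := os.Open("example.bin")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	sum, err := signCheckSHA1(f, "2392148-2392298")
	fmt.Println(sum, err)
}
```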

drivers/115_open/meta.go (new file, 37 lines)

@@ -0,0 +1,37 @@
package _115_open
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootID
// define other
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"file_name,file_size,user_utime,file_type"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc"`
LimitRate float64 `json:"limit_rate" type:"float" default:"1" help:"limit all api request rate ([limit]r/1s)"`
AccessToken string
}
var config = driver.Config{
Name: "115 Open",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Open115{}
})
}

drivers/115_open/types.go (new file, 59 lines)

@@ -0,0 +1,59 @@
package _115_open
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
sdk "github.com/xhofe/115-sdk-go"
)
type Obj sdk.GetFilesResp_File
// Thumb implements model.Thumb.
func (o *Obj) Thumb() string {
return o.Thumbnail
}
// CreateTime implements model.Obj.
func (o *Obj) CreateTime() time.Time {
return time.Unix(o.UpPt, 0)
}
// GetHash implements model.Obj.
func (o *Obj) GetHash() utils.HashInfo {
return utils.NewHashInfo(utils.SHA1, o.Sha1)
}
// GetID implements model.Obj.
func (o *Obj) GetID() string {
return o.Fid
}
// GetName implements model.Obj.
func (o *Obj) GetName() string {
return o.Fn
}
// GetPath implements model.Obj.
func (o *Obj) GetPath() string {
return ""
}
// GetSize implements model.Obj.
func (o *Obj) GetSize() int64 {
return o.FS
}
// IsDir implements model.Obj.
func (o *Obj) IsDir() bool {
return o.Fc == "0"
}
// ModTime implements model.Obj.
func (o *Obj) ModTime() time.Time {
return time.Unix(o.Upt, 0)
}
var _ model.Obj = (*Obj)(nil)
var _ model.Thumb = (*Obj)(nil)

drivers/115_open/upload.go (new file, 140 lines)

@@ -0,0 +1,140 @@
package _115_open
import (
"context"
"encoding/base64"
"io"
"time"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
"github.com/avast/retry-go"
sdk "github.com/xhofe/115-sdk-go"
)
func calPartSize(fileSize int64) int64 {
var partSize int64 = 20 * utils.MB
if fileSize > partSize {
if fileSize > 1*utils.TB { // file Size over 1TB
partSize = 5 * utils.GB // file part size 5GB
} else if fileSize > 768*utils.GB { // over 768GB
partSize = 109951163 // ≈ 104.8576MB, split 1TB into 10,000 parts
} else if fileSize > 512*utils.GB { // over 512GB
partSize = 82463373 // ≈ 78.6432MB
} else if fileSize > 384*utils.GB { // over 384GB
partSize = 54975582 // ≈ 52.4288MB
} else if fileSize > 256*utils.GB { // over 256GB
partSize = 41231687 // ≈ 39.3216MB
} else if fileSize > 128*utils.GB { // over 128GB
partSize = 27487791 // ≈ 26.2144MB
}
}
return partSize
}
func (d *Open115) singleUpload(ctx context.Context, tempF model.File, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil {
return err
}
bucket, err := ossClient.Bucket(initResp.Bucket)
if err != nil {
return err
}
err = bucket.PutObject(initResp.Object, tempF,
oss.Callback(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.Callback))),
oss.CallbackVar(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.CallbackVar))),
)
return err
}
// type CallbackResult struct {
// State bool `json:"state"`
// Code int `json:"code"`
// Message string `json:"message"`
// Data struct {
// PickCode string `json:"pick_code"`
// FileName string `json:"file_name"`
// FileSize int64 `json:"file_size"`
// FileID string `json:"file_id"`
// ThumbURL string `json:"thumb_url"`
// Sha1 string `json:"sha1"`
// Aid int `json:"aid"`
// Cid string `json:"cid"`
// } `json:"data"`
// }
func (d *Open115) multpartUpload(ctx context.Context, tempF model.File, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
fileSize := stream.GetSize()
chunkSize := calPartSize(fileSize)
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil {
return err
}
bucket, err := ossClient.Bucket(initResp.Bucket)
if err != nil {
return err
}
imur, err := bucket.InitiateMultipartUpload(initResp.Object, oss.Sequential())
if err != nil {
return err
}
partNum := (stream.GetSize() + chunkSize - 1) / chunkSize
parts := make([]oss.UploadPart, partNum)
offset := int64(0)
for i := int64(1); i <= partNum; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
partSize := chunkSize
if i == partNum {
partSize = fileSize - (i-1)*chunkSize
}
rd := utils.NewMultiReadable(io.LimitReader(stream, partSize))
err = retry.Do(func() error {
_ = rd.Reset()
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i))
if err != nil {
return err
}
parts[i-1] = part
return nil
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),
retry.Delay(time.Second))
if err != nil {
return err
}
if i == partNum {
offset = fileSize
} else {
offset += partSize
}
up(float64(offset) / float64(fileSize))
}
// callbackRespBytes := make([]byte, 1024)
_, err = bucket.CompleteMultipartUpload(
imur,
parts,
oss.Callback(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.Callback))),
oss.CallbackVar(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.CallbackVar))),
// oss.CallbackResult(&callbackRespBytes),
)
if err != nil {
return err
}
return nil
}
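A quick sanity check of the part-size ladder in calPartSize above, assuming the usual power-of-two utils.MB/GB/TB constants: the chosen sizes keep the part count safely below the 10,000-part ceiling of OSS multipart uploads.

```go
package main

import "fmt"

const (
	MB int64 = 1 << 20
	GB int64 = 1 << 30
	TB int64 = 1 << 40
)

// Mirror of the calPartSize ladder above, for a quick sanity check.
func calPartSize(fileSize int64) int64 {
	partSize := 20 * MB
	switch {
	case fileSize > 1*TB:
		partSize = 5 * GB
	case fileSize > 768*GB:
		partSize = 109951163
	case fileSize > 512*GB:
		partSize = 82463373
	case fileSize > 384*GB:
		partSize = 54975582
	case fileSize > 256*GB:
		partSize = 41231687
	case fileSize > 128*GB:
		partSize = 27487791
	}
	return partSize
}

func main() {
	for _, size := range []int64{100 * MB, 200 * GB, 900 * GB, 2 * TB} {
		ps := calPartSize(size)
		fmt.Printf("file=%9d MiB  part=%9d B  parts=%5d\n", size/MB, ps, (size+ps-1)/ps)
	}
}
```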

drivers/115_open/util.go (new file, 3 lines)

@@ -0,0 +1,3 @@
package _115_open
// do other things not defined in the Driver interface


@@ -2,11 +2,8 @@ package _123
import (
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"fmt"
"io"
"net/http"
"net/url"
"sync"
@@ -18,6 +15,7 @@ import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
@@ -185,32 +183,22 @@ func (d *Pan123) Remove(ctx context.Context, obj model.Obj) error {
}
}
func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
// const DEFAULT int64 = 10485760
h := md5.New()
// need to calculate md5 of the full content
tempFile, err := stream.CacheFullInTempFile()
func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
etag := file.GetHash().GetHash(utils.MD5)
var err error
if len(etag) < utils.MD5.Width {
_, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return err
}
defer func() {
_ = tempFile.Close()
}()
if _, err = utils.CopyWithBuffer(h, tempFile); err != nil {
return err
}
_, err = tempFile.Seek(0, io.SeekStart)
if err != nil {
return err
}
etag := hex.EncodeToString(h.Sum(nil))
data := base.Json{
"driveId": 0,
"duplicate": 2, // 2->覆盖 1->重命名 0->默认
"etag": etag,
"fileName": stream.GetName(),
"fileName": file.GetName(),
"parentFileId": dstDir.GetID(),
"size": stream.GetSize(),
"size": file.GetSize(),
"type": 0,
}
var resp UploadResp
@@ -225,7 +213,7 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
return nil
}
if resp.Data.AccessKeyId == "" || resp.Data.SecretAccessKey == "" || resp.Data.SessionToken == "" {
err = d.newUpload(ctx, &resp, stream, tempFile, up)
err = d.newUpload(ctx, &resp, file, up)
return err
} else {
cfg := &aws.Config{
@@ -239,15 +227,21 @@ func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
return err
}
uploader := s3manager.NewUploader(s)
if stream.GetSize() > s3manager.MaxUploadParts*s3manager.DefaultUploadPartSize {
uploader.PartSize = stream.GetSize() / (s3manager.MaxUploadParts - 1)
if file.GetSize() > s3manager.MaxUploadParts*s3manager.DefaultUploadPartSize {
uploader.PartSize = file.GetSize() / (s3manager.MaxUploadParts - 1)
}
input := &s3manager.UploadInput{
Bucket: &resp.Data.Bucket,
Key: &resp.Data.Key,
Body: tempFile,
Body: driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: file,
UpdateProgress: up,
}),
}
_, err = uploader.UploadWithContext(ctx, input)
if err != nil {
return err
}
}
_, err = d.Request(UploadComplete, http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{

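The rewrite above replaces the manual temp-file-plus-MD5 bookkeeping with stream.CacheFullInTempFileAndHash, which hashes while caching so the payload is only read once. A generic standalone sketch of that idea using io.TeeReader (cacheAndHash is illustrative, not the project's API):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// cacheAndHash spools r to a temp file while feeding the same bytes
// through an MD5 hash, so the payload is read exactly once.
func cacheAndHash(r io.Reader) (*os.File, string, error) {
	f, err := os.CreateTemp("", "upload-*")
	if err != nil {
		return nil, "", err
	}
	h := md5.New()
	if _, err := io.Copy(f, io.TeeReader(r, h)); err != nil {
		f.Close()
		return nil, "", err
	}
	// Rewind so the cached copy can be uploaded from the start.
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		f.Close()
		return nil, "", err
	}
	return f, hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	f, etag, err := cacheAndHash(strings.NewReader("hello"))
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.Remove(f.Name())
	defer f.Close()
	fmt.Println(etag)
}
```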

@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"io"
"math"
"net/http"
"strconv"
@@ -69,15 +68,25 @@ func (d *Pan123) completeS3(ctx context.Context, upReq *UploadResp, file model.F
return err
}
func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, reader io.Reader, up driver.UpdateProgress) error {
chunkSize := int64(1024 * 1024 * 16)
func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, up driver.UpdateProgress) error {
tmpF, err := file.CacheFullInTempFile()
if err != nil {
return err
}
// fetch s3 pre signed urls
chunkCount := int(math.Ceil(float64(file.GetSize()) / float64(chunkSize)))
size := file.GetSize()
chunkSize := min(size, 16*utils.MB)
chunkCount := int(size / chunkSize)
lastChunkSize := size % chunkSize
if lastChunkSize > 0 {
chunkCount++
} else {
lastChunkSize = chunkSize
}
// only 1 batch is allowed
isMultipart := chunkCount > 1
batchSize := 1
getS3UploadUrl := d.getS3Auth
if isMultipart {
if chunkCount > 1 {
batchSize = 10
getS3UploadUrl = d.getS3PreSignedUrls
}
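A worked example of the chunk arithmetic above: the removed code took math.Ceil over float64, while the new code uses integer division plus a remainder check, which also yields the exact size of the trailing chunk:

```go
package main

import "fmt"

func main() {
	const MB int64 = 1 << 20
	size, chunkSize := 100*MB, 16*MB // e.g. a 100 MiB upload with 16 MiB chunks
	chunkCount := size / chunkSize   // 6 full chunks
	lastChunkSize := size % chunkSize
	if lastChunkSize > 0 {
		chunkCount++ // a 7th, 4 MiB trailing chunk
	} else {
		lastChunkSize = chunkSize
	}
	fmt.Println(chunkCount, lastChunkSize/MB) // 7 4
}
```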
@@ -86,10 +95,7 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
return ctx.Err()
}
start := i
end := i + batchSize
if end > chunkCount+1 {
end = chunkCount + 1
}
end := min(i+batchSize, chunkCount+1)
s3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, start, end)
if err != nil {
return err
@@ -101,9 +107,9 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
}
curSize := chunkSize
if j == chunkCount {
curSize = file.GetSize() - (int64(chunkCount)-1)*chunkSize
curSize = lastChunkSize
}
err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.LimitReader(reader, chunkSize), curSize, false, getS3UploadUrl)
err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.NewSectionReader(tmpF, chunkSize*int64(j-1), curSize), curSize, false, getS3UploadUrl)
if err != nil {
return err
}
@@ -114,12 +120,12 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
return d.completeS3(ctx, upReq, file, chunkCount > 1)
}
func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader io.Reader, curSize int64, retry bool, getS3UploadUrl func(ctx context.Context, upReq *UploadResp, start int, end int) (*S3PreSignedURLs, error)) error {
func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader *io.SectionReader, curSize int64, retry bool, getS3UploadUrl func(ctx context.Context, upReq *UploadResp, start int, end int) (*S3PreSignedURLs, error)) error {
uploadUrl := s3PreSignedUrls.Data.PreSignedUrls[strconv.Itoa(cur)]
if uploadUrl == "" {
return fmt.Errorf("upload url is empty, s3PreSignedUrls: %+v", s3PreSignedUrls)
}
req, err := http.NewRequest("PUT", uploadUrl, reader)
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, reader))
if err != nil {
return err
}
@@ -142,6 +148,7 @@ func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSign
}
s3PreSignedUrls.Data.PreSignedUrls = newS3PreSignedUrls.Data.PreSignedUrls
// retry
reader.Seek(0, io.SeekStart)
return d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, cur, end, reader, curSize, true, getS3UploadUrl)
}
if res.StatusCode != http.StatusOK {

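The retry branch above is what motivated changing uploadS3Chunk's reader parameter to *io.SectionReader: a section over the cached temp file can be rewound with Seek before the chunk is re-sent, whereas the old io.LimitReader over the shared stream could not be replayed. A toy sketch of the pattern (sendChunk stands in for the presigned PUT and fails once on purpose):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

var calls int

// sendChunk is a stand-in for the presigned-URL PUT; it consumes the
// reader and fails on the first attempt to force a retry.
func sendChunk(r io.Reader) error {
	calls++
	b, _ := io.ReadAll(r)
	if calls == 1 {
		return fmt.Errorf("transient error after reading %d bytes", len(b))
	}
	fmt.Printf("uploaded %d bytes\n", len(b))
	return nil
}

func main() {
	src := strings.NewReader("0123456789abcdef")
	chunk := io.NewSectionReader(src, 4, 8) // bytes 4..11 of the cached file
	for attempt := 0; attempt < 2; attempt++ {
		if err := sendChunk(chunk); err != nil {
			// Rewind the same section before retrying; a plain
			// io.LimitReader over the stream could not do this.
			chunk.Seek(0, io.SeekStart)
			continue
		}
		break
	}
}
```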

@@ -163,10 +163,10 @@ func (d *Pan123) login() error {
SetHeaders(map[string]string{
"origin": "https://www.123pan.com",
"referer": "https://www.123pan.com/",
"user-agent": "Dart/2.19(dart:io)-alist",
//"user-agent": "Dart/2.19(dart:io)-alist",
"platform": "web",
"app-version": "3",
//"user-agent": base.UserAgent,
"user-agent": base.UserAgent,
}).
SetBody(body).Post(SignIn)
if err != nil {
@@ -202,7 +202,7 @@ do:
"origin": "https://www.123pan.com",
"referer": "https://www.123pan.com/",
"authorization": "Bearer " + d.AccessToken,
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) alist-client",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
"platform": "web",
"app-version": "3",
//"user-agent": base.UserAgent,

drivers/123_open/api.go (new file, 191 lines)

@@ -0,0 +1,191 @@
package _123Open
import (
"fmt"
"github.com/go-resty/resty/v2"
"net/http"
)
const (
// baseurl
ApiBaseURL = "https://open-api.123pan.com"
// auth
ApiToken = "/api/v1/access_token"
// file list
ApiFileList = "/api/v2/file/list"
// direct link
ApiGetDirectLink = "/api/v1/direct-link/url"
// mkdir
ApiMakeDir = "/upload/v1/file/mkdir"
// remove
ApiRemove = "/api/v1/file/trash"
// upload
ApiUploadDomainURL = "/upload/v2/file/domain"
ApiSingleUploadURL = "/upload/v2/file/single/create"
ApiCreateUploadURL = "/upload/v2/file/create"
ApiUploadSliceURL = "/upload/v2/file/slice"
ApiUploadCompleteURL = "/upload/v2/file/upload_complete"
// move
ApiMove = "/api/v1/file/move"
// rename
ApiRename = "/api/v1/file/name"
)
type Response[T any] struct {
Code int `json:"code"`
Message string `json:"message"`
Data T `json:"data"`
}
type TokenResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data TokenData `json:"data"`
}
type TokenData struct {
AccessToken string `json:"accessToken"`
ExpiredAt string `json:"expiredAt"`
}
type FileListResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data FileListData `json:"data"`
}
type FileListData struct {
LastFileId int64 `json:"lastFileId"`
FileList []File `json:"fileList"`
}
type DirectLinkResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data DirectLinkData `json:"data"`
}
type DirectLinkData struct {
URL string `json:"url"`
}
type MakeDirRequest struct {
Name string `json:"name"`
ParentID int64 `json:"parentID"`
}
type MakeDirResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data MakeDirData `json:"data"`
}
type MakeDirData struct {
DirID int64 `json:"dirID"`
}
type RemoveRequest struct {
FileIDs []int64 `json:"fileIDs"`
}
type UploadCreateResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data UploadCreateData `json:"data"`
}
type UploadCreateData struct {
FileID int64 `json:"fileId"`
Reuse bool `json:"reuse"`
PreuploadID string `json:"preuploadId"`
SliceSize int64 `json:"sliceSize"`
Servers []string `json:"servers"`
}
type UploadUrlResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data UploadUrlData `json:"data"`
}
type UploadUrlData struct {
PresignedURL string `json:"presignedUrl"`
}
type UploadCompleteResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data UploadCompleteData `json:"data"`
}
type UploadCompleteData struct {
FileID int `json:"fileID"`
Completed bool `json:"completed"`
}
func (d *Open123) Request(endpoint string, method string, setup func(*resty.Request), result any) (*resty.Response, error) {
client := resty.New()
token, err := d.tm.getToken()
if err != nil {
return nil, err
}
req := client.R().
SetHeader("Authorization", "Bearer "+token).
SetHeader("Platform", "open_platform").
SetHeader("Content-Type", "application/json").
SetResult(result)
if setup != nil {
setup(req)
}
switch method {
case http.MethodGet:
return req.Get(ApiBaseURL + endpoint)
case http.MethodPost:
return req.Post(ApiBaseURL + endpoint)
case http.MethodPut:
return req.Put(ApiBaseURL + endpoint)
default:
return nil, fmt.Errorf("unsupported method: %s", method)
}
}
func (d *Open123) RequestTo(fullURL string, method string, setup func(*resty.Request), result any) (*resty.Response, error) {
client := resty.New()
token, err := d.tm.getToken()
if err != nil {
return nil, err
}
req := client.R().
SetHeader("Authorization", "Bearer "+token).
SetHeader("Platform", "open_platform").
SetHeader("Content-Type", "application/json").
SetResult(result)
if setup != nil {
setup(req)
}
switch method {
case http.MethodGet:
return req.Get(fullURL)
case http.MethodPost:
return req.Post(fullURL)
case http.MethodPut:
return req.Put(fullURL)
default:
return nil, fmt.Errorf("unsupported method: %s", method)
}
}

drivers/123_open/driver.go (new file, 277 lines)

@@ -0,0 +1,277 @@
package _123Open
import (
"context"
"fmt"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"net/http"
"strconv"
)
type Open123 struct {
model.Storage
Addition
UploadThread int
tm *tokenManager
}
func (d *Open123) Config() driver.Config {
return config
}
func (d *Open123) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Open123) Init(ctx context.Context) error {
d.tm = newTokenManager(d.ClientID, d.ClientSecret)
if _, err := d.tm.getToken(); err != nil {
return fmt.Errorf("token 初始化失败: %w", err)
}
return nil
}
func (d *Open123) Drop(ctx context.Context) error {
return nil
}
func (d *Open123) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
parentFileId, err := strconv.ParseInt(dir.GetID(), 10, 64)
if err != nil {
return nil, err
}
fileLastId := int64(0)
var results []File
for fileLastId != -1 {
files, err := d.getFiles(parentFileId, 100, fileLastId)
if err != nil {
return nil, err
}
for _, f := range files.Data.FileList {
if f.Trashed == 0 {
results = append(results, f)
}
}
fileLastId = files.Data.LastFileId
}
objs := make([]model.Obj, 0, len(results))
for _, f := range results {
objs = append(objs, f)
}
return objs, nil
}
func (d *Open123) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if file.IsDir() {
return nil, errs.LinkIsDir
}
fileID := file.GetID()
var result DirectLinkResp
url := fmt.Sprintf("%s?fileID=%s", ApiGetDirectLink, fileID)
_, err := d.Request(url, http.MethodGet, nil, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("get link failed: %s", result.Message)
}
return &model.Link{
URL: result.Data.URL,
}, nil
}
func (d *Open123) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
parentID, err := strconv.ParseInt(parentDir.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid parent ID: %w", err)
}
var result MakeDirResp
reqBody := MakeDirRequest{
Name: dirName,
ParentID: parentID,
}
_, err = d.Request(ApiMakeDir, http.MethodPost, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("mkdir failed: %s", result.Message)
}
newDir := File{
FileId: result.Data.DirID,
FileName: dirName,
Type: 1,
ParentFileId: int(parentID),
Size: 0,
Trashed: 0,
}
return newDir, nil
}
func (d *Open123) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
srcID, err := strconv.ParseInt(srcObj.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid src file ID: %w", err)
}
dstID, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid dest dir ID: %w", err)
}
var result Response[any]
reqBody := map[string]interface{}{
"fileIDs": []int64{srcID},
"toParentFileID": dstID,
}
_, err = d.Request(ApiMove, http.MethodPost, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("move failed: %s", result.Message)
}
files, err := d.getFiles(dstID, 100, 0)
if err != nil {
return nil, fmt.Errorf("move succeed but failed to get target dir: %w", err)
}
for _, f := range files.Data.FileList {
if f.FileId == srcID {
return f, nil
}
}
return nil, fmt.Errorf("move succeed but file not found in target dir")
}
func (d *Open123) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
srcID, err := strconv.ParseInt(srcObj.GetID(), 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid file ID: %w", err)
}
var result Response[any]
reqBody := map[string]interface{}{
"fileId": srcID,
"fileName": newName,
}
_, err = d.Request(ApiRename, http.MethodPut, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("rename failed: %s", result.Message)
}
parentID := 0
if file, ok := srcObj.(File); ok {
parentID = file.ParentFileId
}
files, err := d.getFiles(int64(parentID), 100, 0)
if err != nil {
return nil, fmt.Errorf("rename succeed but failed to get parent dir: %w", err)
}
for _, f := range files.Data.FileList {
if f.FileId == srcID {
return f, nil
}
}
return nil, fmt.Errorf("rename succeed but file not found in parent dir")
}
func (d *Open123) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
return nil, errs.NotSupport
}
func (d *Open123) Remove(ctx context.Context, obj model.Obj) error {
idStr := obj.GetID()
id, err := strconv.ParseInt(idStr, 10, 64)
if err != nil {
return fmt.Errorf("invalid file ID: %w", err)
}
var result Response[any]
reqBody := RemoveRequest{
FileIDs: []int64{id},
}
_, err = d.Request(ApiRemove, http.MethodPost, func(r *resty.Request) {
r.SetBody(reqBody)
}, &result)
if err != nil {
return err
}
if result.Code != 0 {
return fmt.Errorf("remove failed: %s", result.Message)
}
return nil
}
func (d *Open123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
parentFileId, err := strconv.ParseInt(dstDir.GetID(), 10, 64)
etag := file.GetHash().GetHash(utils.MD5)
if len(etag) < utils.MD5.Width {
up = model.UpdateProgressWithRange(up, 50, 100)
_, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return err
}
}
createResp, err := d.create(parentFileId, file.GetName(), etag, file.GetSize(), 2, false)
if err != nil {
return err
}
if createResp.Data.Reuse {
return nil
}
return d.Upload(ctx, file, parentFileId, createResp, up)
}
func (d *Open123) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
return nil, errs.NotSupport
}
func (d *Open123) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
return nil, errs.NotSupport
}
func (d *Open123) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
return nil, errs.NotSupport
}
func (d *Open123) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
return nil, errs.NotSupport
}
//func (d *Open123) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Open123)(nil)

drivers/123_open/meta.go (new file, 33 lines)

@@ -0,0 +1,33 @@
package _123Open
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootID
ClientID string `json:"client_id" required:"true" label:"Client ID"`
ClientSecret string `json:"client_secret" required:"true" label:"Client Secret"`
}
var config = driver.Config{
Name: "123 Open",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Open123{}
})
}

85
drivers/123_open/token.go Normal file
View File

@@ -0,0 +1,85 @@
package _123Open
import (
"bytes"
"encoding/json"
"fmt"
"net/http"
"sync"
"time"
)
const tokenURL = ApiBaseURL + ApiToken
type tokenManager struct {
clientID string
clientSecret string
mu sync.Mutex
accessToken string
expireTime time.Time
}
func newTokenManager(clientID, clientSecret string) *tokenManager {
return &tokenManager{
clientID: clientID,
clientSecret: clientSecret,
}
}
func (tm *tokenManager) getToken() (string, error) {
tm.mu.Lock()
defer tm.mu.Unlock()
if tm.accessToken != "" && time.Now().Before(tm.expireTime.Add(-5*time.Minute)) {
return tm.accessToken, nil
}
reqBody := map[string]string{
"clientID": tm.clientID,
"clientSecret": tm.clientSecret,
}
body, _ := json.Marshal(reqBody)
req, err := http.NewRequest("POST", tokenURL, bytes.NewBuffer(body))
if err != nil {
return "", err
}
req.Header.Set("Platform", "open_platform")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
var result TokenResp
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return "", err
}
if result.Code != 0 {
return "", fmt.Errorf("get token failed: %s", result.Message)
}
tm.accessToken = result.Data.AccessToken
expireAt, err := time.Parse(time.RFC3339, result.Data.ExpiredAt)
if err != nil {
return "", fmt.Errorf("parse expire time failed: %w", err)
}
tm.expireTime = expireAt
return tm.accessToken, nil
}
func (tm *tokenManager) buildHeaders() (http.Header, error) {
token, err := tm.getToken()
if err != nil {
return nil, err
}
header := http.Header{}
header.Set("Authorization", "Bearer "+token)
header.Set("Platform", "open_platform")
header.Set("Content-Type", "application/json")
return header, nil
}
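A usage sketch for the token manager above (exampleRequest is illustrative and assumes the package as listed): callers never store the token themselves; every request goes through buildHeaders, and getToken refreshes transparently once fewer than five minutes of validity remain.

```go
package _123Open

import (
	"fmt"
	"net/http"
)

// exampleRequest is an illustrative sketch, not part of the driver: it
// shows that callers fetch fresh headers per request and let getToken
// handle caching and early refresh.
func exampleRequest(tm *tokenManager) error {
	header, err := tm.buildHeaders()
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodGet, ApiBaseURL+ApiFileList, nil)
	if err != nil {
		return err
	}
	req.Header = header
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
	return nil
}
```

Here tm would be the manager created in Init via newTokenManager(d.ClientID, d.ClientSecret).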

drivers/123_open/types.go (new file, 70 lines)

@@ -0,0 +1,70 @@
package _123Open
import (
"fmt"
"github.com/alist-org/alist/v3/pkg/utils"
"time"
)
type File struct {
FileName string `json:"filename"`
Size int64 `json:"size"`
CreateAt string `json:"createAt"`
UpdateAt string `json:"updateAt"`
FileId int64 `json:"fileId"`
Type int `json:"type"`
Etag string `json:"etag"`
S3KeyFlag string `json:"s3KeyFlag"`
ParentFileId int `json:"parentFileId"`
Category int `json:"category"`
Status int `json:"status"`
Trashed int `json:"trashed"`
}
func (f File) GetID() string {
return fmt.Sprint(f.FileId)
}
func (f File) GetName() string {
return f.FileName
}
func (f File) GetSize() int64 {
return f.Size
}
func (f File) IsDir() bool {
return f.Type == 1
}
func (f File) GetModified() string {
return f.UpdateAt
}
func (f File) GetThumb() string {
return ""
}
func (f File) ModTime() time.Time {
t, err := time.Parse("2006-01-02 15:04:05", f.UpdateAt)
if err != nil {
return time.Time{}
}
return t
}
func (f File) CreateTime() time.Time {
t, err := time.Parse("2006-01-02 15:04:05", f.CreateAt)
if err != nil {
return time.Time{}
}
return t
}
func (f File) GetHash() utils.HashInfo {
return utils.NewHashInfo(utils.MD5, f.Etag)
}
func (f File) GetPath() string {
return ""
}

drivers/123_open/upload.go (new file, 282 lines)

@@ -0,0 +1,282 @@
package _123Open
import (
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"golang.org/x/sync/errgroup"
"io"
"mime/multipart"
"net/http"
"runtime"
"strconv"
"time"
)
func (d *Open123) create(parentFileID int64, filename, etag string, size int64, duplicate int, containDir bool) (*UploadCreateResp, error) {
var resp UploadCreateResp
_, err := d.Request(ApiCreateUploadURL, http.MethodPost, func(req *resty.Request) {
body := base.Json{
"parentFileID": parentFileID,
"filename": filename,
"etag": etag,
"size": size,
}
if duplicate > 0 {
body["duplicate"] = duplicate
}
if containDir {
body["containDir"] = true
}
req.SetBody(body)
}, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *Open123) GetUploadDomains() ([]string, error) {
var resp struct {
Code int `json:"code"`
Message string `json:"message"`
Data []string `json:"data"`
}
_, err := d.Request(ApiUploadDomainURL, http.MethodGet, nil, &resp)
if err != nil {
return nil, err
}
if resp.Code != 0 {
return nil, fmt.Errorf("get upload domain failed: %s", resp.Message)
}
return resp.Data, nil
}
func (d *Open123) UploadSingle(ctx context.Context, createResp *UploadCreateResp, file model.FileStreamer, parentID int64) error {
domain := createResp.Data.Servers[0]
etag := file.GetHash().GetHash(utils.MD5)
if len(etag) < utils.MD5.Width {
_, _, err := stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return err
}
}
reader, err := file.RangeRead(http_range.Range{Start: 0, Length: file.GetSize()})
if err != nil {
return err
}
reader = driver.NewLimitedUploadStream(ctx, reader)
var b bytes.Buffer
mw := multipart.NewWriter(&b)
mw.WriteField("parentFileID", fmt.Sprint(parentID))
mw.WriteField("filename", file.GetName())
mw.WriteField("etag", etag)
mw.WriteField("size", fmt.Sprint(file.GetSize()))
fw, _ := mw.CreateFormFile("file", file.GetName())
_, err = io.Copy(fw, reader)
mw.Close()
req, err := http.NewRequestWithContext(ctx, "POST", domain+ApiSingleUploadURL, &b)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+d.tm.accessToken)
req.Header.Set("Platform", "open_platform")
req.Header.Set("Content-Type", mw.FormDataContentType())
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
var result struct {
Code int `json:"code"`
Message string `json:"message"`
Data struct {
FileID int64 `json:"fileID"`
Completed bool `json:"completed"`
} `json:"data"`
}
body, _ := io.ReadAll(resp.Body)
if err := json.Unmarshal(body, &result); err != nil {
return fmt.Errorf("unmarshal response error: %v, body: %s", err, string(body))
}
if result.Code != 0 {
return fmt.Errorf("upload failed: %s", result.Message)
}
if !result.Data.Completed || result.Data.FileID == 0 {
return fmt.Errorf("upload incomplete or missing fileID")
}
return nil
}
func (d *Open123) Upload(ctx context.Context, file model.FileStreamer, parentID int64, createResp *UploadCreateResp, up driver.UpdateProgress) error {
if cacher, ok := file.(interface{ CacheFullInTempFile() (model.File, error) }); ok {
if _, err := cacher.CacheFullInTempFile(); err != nil {
return err
}
}
size := file.GetSize()
chunkSize := createResp.Data.SliceSize
uploadNums := (size + chunkSize - 1) / chunkSize
uploadDomain := createResp.Data.Servers[0]
if d.UploadThread <= 0 {
cpuCores := runtime.NumCPU()
threads := cpuCores * 2
if threads < 4 {
threads = 4
}
if threads > 16 {
threads = 16
}
d.UploadThread = threads
fmt.Printf("[Upload] Auto set upload concurrency: %d (CPU cores=%d)\n", d.UploadThread, cpuCores)
}
fmt.Printf("[Upload] File size: %d bytes, chunk size: %d bytes, total slices: %d, concurrency: %d\n",
size, chunkSize, uploadNums, d.UploadThread)
if size <= 1<<30 {
return d.UploadSingle(ctx, createResp, file, parentID)
}
if createResp.Data.Reuse {
up(100)
return nil
}
client := resty.New()
semaphore := make(chan struct{}, d.UploadThread)
threadG, _ := errgroup.WithContext(ctx)
var progressArr = make([]int64, uploadNums)
for partIndex := int64(0); partIndex < uploadNums; partIndex++ {
partIndex := partIndex
semaphore <- struct{}{}
threadG.Go(func() error {
defer func() { <-semaphore }()
offset := partIndex * chunkSize
length := min(chunkSize, size-offset)
partNumber := partIndex + 1
fmt.Printf("[Slice %d] Starting read from offset %d, length %d\n", partNumber, offset, length)
reader, err := file.RangeRead(http_range.Range{Start: offset, Length: length})
if err != nil {
return fmt.Errorf("[Slice %d] RangeRead error: %v", partNumber, err)
}
buf := make([]byte, length)
n, err := io.ReadFull(reader, buf)
if err != nil && err != io.EOF {
return fmt.Errorf("[Slice %d] Read error: %v", partNumber, err)
}
buf = buf[:n]
hash := md5.Sum(buf)
sliceMD5Str := hex.EncodeToString(hash[:])
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
writer.WriteField("preuploadID", createResp.Data.PreuploadID)
writer.WriteField("sliceNo", strconv.FormatInt(partNumber, 10))
writer.WriteField("sliceMD5", sliceMD5Str)
partName := fmt.Sprintf("%s.part%d", file.GetName(), partNumber)
fw, _ := writer.CreateFormFile("slice", partName)
fw.Write(buf)
writer.Close()
resp, err := client.R().
SetHeader("Authorization", "Bearer "+d.tm.accessToken).
SetHeader("Platform", "open_platform").
SetHeader("Content-Type", writer.FormDataContentType()).
SetBody(body.Bytes()).
Post(uploadDomain + ApiUploadSliceURL)
if err != nil {
return fmt.Errorf("[Slice %d] Upload HTTP error: %v", partNumber, err)
}
if resp.StatusCode() != 200 {
return fmt.Errorf("[Slice %d] Upload failed with status: %s, resp: %s", partNumber, resp.Status(), resp.String())
}
progressArr[partIndex] = length
var totalUploaded int64 = 0
for _, v := range progressArr {
totalUploaded += v
}
if up != nil {
percent := float64(totalUploaded) / float64(size) * 100
up(percent)
}
fmt.Printf("[Slice %d] MD5: %s\n", partNumber, sliceMD5Str)
fmt.Printf("[Slice %d] Upload finished\n", partNumber)
return nil
})
}
if err := threadG.Wait(); err != nil {
return err
}
var completeResp struct {
Code int `json:"code"`
Message string `json:"message"`
Data struct {
Completed bool `json:"completed"`
FileID int64 `json:"fileID"`
} `json:"data"`
}
for {
reqBody := fmt.Sprintf(`{"preuploadID":"%s"}`, createResp.Data.PreuploadID)
req, err := http.NewRequestWithContext(ctx, "POST", uploadDomain+ApiUploadCompleteURL, bytes.NewBufferString(reqBody))
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+d.tm.accessToken)
req.Header.Set("Platform", "open_platform")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
if err := json.Unmarshal(body, &completeResp); err != nil {
return fmt.Errorf("completion response unmarshal error: %v, body: %s", err, string(body))
}
if completeResp.Code != 0 {
return fmt.Errorf("completion API returned error code %d: %s", completeResp.Code, completeResp.Message)
}
if completeResp.Data.Completed && completeResp.Data.FileID != 0 {
fmt.Printf("[Upload] Upload completed successfully. FileID: %d\n", completeResp.Data.FileID)
break
}
time.Sleep(time.Second)
}
up(100)
return nil
}
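Side note: the slice loop above implements bounded concurrency with a buffered channel acting as a counting semaphore inside an errgroup. A minimal, self-contained sketch of the same pattern — using the standard golang.org/x/sync/errgroup rather than alist's wrapper, with uploadPart as a hypothetical per-slice worker:

package sketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// uploadAll runs at most concurrency part uploads at a time and returns the
// first error, mirroring the semaphore-plus-errgroup shape of Upload above.
func uploadAll(ctx context.Context, parts int64, concurrency int, uploadPart func(context.Context, int64) error) error {
	sem := make(chan struct{}, concurrency)
	g, gctx := errgroup.WithContext(ctx)
	for i := int64(0); i < parts; i++ {
		i := i
		sem <- struct{}{} // acquire a slot before spawning the worker
		g.Go(func() error {
			defer func() { <-sem }() // release the slot when the worker exits
			return uploadPart(gctx, i)
		})
	}
	return g.Wait()
}

Threading gctx into the workers (unlike the plain ctx passed through above) lets the remaining parts cancel as soon as one fails.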

drivers/123_open/util.go Normal file
View File

@@ -0,0 +1,20 @@
package _123Open
import (
"fmt"
"net/http"
)
func (d *Open123) getFiles(parentFileId int64, limit int, lastFileId int64) (*FileListResp, error) {
var result FileListResp
url := fmt.Sprintf("%s?parentFileId=%d&limit=%d&lastFileId=%d", ApiFileList, parentFileId, limit, lastFileId)
_, err := d.Request(url, http.MethodGet, nil, &result)
if err != nil {
return nil, err
}
if result.Code != 0 {
return nil, fmt.Errorf("list error: %s", result.Message)
}
return &result, nil
}
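Note on the cursor parameter: lastFileId pages through large folders. A hypothetical caller loop — the Data.FileList / Data.LastFileId field names and the -1 end-of-list sentinel follow 123's open-platform conventions, but are assumptions not visible in this diff:

lastFileId := int64(0)
for {
	page, err := d.getFiles(parentFileId, 100, lastFileId)
	if err != nil {
		return nil, err
	}
	files = append(files, page.Data.FileList...) // assumed field names
	if page.Data.LastFileId == -1 {              // assumed end-of-list sentinel
		break
	}
	lastFileId = page.Data.LastFileId
}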

View File

@@ -2,6 +2,7 @@ package _139
import (
"context"
"encoding/xml"
"fmt"
"io"
"net/http"
@@ -13,6 +14,7 @@ import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/cron"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/pkg/utils/random"
@@ -25,6 +27,7 @@ type Yun139 struct {
cron *cron.Cron
Account string
ref *Yun139
PersonalCloudHost string
}
func (d *Yun139) Config() driver.Config {
@@ -37,13 +40,36 @@ func (d *Yun139) GetAddition() driver.Additional {
func (d *Yun139) Init(ctx context.Context) error {
if d.ref == nil {
if d.Authorization == "" {
if len(d.Authorization) == 0 {
return fmt.Errorf("authorization is empty")
}
err := d.refreshToken()
if err != nil {
return err
}
// Query Route Policy
var resp QueryRoutePolicyResp
_, err = d.requestRoute(base.Json{
"userInfo": base.Json{
"userType": 1,
"accountType": 1,
"accountName": d.Account},
"modAddrType": 1,
}, &resp)
if err != nil {
return err
}
for _, policyItem := range resp.Data.RoutePolicyList {
if policyItem.ModName == "personal" {
d.PersonalCloudHost = policyItem.HttpsUrl
break
}
}
if len(d.PersonalCloudHost) == 0 {
return fmt.Errorf("PersonalCloudHost is empty")
}
d.cron = cron.NewCron(time.Hour * 12)
d.cron.Do(func() {
err := d.refreshToken()
@@ -69,28 +95,6 @@ func (d *Yun139) Init(ctx context.Context) error {
default:
return errs.NotImplement
}
// if d.ref != nil {
// return nil
// }
// decode, err := base64.StdEncoding.DecodeString(d.Authorization)
// if err != nil {
// return err
// }
// decodeStr := string(decode)
// splits := strings.Split(decodeStr, ":")
// if len(splits) < 2 {
// return fmt.Errorf("authorization is invalid, splits < 2")
// }
// d.Account = splits[1]
// _, err = d.post("/orchestration/personalCloud/user/v1.0/qryUserExternInfo", base.Json{
// "qryUserExternInfoReq": base.Json{
// "commonAccountInfo": base.Json{
// "account": d.getAccount(),
// "accountType": 1,
// },
// },
// }, nil)
// return err
return nil
}
@@ -158,7 +162,7 @@ func (d *Yun139) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
"type": "folder",
"fileRenameMode": "force_rename",
}
pathname := "/hcy/file/create"
pathname := "/file/create"
_, err = d.personalPost(pathname, data, nil)
case MetaPersonal:
data := base.Json{
@@ -211,7 +215,7 @@ func (d *Yun139) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj,
"fileIds": []string{srcObj.GetID()},
"toParentFileId": dstDir.GetID(),
}
pathname := "/hcy/file/batchMove"
pathname := "/file/batchMove"
_, err := d.personalPost(pathname, data, nil)
if err != nil {
return nil, err
@@ -288,7 +292,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"name": newName,
"description": "",
}
pathname := "/hcy/file/update"
pathname := "/file/update"
_, err = d.personalPost(pathname, data, nil)
case MetaPersonal:
var data base.Json
@@ -388,7 +392,7 @@ func (d *Yun139) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
"fileIds": []string{srcObj.GetID()},
"toParentFileId": dstDir.GetID(),
}
pathname := "/hcy/file/batchCopy"
pathname := "/file/batchCopy"
_, err := d.personalPost(pathname, data, nil)
return err
case MetaPersonal:
@@ -428,7 +432,7 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
data := base.Json{
"fileIds": []string{obj.GetID()},
}
pathname := "/hcy/recyclebin/batchTrash"
pathname := "/recyclebin/batchTrash"
_, err := d.personalPost(pathname, data, nil)
return err
case MetaGroup:
@@ -501,23 +505,15 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
}
}
const (
_ = iota //ignore first value by assigning to blank identifier
KB = 1 << (10 * iota)
MB
GB
TB
)
func (d *Yun139) getPartSize(size int64) int64 {
if d.CustomUploadPartSize != 0 {
return d.CustomUploadPartSize
}
// the cloud drive caps the number of parts per upload
if size/GB > 30 {
return 512 * MB
if size/utils.GB > 30 {
return 512 * utils.MB
}
return 100 * MB
return 100 * utils.MB
}
func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
@@ -525,29 +521,28 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
case MetaPersonalNew:
var err error
fullHash := stream.GetHash().GetHash(utils.SHA256)
if len(fullHash) <= 0 {
tmpF, err := stream.CacheFullInTempFile()
if err != nil {
return err
}
fullHash, err = utils.HashFile(utils.SHA256, tmpF)
if len(fullHash) != utils.SHA256.Width {
_, fullHash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA256)
if err != nil {
return err
}
}
partInfos := []PartInfo{}
var partSize = d.getPartSize(stream.GetSize())
part := (stream.GetSize() + partSize - 1) / partSize
if part == 0 {
size := stream.GetSize()
var partSize = d.getPartSize(size)
part := size / partSize
if size%partSize > 0 {
part++
} else if part == 0 {
part = 1
}
partInfos := make([]PartInfo, 0, part)
for i := int64(0); i < part; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
start := i * partSize
byteSize := stream.GetSize() - start
byteSize := size - start
if byteSize > partSize {
byteSize = partSize
}
@@ -575,13 +570,13 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"contentType": "application/octet-stream",
"parallelUpload": false,
"partInfos": firstPartInfos,
"size": stream.GetSize(),
"size": size,
"parentFileId": dstDir.GetID(),
"name": stream.GetName(),
"type": "file",
"fileRenameMode": "auto_rename",
}
pathname := "/hcy/file/create"
pathname := "/file/create"
var resp PersonalUploadResp
_, err = d.personalPost(pathname, data, &resp)
if err != nil {
@@ -618,7 +613,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"accountType": 1,
},
}
pathname := "/hcy/file/getUploadUrl"
pathname := "/file/getUploadUrl"
var moreresp PersonalUploadUrlResp
_, err = d.personalPost(pathname, moredata, &moreresp)
if err != nil {
@@ -628,14 +623,15 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
}
// Progress
p := driver.NewProgress(stream.GetSize(), up)
p := driver.NewProgress(size, up)
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
// upload all parts
for _, uploadPartInfo := range uploadPartInfos {
index := uploadPartInfo.PartNumber - 1
partSize := partInfos[index].PartSize
log.Debugf("[139] uploading part %+v/%+v", index, len(uploadPartInfos))
limitReader := io.LimitReader(stream, partSize)
limitReader := io.LimitReader(rateLimited, partSize)
// Update Progress
r := io.TeeReader(limitReader, p)
@@ -668,7 +664,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId,
}
_, err = d.personalPost("/hcy/file/complete", data, nil)
_, err = d.personalPost("/file/complete", data, nil)
if err != nil {
return err
}
@@ -738,14 +734,20 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
break
}
}
var reportSize int64
if d.ReportRealSize {
reportSize = stream.GetSize()
} else {
reportSize = 0
}
data := base.Json{
"manualRename": 2,
"operation": 0,
"fileCount": 1,
"totalSize": 0, // 去除上传大小限制
"totalSize": reportSize,
"uploadContentList": []base.Json{{
"contentName": stream.GetName(),
"contentSize": 0, // 去除上传大小限制
"contentSize": reportSize,
// "digest": "5a3231986ce7a6b46e408612d385bafa"
}},
"parentCatalogID": dstDir.GetID(),
@@ -763,10 +765,10 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"operation": 0,
"path": path.Join(dstDir.GetPath(), dstDir.GetID()),
"seqNo": random.String(32), //序列号不能为空
"totalSize": 0,
"totalSize": reportSize,
"uploadContentList": []base.Json{{
"contentName": stream.GetName(),
"contentSize": 0,
"contentSize": reportSize,
// "digest": "5a3231986ce7a6b46e408612d385bafa"
}},
})
@@ -777,27 +779,30 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil {
return err
}
if resp.Data.Result.ResultCode != "0" {
return fmt.Errorf("get file upload url failed with result code: %s, message: %s", resp.Data.Result.ResultCode, resp.Data.Result.ResultDesc)
}
size := stream.GetSize()
// Progress
p := driver.NewProgress(stream.GetSize(), up)
var partSize = d.getPartSize(stream.GetSize())
part := (stream.GetSize() + partSize - 1) / partSize
if part == 0 {
p := driver.NewProgress(size, up)
var partSize = d.getPartSize(size)
part := size / partSize
if size%partSize > 0 {
part++
} else if part == 0 {
part = 1
}
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
for i := int64(0); i < part; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
start := i * partSize
byteSize := stream.GetSize() - start
if byteSize > partSize {
byteSize = partSize
}
byteSize := min(size-start, partSize)
limitReader := io.LimitReader(stream, byteSize)
limitReader := io.LimitReader(rateLimited, byteSize)
// Update Progress
r := io.TeeReader(limitReader, p)
req, err := http.NewRequest("POST", resp.Data.UploadResult.RedirectionURL, r)
@@ -807,7 +812,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "text/plain;name="+unicode(stream.GetName()))
req.Header.Set("contentSize", strconv.FormatInt(stream.GetSize(), 10))
req.Header.Set("contentSize", strconv.FormatInt(size, 10))
req.Header.Set("range", fmt.Sprintf("bytes=%d-%d", start, start+byteSize-1))
req.Header.Set("uploadtaskID", resp.Data.UploadResult.UploadTaskID)
req.Header.Set("rangeType", "0")
@@ -817,13 +822,23 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil {
return err
}
_ = res.Body.Close()
log.Debugf("%+v", res)
if res.StatusCode != http.StatusOK {
res.Body.Close()
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
bodyBytes, err := io.ReadAll(res.Body)
_ = res.Body.Close()
if err != nil {
return fmt.Errorf("error reading response body: %v", err)
}
var result InterLayerUploadResult
err = xml.Unmarshal(bodyBytes, &result)
if err != nil {
return fmt.Errorf("error parsing XML: %v", err)
}
if result.ResultCode != 0 {
return fmt.Errorf("upload failed with result code: %d, message: %s", result.ResultCode, result.Msg)
}
}
return nil
default:
return errs.NotImplement
@@ -841,7 +856,7 @@ func (d *Yun139) Other(ctx context.Context, args model.OtherArgs) (interface{},
}
switch args.Method {
case "video_preview":
uri = "/hcy/videoPreview/getPreviewInfo"
uri = "/videoPreview/getPreviewInfo"
default:
return nil, errs.NotSupport
}

View File

@@ -12,6 +12,8 @@ type Addition struct {
Type string `json:"type" type:"select" options:"personal_new,family,group,personal" default:"personal_new"`
CloudID string `json:"cloud_id"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
ReportRealSize bool `json:"report_real_size" type:"bool" default:"true" help:"Enable to report the real file size during upload"`
UseLargeThumbnail bool `json:"use_large_thumbnail" type:"bool" default:"false" help:"Enable to use large thumbnail for images"`
}
var config = driver.Config{

View File

@@ -143,6 +143,13 @@ type UploadResp struct {
} `json:"data"`
}
type InterLayerUploadResult struct {
XMLName xml.Name `xml:"result"`
Text string `xml:",chardata"`
ResultCode int `xml:"resultCode"`
Msg string `xml:"msg"`
}
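For reference, this struct decodes an inter-layer upload response of roughly the following shape (reconstructed from the xml tags above, not from captured traffic):

<result>
	<resultCode>0</resultCode>
	<msg>success</msg>
</result>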
type CloudContent struct {
ContentID string `json:"contentID"`
//Modifier string `json:"modifier"`
@@ -278,6 +285,25 @@ type PersonalUploadUrlResp struct {
}
}
type QueryRoutePolicyResp struct {
Success bool `json:"success"`
Code string `json:"code"`
Message string `json:"message"`
Data struct {
RoutePolicyList []struct {
SiteID string `json:"siteID"`
SiteCode string `json:"siteCode"`
ModName string `json:"modName"`
HttpUrl string `json:"httpUrl"`
HttpsUrl string `json:"httpsUrl"`
EnvID string `json:"envID"`
ExtInfo string `json:"extInfo"`
HashName string `json:"hashName"`
ModAddrType int `json:"modAddrType"`
} `json:"routePolicyList"`
} `json:"data"`
}
type RefreshTokenResp struct {
XMLName xml.Name `xml:"root"`
Return string `xml:"return"`

View File

@@ -67,6 +67,7 @@ func (d *Yun139) refreshToken() error {
if len(splits) < 3 {
return fmt.Errorf("authorization is invalid, splits < 3")
}
d.Account = splits[1]
strs := strings.Split(splits[2], "|")
if len(strs) < 4 {
return fmt.Errorf("authorization is invalid, strs < 4")
@@ -156,6 +157,64 @@ func (d *Yun139) request(pathname string, method string, callback base.ReqCallba
}
return res.Body(), nil
}
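// requestRoute queries the account's route policy; Init picks the HTTPS URL of
// the "personal" module from the returned RoutePolicyList as PersonalCloudHost.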
func (d *Yun139) requestRoute(data interface{}, resp interface{}) ([]byte, error) {
url := "https://user-njs.yun.139.com/user/route/qryRoutePolicy"
req := base.RestyClient.R()
randStr := random.String(16)
ts := time.Now().Format("2006-01-02 15:04:05")
req.SetBody(data)
body, err := utils.Json.Marshal(req.Body)
if err != nil {
return nil, err
}
sign := calSign(string(body), ts, randStr)
svcType := "1"
if d.isFamily() {
svcType = "2"
}
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"CMS-DEVICE": "default",
"Authorization": "Basic " + d.getAuthorization(),
"mcloud-channel": "1000101",
"mcloud-client": "10701",
//"mcloud-route": "001",
"mcloud-sign": fmt.Sprintf("%s,%s,%s", ts, randStr, sign),
//"mcloud-skey":"",
"mcloud-version": "7.14.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
"x-m4c-caller": "PC",
"x-m4c-src": "10002",
"x-SvcType": svcType,
"Inner-Hcy-Router-Https": "1",
})
var e BaseResp
req.SetResult(&e)
res, err := req.Execute(http.MethodPost, url)
if err != nil {
return nil, err
}
log.Debugln(res.String())
if !e.Success {
return nil, errors.New(e.Message)
}
if resp != nil {
err = utils.Json.Unmarshal(res.Body(), resp)
if err != nil {
return nil, err
}
}
return res.Body(), nil
}
func (d *Yun139) post(pathname string, data interface{}, resp interface{}) ([]byte, error) {
return d.request(pathname, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
@@ -390,7 +449,7 @@ func unicode(str string) string {
}
func (d *Yun139) personalRequest(pathname string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
url := "https://personal-kd-njs.yun.139.com" + pathname
url := d.getPersonalCloudHost() + pathname
req := base.RestyClient.R()
randStr := random.String(16)
ts := time.Now().Format("2006-01-02 15:04:05")
@@ -416,8 +475,6 @@ func (d *Yun139) personalRequest(pathname string, method string, callback base.R
"Mcloud-Route": "001",
"Mcloud-Sign": fmt.Sprintf("%s,%s,%s", ts, randStr, sign),
"Mcloud-Version": "7.14.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
@@ -479,7 +536,7 @@ func (d *Yun139) personalGetFiles(fileId string) ([]model.Obj, error) {
"parentFileId": fileId,
}
var resp PersonalListResp
_, err := d.personalPost("/hcy/file/list", data, &resp)
_, err := d.personalPost("/file/list", data, &resp)
if err != nil {
return nil, err
}
@@ -499,7 +556,15 @@ func (d *Yun139) personalGetFiles(fileId string) ([]model.Obj, error) {
} else {
var Thumbnails = item.Thumbnails
var ThumbnailUrl string
if len(Thumbnails) > 0 {
if d.UseLargeThumbnail {
for _, thumb := range Thumbnails {
if strings.Contains(thumb.Style, "Large") {
ThumbnailUrl = thumb.Url
break
}
}
}
if ThumbnailUrl == "" && len(Thumbnails) > 0 {
ThumbnailUrl = Thumbnails[len(Thumbnails)-1].Url
}
f = &model.ObjThumb{
@@ -527,7 +592,7 @@ func (d *Yun139) personalGetLink(fileId string) (string, error) {
data := base.Json{
"fileId": fileId,
}
res, err := d.personalPost("/hcy/file/getDownloadUrl",
res, err := d.personalPost("/file/getDownloadUrl",
data, nil)
if err != nil {
return "", err
@@ -552,3 +617,9 @@ func (d *Yun139) getAccount() string {
}
return d.Account
}
func (d *Yun139) getPersonalCloudHost() string {
if d.ref != nil {
return d.ref.getPersonalCloudHost()
}
return d.PersonalCloudHost
}

View File

@@ -365,7 +365,7 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
log.Debugf("uploadData: %+v", uploadData)
requestURL := uploadData.RequestURL
uploadHeaders := strings.Split(decodeURIComponent(uploadData.RequestHeader), "&")
req, err := http.NewRequest(http.MethodPut, requestURL, bytes.NewReader(byteData))
req, err := http.NewRequest(http.MethodPut, requestURL, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
@@ -375,11 +375,11 @@ func (d *Cloud189) newUpload(ctx context.Context, dstDir model.Obj, file model.F
req.Header.Set(v[0:i], v[i+1:])
}
r, err := base.HttpClient.Do(req)
log.Debugf("%+v %+v", r, r.Request.Header)
r.Body.Close()
if err != nil {
return err
}
log.Debugf("%+v %+v", r, r.Request.Header)
_ = r.Body.Close()
up(float64(i) * 100 / float64(count))
}
fileMd5 := hex.EncodeToString(md5Sum.Sum(nil))

View File

@@ -1,8 +1,8 @@
package _189pc
import (
"container/ring"
"context"
"fmt"
"net/http"
"strconv"
"strings"
@@ -14,6 +14,7 @@ import (
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Cloud189PC struct {
@@ -29,7 +30,7 @@ type Cloud189PC struct {
uploadThread int
familyTransferFolder *ring.Ring
familyTransferFolder *Cloud189Folder
cleanFamilyTransferFile func()
storageConfig driver.Config
@@ -48,9 +49,18 @@ func (y *Cloud189PC) GetAddition() driver.Additional {
}
func (y *Cloud189PC) Init(ctx context.Context) (err error) {
y.storageConfig = config
if y.isFamily() {
// keep compatibility with the legacy upload API
y.storageConfig.NoOverwriteUpload = y.isFamily() && (y.Addition.RapidUpload || y.Addition.UploadMethod == "old")
if y.Addition.RapidUpload || y.Addition.UploadMethod == "old" {
y.storageConfig.NoOverwriteUpload = true
}
} else {
// family-cloud transfer does not support overwrite upload
if y.Addition.FamilyTransfer {
y.storageConfig.NoOverwriteUpload = true
}
}
// normalize personal-cloud and family-cloud parameters
if y.isFamily() && y.RootFolderID == "-11" {
y.RootFolderID = ""
@@ -91,13 +101,14 @@ func (y *Cloud189PC) Init(ctx context.Context) (err error) {
}
}
// create transfer folders, guarding against duplicate file names
// create the transfer folder
if y.FamilyTransfer {
if y.familyTransferFolder, err = y.createFamilyTransferFolder(32); err != nil {
if err := y.createFamilyTransferFolder(); err != nil {
return err
}
}
// throttle the cleanup of transferred files
y.cleanFamilyTransferFile = utils.NewThrottle2(time.Minute, func() {
if err := y.cleanFamilyTransfer(context.TODO()); err != nil {
utils.Log.Errorf("cleanFamilyTransferFolderError:%s", err)
@@ -327,35 +338,49 @@ func (y *Cloud189PC) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
if !isFamily && y.FamilyTransfer {
// redirect the upload target to the family-cloud transfer folder
transferDstDir := dstDir
dstDir = (y.familyTransferFolder.Value).(*Cloud189Folder)
y.familyTransferFolder = y.familyTransferFolder.Next()
dstDir = y.familyTransferFolder
// upload under a temporary file name
srcName := stream.GetName()
stream = &WrapFileStreamer{
FileStreamer: stream,
Name: fmt.Sprintf("0%s.transfer", uuid.NewString()),
}
// upload via the family cloud
isFamily = true
overwrite = false
defer func() {
if newObj != nil {
// batch tasks occasionally fail to delete
y.cleanFamilyTransferFile()
// transfer the family-cloud file to the personal cloud
err = y.SaveFamilyFileToPersonCloud(context.TODO(), y.FamilyID, newObj, transferDstDir, true)
task := BatchTaskInfo{
FileId: newObj.GetID(),
FileName: newObj.GetName(),
IsFolder: BoolToNumber(newObj.IsDir()),
// delete the family-cloud source file
go y.Delete(context.TODO(), y.FamilyID, newObj)
// batch tasks occasionally fail to delete
go y.cleanFamilyTransferFile()
// return the error if the transfer failed
if err != nil {
return
}
// delete the source file
if resp, err := y.CreateBatchTask("DELETE", y.FamilyID, "", nil, task); err == nil {
y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
// permanently delete
if resp, err := y.CreateBatchTask("CLEAR_RECYCLE", y.FamilyID, "", nil, task); err == nil {
y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
// locate the transferred file
var file *Cloud189File
file, err = y.findFileByName(context.TODO(), newObj.GetName(), transferDstDir.GetID(), false)
if err != nil {
if err == errs.ObjectNotFound {
err = fmt.Errorf("unknown error: No transfer file obtained %s", newObj.GetName())
}
return
}
newObj = nil
// rename the transferred file back to its original name
newObj, err = y.Rename(context.TODO(), file, srcName)
if err != nil {
// if renaming failed, delete the transferred file
_ = y.Delete(context.TODO(), "", file)
}
return
}
}()
}

View File

@@ -18,6 +18,7 @@ import (
"strings"
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils/random"
)
@@ -208,3 +209,12 @@ func IF[V any](o bool, t V, f V) V {
}
return f
}
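// WrapFileStreamer overrides GetName on the embedded model.FileStreamer so the
// family-transfer flow can upload under a temporary unique name and rename the
// file back to its original name once the transfer succeeds.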
type WrapFileStreamer struct {
model.FileStreamer
Name string
}
func (w *WrapFileStreamer) GetName() string {
return w.Name
}

View File

@@ -2,30 +2,32 @@ package _189pc
import (
"bytes"
"container/ring"
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"fmt"
"io"
"math"
"net/http"
"net/http/cookiejar"
"net/url"
"os"
"regexp"
"sort"
"strconv"
"strings"
"time"
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/errgroup"
"github.com/alist-org/alist/v3/pkg/utils"
@@ -174,8 +176,8 @@ func (y *Cloud189PC) put(ctx context.Context, url string, headers map[string]str
}
var erron RespErr
jsoniter.Unmarshal(body, &erron)
xml.Unmarshal(body, &erron)
_ = jsoniter.Unmarshal(body, &erron)
_ = xml.Unmarshal(body, &erron)
if erron.HasError() {
return nil, &erron
}
@@ -185,39 +187,9 @@ func (y *Cloud189PC) put(ctx context.Context, url string, headers map[string]str
return body, nil
}
func (y *Cloud189PC) getFiles(ctx context.Context, fileId string, isFamily bool) ([]model.Obj, error) {
fullUrl := API_URL
if isFamily {
fullUrl += "/family/file"
}
fullUrl += "/listFiles.action"
res := make([]model.Obj, 0, 130)
res := make([]model.Obj, 0, 100)
for pageNum := 1; ; pageNum++ {
var resp Cloud189FilesResp
_, err := y.get(fullUrl, func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(map[string]string{
"folderId": fileId,
"fileType": "0",
"mediaAttr": "0",
"iconOption": "5",
"pageNum": fmt.Sprint(pageNum),
"pageSize": "130",
})
if isFamily {
r.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"orderBy": toFamilyOrderBy(y.OrderBy),
"descending": toDesc(y.OrderDirection),
})
} else {
r.SetQueryParams(map[string]string{
"recursive": "0",
"orderBy": y.OrderBy,
"descending": toDesc(y.OrderDirection),
})
}
}, &resp, isFamily)
resp, err := y.getFilesWithPage(ctx, fileId, isFamily, pageNum, 1000, y.OrderBy, y.OrderDirection)
if err != nil {
return nil, err
}
@@ -236,6 +208,63 @@ func (y *Cloud189PC) getFiles(ctx context.Context, fileId string, isFamily bool)
return res, nil
}
func (y *Cloud189PC) getFilesWithPage(ctx context.Context, fileId string, isFamily bool, pageNum int, pageSize int, orderBy string, orderDirection string) (*Cloud189FilesResp, error) {
fullUrl := API_URL
if isFamily {
fullUrl += "/family/file"
}
fullUrl += "/listFiles.action"
var resp Cloud189FilesResp
_, err := y.get(fullUrl, func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(map[string]string{
"folderId": fileId,
"fileType": "0",
"mediaAttr": "0",
"iconOption": "5",
"pageNum": fmt.Sprint(pageNum),
"pageSize": fmt.Sprint(pageSize),
})
if isFamily {
r.SetQueryParams(map[string]string{
"familyId": y.FamilyID,
"orderBy": toFamilyOrderBy(orderBy),
"descending": toDesc(orderDirection),
})
} else {
r.SetQueryParams(map[string]string{
"recursive": "0",
"orderBy": orderBy,
"descending": toDesc(orderDirection),
})
}
}, &resp, isFamily)
if err != nil {
return nil, err
}
return &resp, nil
}
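// findFileByName scans a folder page by page (ordered by filename, ascending)
// until it encounters searchName, returning errs.ObjectNotFound once a page
// comes back empty.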
func (y *Cloud189PC) findFileByName(ctx context.Context, searchName string, folderId string, isFamily bool) (*Cloud189File, error) {
for pageNum := 1; ; pageNum++ {
resp, err := y.getFilesWithPage(ctx, folderId, isFamily, pageNum, 10, "filename", "asc")
if err != nil {
return nil, err
}
// all pages fetched, stop
if resp.FileListAO.Count == 0 {
return nil, errs.ObjectNotFound
}
for i := 0; i < len(resp.FileListAO.FileList); i++ {
file := resp.FileListAO.FileList[i]
if file.Name == searchName {
return &file, nil
}
}
}
}
func (y *Cloud189PC) login() (err error) {
// initialize the parameters required for login
if y.loginParam == nil {
@@ -295,7 +324,7 @@ func (y *Cloud189PC) login() (err error) {
_, err = y.client.R().
SetResult(&tokenInfo).SetError(&erron).
SetQueryParams(clientSuffix()).
SetQueryParam("redirectURL", url.QueryEscape(loginresp.ToUrl)).
SetQueryParam("redirectURL", loginresp.ToUrl).
Post(API_URL + "/getSessionForPC.action")
if err != nil {
return
@@ -444,12 +473,8 @@ func (y *Cloud189PC) refreshSession() (err error) {
// regular upload
// zero-byte files cannot be uploaded
func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
var sliceSize = partSize(file.GetSize())
count := int(math.Ceil(float64(file.GetSize()) / float64(sliceSize)))
lastPartSize := file.GetSize() % sliceSize
if file.GetSize() > 0 && lastPartSize == 0 {
lastPartSize = sliceSize
}
size := file.GetSize()
sliceSize := partSize(size)
params := Params{
"parentFolderId": dstDir.GetID(),
@@ -481,24 +506,32 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
fileMd5 := md5.New()
silceMd5 := md5.New()
count := int(size / sliceSize)
lastPartSize := size % sliceSize
if lastPartSize > 0 {
count++
} else {
lastPartSize = sliceSize
}
fileMd5 := utils.MD5.NewFunc()
silceMd5 := utils.MD5.NewFunc()
silceMd5Hexs := make([]string, 0, count)
teeReader := io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5))
byteSize := sliceSize
for i := 1; i <= count; i++ {
if utils.IsCanceled(upCtx) {
break
}
byteData := make([]byte, sliceSize)
if i == count {
byteData = byteData[:lastPartSize]
byteSize = lastPartSize
}
byteData := make([]byte, byteSize)
// read the chunk
silceMd5.Reset()
if _, err := io.ReadFull(io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5)), byteData); err != io.EOF && err != nil {
if _, err := io.ReadFull(teeReader, byteData); err != io.EOF && err != nil {
return nil, err
}
@@ -508,6 +541,10 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
partInfo := fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
uploadUrls, err := y.GetMultiUploadUrls(ctx, isFamily, initMultiUpload.Data.UploadFileID, partInfo)
if err != nil {
return err
@@ -515,7 +552,8 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
// step 4: upload the slice
uploadUrl := uploadUrls[0]
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, bytes.NewReader(byteData), isFamily)
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false,
driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)), isFamily)
if err != nil {
return err
}
@@ -572,24 +610,43 @@ func (y *Cloud189PC) RapidUpload(ctx context.Context, dstDir model.Obj, stream m
// fast upload (hash-based instant upload with fallback)
func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
tempFile, err := file.CacheFullInTempFile()
var (
cache = file.GetFile()
tmpF *os.File
err error
)
size := file.GetSize()
if _, ok := cache.(io.ReaderAt); !ok && size > 0 {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
var sliceSize = partSize(file.GetSize())
count := int(math.Ceil(float64(file.GetSize()) / float64(sliceSize)))
lastSliceSize := file.GetSize() % sliceSize
if file.GetSize() > 0 && lastSliceSize == 0 {
defer func() {
_ = tmpF.Close()
_ = os.Remove(tmpF.Name())
}()
cache = tmpF
}
sliceSize := partSize(size)
count := int(size / sliceSize)
lastSliceSize := size % sliceSize
if lastSliceSize > 0 {
count++
} else {
lastSliceSize = sliceSize
}
// step 1: compute the required information first
byteSize := sliceSize
fileMd5 := md5.New()
silceMd5 := md5.New()
silceMd5Hexs := make([]string, 0, count)
fileMd5 := utils.MD5.NewFunc()
sliceMd5 := utils.MD5.NewFunc()
sliceMd5Hexs := make([]string, 0, count)
partInfos := make([]string, 0, count)
writers := []io.Writer{fileMd5, sliceMd5}
if tmpF != nil {
writers = append(writers, tmpF)
}
written := int64(0)
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
return nil, ctx.Err()
@@ -599,19 +656,31 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
byteSize = lastSliceSize
}
silceMd5.Reset()
if _, err := utils.CopyWithBufferN(io.MultiWriter(fileMd5, silceMd5), tempFile, byteSize); err != nil && err != io.EOF {
n, err := utils.CopyWithBufferN(io.MultiWriter(writers...), file, byteSize)
written += n
if err != nil && err != io.EOF {
return nil, err
}
md5Byte := silceMd5.Sum(nil)
silceMd5Hexs = append(silceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Byte)))
md5Byte := sliceMd5.Sum(nil)
sliceMd5Hexs = append(sliceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Byte)))
partInfos = append(partInfos, fmt.Sprint(i, "-", base64.StdEncoding.EncodeToString(md5Byte)))
sliceMd5.Reset()
}
if tmpF != nil {
if size > 0 && written != size {
return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size= %d, expect = %d ", written, size)
}
_, err = tmpF.Seek(0, io.SeekStart)
if err != nil {
return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0 ")
}
}
fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
sliceMd5Hex := fileMd5Hex
if file.GetSize() > sliceSize {
sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
if size > sliceSize {
sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(sliceMd5Hexs, "\n")))
}
fullUrl := UPLOAD_URL
@@ -677,7 +746,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
}
// step 4: upload the slice
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, io.NewSectionReader(tempFile, offset, byteSize), isFamily)
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, io.NewSectionReader(cache, offset, byteSize), isFamily)
if err != nil {
return err
}
@@ -759,14 +828,11 @@ func (y *Cloud189PC) GetMultiUploadUrls(ctx context.Context, isFamily bool, uplo
// legacy upload; the family cloud does not support overwrite
func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return nil, err
}
fileMd5, err := utils.HashFile(utils.MD5, tempFile)
tempFile, fileMd5, err := stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return nil, err
}
rateLimited := driver.NewLimitedUploadStream(ctx, io.NopCloser(tempFile))
// create the upload session
uploadInfo, err := y.OldUploadCreate(ctx, dstDir.GetID(), fileMd5, file.GetName(), fmt.Sprint(file.GetSize()), isFamily)
@@ -793,7 +859,7 @@ func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model
header["Edrive-UploadFileId"] = fmt.Sprint(status.UploadFileId)
}
_, err := y.put(ctx, status.FileUploadUrl, header, true, io.NopCloser(tempFile), isFamily)
_, err := y.put(ctx, status.FileUploadUrl, header, true, rateLimited, isFamily)
if err, ok := err.(*RespErr); ok && err.Code != "InputStreamReadError" {
return nil, err
}
@@ -902,8 +968,7 @@ func (y *Cloud189PC) isLogin() bool {
}
// create the family-cloud transfer folder
func (y *Cloud189PC) createFamilyTransferFolder(count int) (*ring.Ring, error) {
folders := ring.New(count)
func (y *Cloud189PC) createFamilyTransferFolder() error {
var rootFolder Cloud189Folder
_, err := y.post(API_URL+"/family/file/createFolder.action", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
@@ -912,63 +977,42 @@ func (y *Cloud189PC) createFamilyTransferFolder(count int) (*ring.Ring, error) {
})
}, &rootFolder, true)
if err != nil {
return nil, err
return err
}
folderCount := 0
// fetch existing folders
files, err := y.getFiles(context.TODO(), rootFolder.GetID(), true)
if err != nil {
return nil, err
}
for _, file := range files {
if folder, ok := file.(*Cloud189Folder); ok {
folders.Value = folder
folders = folders.Next()
folderCount++
}
}
// create new folders
for folderCount < count {
var newFolder Cloud189Folder
_, err := y.post(API_URL+"/family/file/createFolder.action", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"folderName": uuid.NewString(),
"familyId": y.FamilyID,
"parentId": rootFolder.GetID(),
})
}, &newFolder, true)
if err != nil {
return nil, err
}
folders.Value = &newFolder
folders = folders.Next()
folderCount++
}
return folders, nil
y.familyTransferFolder = &rootFolder
return nil
}
// clean up the transfer folder
func (y *Cloud189PC) cleanFamilyTransfer(ctx context.Context) error {
var tasks []BatchTaskInfo
r := y.familyTransferFolder
for p := r.Next(); p != r; p = p.Next() {
folder := p.Value.(*Cloud189Folder)
files, err := y.getFiles(ctx, folder.GetID(), true)
transferFolderId := y.familyTransferFolder.GetID()
for pageNum := 1; ; pageNum++ {
resp, err := y.getFilesWithPage(ctx, transferFolderId, true, pageNum, 100, "lastOpTime", "asc")
if err != nil {
return err
}
for _, file := range files {
// all pages fetched, stop
if resp.FileListAO.Count == 0 {
break
}
var tasks []BatchTaskInfo
for i := 0; i < len(resp.FileListAO.FolderList); i++ {
folder := resp.FileListAO.FolderList[i]
tasks = append(tasks, BatchTaskInfo{
FileId: folder.GetID(),
FileName: folder.GetName(),
IsFolder: BoolToNumber(folder.IsDir()),
})
}
for i := 0; i < len(resp.FileListAO.FileList); i++ {
file := resp.FileListAO.FileList[i]
tasks = append(tasks, BatchTaskInfo{
FileId: file.GetID(),
FileName: file.GetName(),
IsFolder: BoolToNumber(file.IsDir()),
})
}
}
if len(tasks) > 0 {
// delete
@@ -988,6 +1032,7 @@ func (y *Cloud189PC) cleanFamilyTransfer(ctx context.Context) error {
err = y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
return err
}
}
return nil
}
@@ -1063,6 +1108,34 @@ func (y *Cloud189PC) SaveFamilyFileToPersonCloud(ctx context.Context, familyId s
}
}
// permanently delete a file
func (y *Cloud189PC) Delete(ctx context.Context, familyId string, srcObj model.Obj) error {
task := BatchTaskInfo{
FileId: srcObj.GetID(),
FileName: srcObj.GetName(),
IsFolder: BoolToNumber(srcObj.IsDir()),
}
// delete the source file
resp, err := y.CreateBatchTask("DELETE", familyId, "", nil, task)
if err != nil {
return err
}
err = y.WaitBatchTask("DELETE", resp.TaskID, time.Second)
if err != nil {
return err
}
// empty the recycle bin
resp, err = y.CreateBatchTask("CLEAR_RECYCLE", familyId, "", nil, task)
if err != nil {
return err
}
err = y.WaitBatchTask("CLEAR_RECYCLE", resp.TaskID, time.Second)
if err != nil {
return err
}
return nil
}
func (y *Cloud189PC) CreateBatchTask(aType string, familyID string, targetFolderId string, other map[string]string, taskInfos ...BatchTaskInfo) (*CreateBatchTaskResp, error) {
var resp CreateBatchTaskResp
_, err := y.post(API_URL+"/batch/createBatchTask.action", func(req *resty.Request) {

View File

@@ -3,6 +3,7 @@ package alias
import (
"context"
"errors"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
@@ -126,8 +127,46 @@ func (d *Alias) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
return nil, errs.ObjectNotFound
}
func (d *Alias) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, parentDir, true)
if err == nil {
return fs.MakeDir(ctx, stdpath.Join(*reqPath, dirName))
}
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot make sub-dir")
}
return err
}
func (d *Alias) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be moved")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be moved to")
}
if err != nil {
return err
}
return fs.Move(ctx, *srcPath, *dstPath)
}
func (d *Alias) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
reqPath, err := d.getReqPath(ctx, srcObj)
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, srcObj, false)
if err == nil {
return fs.Rename(ctx, *reqPath, newName)
}
@@ -137,8 +176,33 @@ func (d *Alias) Rename(ctx context.Context, srcObj model.Obj, newName string) er
return err
}
func (d *Alias) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be copied")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be copied to")
}
if err != nil {
return err
}
_, err = fs.Copy(ctx, *srcPath, *dstPath)
return err
}
func (d *Alias) Remove(ctx context.Context, obj model.Obj) error {
reqPath, err := d.getReqPath(ctx, obj)
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, obj, false)
if err == nil {
return fs.Remove(ctx, *reqPath)
}
@@ -148,4 +212,110 @@ func (d *Alias) Remove(ctx context.Context, obj model.Obj) error {
return err
}
func (d *Alias) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, dstDir, true)
if err == nil {
return fs.PutDirectly(ctx, *reqPath, s)
}
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be Put")
}
return err
}
func (d *Alias) PutURL(ctx context.Context, dstDir model.Obj, name, url string) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, dstDir, true)
if err == nil {
return fs.PutURL(ctx, *reqPath, name, url)
}
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot offline download")
}
return err
}
func (d *Alias) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
meta, err := d.getArchiveMeta(ctx, dst, sub, args)
if err == nil {
return meta, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
l, err := d.listArchive(ctx, dst, sub, args)
if err == nil {
return l, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// How to stay compatible when one alias target supports driver-level extraction and another does not:
// if the archive lives in a driver without driver extraction, GetArchiveMeta returns errs.NotImplement, the extract URL is prefixed with /ae, and Extract is never called;
// if the archive lives in a driver with driver extraction, GetArchiveMeta returns valid metadata, the extract URL is prefixed with /ad, and Extract is called.
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
link, err := d.extract(ctx, dst, sub, args)
if err == nil {
if !args.Redirect && len(link.URL) > 0 {
if d.DownloadConcurrency > 0 {
link.Concurrency = d.DownloadConcurrency
}
if d.DownloadPartSize > 0 {
link.PartSize = d.DownloadPartSize * utils.KB
}
}
return link, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be decompressed")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be decompressed to")
}
if err != nil {
return err
}
_, err = fs.ArchiveDecompress(ctx, *srcPath, *dstPath, args)
return err
}
var _ driver.Driver = (*Alias)(nil)

View File

@@ -13,13 +13,14 @@ type Addition struct {
ProtectSameName bool `json:"protect_same_name" default:"true" required:"false" help:"Protects same-name files from Delete or Rename"`
DownloadConcurrency int `json:"download_concurrency" default:"0" required:"false" type:"number" help:"Need to enable proxy"`
DownloadPartSize int `json:"download_part_size" default:"0" type:"number" required:"false" help:"Need to enable proxy. Unit: KB"`
Writable bool `json:"writable" type:"bool" default:"false"`
}
var config = driver.Config{
Name: "Alias",
LocalSort: true,
NoCache: true,
NoUpload: true,
NoUpload: false,
DefaultRoot: "/",
ProxyRangeOption: true,
}

View File

@@ -3,9 +3,11 @@ package alias
import (
"context"
"fmt"
"net/url"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/internal/model"
@@ -63,6 +65,7 @@ func (d *Alias) get(ctx context.Context, path string, dst, sub string) (model.Ob
Size: obj.GetSize(),
Modified: obj.ModTime(),
IsFolder: obj.IsDir(),
HashInfo: obj.GetHash(),
}, nil
}
@@ -124,9 +127,9 @@ func (d *Alias) link(ctx context.Context, dst, sub string, args model.LinkArgs)
return link, err
}
func (d *Alias) getReqPath(ctx context.Context, obj model.Obj) (*string, error) {
func (d *Alias) getReqPath(ctx context.Context, obj model.Obj, isParent bool) (*string, error) {
root, sub := d.getRootAndPath(obj.GetPath())
if sub == "" {
if sub == "" && !isParent {
return nil, errs.NotSupport
}
dsts, ok := d.pathMap[root]
@@ -155,3 +158,68 @@ func (d *Alias) getReqPath(ctx context.Context, obj model.Obj) (*string, error)
}
return reqPath, nil
}
func (d *Alias) getArchiveMeta(ctx context.Context, dst, sub string, args model.ArchiveArgs) (model.ArchiveMeta, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
return op.GetArchiveMeta(ctx, storage, reqActualPath, model.ArchiveMetaArgs{
ArchiveArgs: args,
Refresh: true,
})
}
return nil, errs.NotImplement
}
func (d *Alias) listArchive(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) ([]model.Obj, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
return op.ListArchive(ctx, storage, reqActualPath, model.ArchiveListArgs{
ArchiveInnerArgs: args,
Refresh: true,
})
}
return nil, errs.NotImplement
}
func (d *Alias) extract(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) (*model.Link, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
if _, ok := storage.(*Alias); !ok && !args.Redirect {
link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
return link, err
}
_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
if err != nil {
return nil, err
}
if common.ShouldProxy(storage, stdpath.Base(sub)) {
link := &model.Link{
URL: fmt.Sprintf("%s/ap%s?inner=%s&pass=%s&sign=%s",
common.GetApiUrl(args.HttpReq),
utils.EncodePath(reqPath, true),
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password),
sign.SignArchive(reqPath)),
}
if args.HttpReq != nil && d.ProxyRange {
link.RangeReadCloser = common.NoProxyRange
}
return link, nil
}
link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
return link, err
}
return nil, errs.NotImplement
}

View File

@@ -5,12 +5,14 @@ import (
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
@@ -34,7 +36,7 @@ func (d *AListV3) GetAddition() driver.Additional {
func (d *AListV3) Init(ctx context.Context) error {
d.Addition.Address = strings.TrimSuffix(d.Addition.Address, "/")
var resp common.Resp[MeResp]
_, err := d.request("/me", http.MethodGet, func(req *resty.Request) {
_, _, err := d.request("/me", http.MethodGet, func(req *resty.Request) {
req.SetResult(&resp)
})
if err != nil {
@@ -48,15 +50,15 @@ func (d *AListV3) Init(ctx context.Context) error {
}
}
// re-get the user info
_, err = d.request("/me", http.MethodGet, func(req *resty.Request) {
_, _, err = d.request("/me", http.MethodGet, func(req *resty.Request) {
req.SetResult(&resp)
})
if err != nil {
return err
}
if resp.Data.Role == model.GUEST {
url := d.Address + "/api/public/settings"
res, err := base.RestyClient.R().Get(url)
if utils.SliceContains(resp.Data.Role, model.GUEST) {
u := d.Address + "/api/public/settings"
res, err := base.RestyClient.R().Get(u)
if err != nil {
return err
}
@@ -74,7 +76,7 @@ func (d *AListV3) Drop(ctx context.Context) error {
func (d *AListV3) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var resp common.Resp[FsListResp]
_, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ListReq{
PageReq: model.PageReq{
Page: 1,
@@ -116,7 +118,7 @@ func (d *AListV3) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
userAgent = base.UserAgent
}
}
_, err := d.request("/fs/get", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/get", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(FsGetReq{
Path: file.GetPath(),
Password: d.MetaPassword,
@@ -131,7 +133,7 @@ func (d *AListV3) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
}
func (d *AListV3) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
_, err := d.request("/fs/mkdir", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/mkdir", http.MethodPost, func(req *resty.Request) {
req.SetBody(MkdirOrLinkReq{
Path: path.Join(parentDir.GetPath(), dirName),
})
@@ -140,7 +142,7 @@ func (d *AListV3) MakeDir(ctx context.Context, parentDir model.Obj, dirName stri
}
func (d *AListV3) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/fs/move", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/move", http.MethodPost, func(req *resty.Request) {
req.SetBody(MoveCopyReq{
SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(),
@@ -151,7 +153,7 @@ func (d *AListV3) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
}
func (d *AListV3) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
_, err := d.request("/fs/rename", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/rename", http.MethodPost, func(req *resty.Request) {
req.SetBody(RenameReq{
Path: srcObj.GetPath(),
Name: newName,
@@ -161,7 +163,7 @@ func (d *AListV3) Rename(ctx context.Context, srcObj model.Obj, newName string)
}
func (d *AListV3) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/fs/copy", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/copy", http.MethodPost, func(req *resty.Request) {
req.SetBody(MoveCopyReq{
SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(),
@@ -172,7 +174,7 @@ func (d *AListV3) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
}
func (d *AListV3) Remove(ctx context.Context, obj model.Obj) error {
_, err := d.request("/fs/remove", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/remove", http.MethodPost, func(req *resty.Request) {
req.SetBody(RemoveReq{
Dir: path.Dir(obj.GetPath()),
Names: []string{obj.GetName()},
@@ -181,25 +183,29 @@ func (d *AListV3) Remove(ctx context.Context, obj model.Obj) error {
return err
}
func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
req, err := http.NewRequestWithContext(ctx, http.MethodPut, d.Address+"/api/fs/put", stream)
func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
})
req, err := http.NewRequestWithContext(ctx, http.MethodPut, d.Address+"/api/fs/put", reader)
if err != nil {
return err
}
req.Header.Set("Authorization", d.Token)
req.Header.Set("File-Path", path.Join(dstDir.GetPath(), stream.GetName()))
req.Header.Set("File-Path", path.Join(dstDir.GetPath(), s.GetName()))
req.Header.Set("Password", d.MetaPassword)
if md5 := stream.GetHash().GetHash(utils.MD5); len(md5) > 0 {
if md5 := s.GetHash().GetHash(utils.MD5); len(md5) > 0 {
req.Header.Set("X-File-Md5", md5)
}
if sha1 := stream.GetHash().GetHash(utils.SHA1); len(sha1) > 0 {
if sha1 := s.GetHash().GetHash(utils.SHA1); len(sha1) > 0 {
req.Header.Set("X-File-Sha1", sha1)
}
if sha256 := stream.GetHash().GetHash(utils.SHA256); len(sha256) > 0 {
if sha256 := s.GetHash().GetHash(utils.SHA256); len(sha256) > 0 {
req.Header.Set("X-File-Sha256", sha256)
}
req.ContentLength = stream.GetSize()
req.ContentLength = s.GetSize()
// client := base.NewHttpClient()
// client.Timeout = time.Hour * 6
res, err := base.HttpClient.Do(req)
@@ -228,6 +234,127 @@ func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
return nil
}
func (d *AListV3) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveMetaResp]
_, code, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var tree []model.ObjTree
if resp.Data.Content != nil {
tree = make([]model.ObjTree, 0, len(resp.Data.Content))
for _, content := range resp.Data.Content {
tree = append(tree, &content)
}
}
return &model.ArchiveMetaInfo{
Comment: resp.Data.Comment,
Encrypted: resp.Data.Encrypted,
Tree: tree,
}, nil
}
func (d *AListV3) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveListResp]
_, code, err := d.request("/fs/archive/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveListReq{
ArchiveMetaReq: ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
},
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
InnerPath: args.InnerPath,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *AListV3) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotSupport
}
var resp common.Resp[ArchiveMetaResp]
_, _, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
})
})
if err != nil {
return nil, err
}
return &model.Link{
URL: fmt.Sprintf("%s?inner=%s&pass=%s&sign=%s",
resp.Data.RawURL,
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password),
resp.Data.Sign),
}, nil
}
func (d *AListV3) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) error {
if !d.ForwardArchiveReq {
return errs.NotImplement
}
dir, name := path.Split(srcObj.GetPath())
_, _, err := d.request("/fs/archive/decompress", http.MethodPost, func(req *resty.Request) {
req.SetBody(DecompressReq{
ArchivePass: args.Password,
CacheFull: args.CacheFull,
DstDir: dstDir.GetPath(),
InnerPath: args.InnerPath,
Name: []string{name},
PutIntoNewDir: args.PutIntoNewDir,
SrcDir: dir,
})
})
return err
}
//func (d *AList) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}

View File

@@ -13,6 +13,7 @@ type Addition struct {
Password string `json:"password"`
Token string `json:"token"`
PassUAToUpsteam bool `json:"pass_ua_to_upsteam" default:"true"`
ForwardArchiveReq bool `json:"forward_archive_requests" default:"true"`
}
var config = driver.Config{


@@ -1,9 +1,11 @@
package alist_v3
import (
"encoding/json"
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
)
type ListReq struct {
@@ -75,9 +77,109 @@ type MeResp struct {
Username string `json:"username"`
Password string `json:"password"`
BasePath string `json:"base_path"`
Role int `json:"role"`
Role IntSlice `json:"role"`
Disabled bool `json:"disabled"`
Permission int `json:"permission"`
SsoId string `json:"sso_id"`
Otp bool `json:"otp"`
}
type ArchiveMetaReq struct {
ArchivePass string `json:"archive_pass"`
Password string `json:"password"`
Path string `json:"path"`
Refresh bool `json:"refresh"`
}
type TreeResp struct {
ObjResp
Children []TreeResp `json:"children"`
hashCache *utils.HashInfo
}
func (t *TreeResp) GetSize() int64 {
return t.Size
}
func (t *TreeResp) GetName() string {
return t.Name
}
func (t *TreeResp) ModTime() time.Time {
return t.Modified
}
func (t *TreeResp) CreateTime() time.Time {
return t.Created
}
func (t *TreeResp) IsDir() bool {
return t.ObjResp.IsDir
}
func (t *TreeResp) GetHash() utils.HashInfo {
return utils.FromString(t.HashInfo)
}
func (t *TreeResp) GetID() string {
return ""
}
func (t *TreeResp) GetPath() string {
return ""
}
func (t *TreeResp) GetChildren() []model.ObjTree {
ret := make([]model.ObjTree, 0, len(t.Children))
for _, child := range t.Children {
ret = append(ret, &child)
}
return ret
}
func (t *TreeResp) Thumb() string {
return t.ObjResp.Thumb
}
type ArchiveMetaResp struct {
Comment string `json:"comment"`
Encrypted bool `json:"encrypted"`
Content []TreeResp `json:"content"`
RawURL string `json:"raw_url"`
Sign string `json:"sign"`
}
type ArchiveListReq struct {
model.PageReq
ArchiveMetaReq
InnerPath string `json:"inner_path"`
}
type ArchiveListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
}
type DecompressReq struct {
ArchivePass string `json:"archive_pass"`
CacheFull bool `json:"cache_full"`
DstDir string `json:"dst_dir"`
InnerPath string `json:"inner_path"`
Name []string `json:"name"`
PutIntoNewDir bool `json:"put_into_new_dir"`
SrcDir string `json:"src_dir"`
}
type IntSlice []int
func (s *IntSlice) UnmarshalJSON(data []byte) error {
if len(data) > 0 && data[0] == '[' {
return json.Unmarshal(data, (*[]int)(s))
}
var single int
if err := json.Unmarshal(data, &single); err != nil {
return err
}
*s = []int{single}
return nil
}
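The `Role` change above (int -> IntSlice) keeps the driver compatible with both older servers that return a bare integer and newer ones that return an array. A minimal standalone sketch of that behavior — the shim is copied from the diff, the sample JSON values are made up:
package main
import (
	"encoding/json"
	"fmt"
)
type IntSlice []int
func (s *IntSlice) UnmarshalJSON(data []byte) error {
	if len(data) > 0 && data[0] == '[' {
		return json.Unmarshal(data, (*[]int)(s))
	}
	var single int
	if err := json.Unmarshal(data, &single); err != nil {
		return err
	}
	*s = []int{single}
	return nil
}
func main() {
	var legacy, current IntSlice
	_ = json.Unmarshal([]byte(`2`), &legacy)       // old servers: bare int
	_ = json.Unmarshal([]byte(`[1, 3]`), &current) // new servers: array
	fmt.Println(legacy, current) // prints: [2] [1 3]
}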


@@ -17,7 +17,7 @@ func (d *AListV3) login() error {
return nil
}
var resp common.Resp[LoginResp]
_, err := d.request("/auth/login", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/auth/login", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(base.Json{
"username": d.Username,
"password": d.Password,
@@ -31,7 +31,7 @@ func (d *AListV3) login() error {
return nil
}
func (d *AListV3) request(api, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
func (d *AListV3) request(api, method string, callback base.ReqCallback, retry ...bool) ([]byte, int, error) {
url := d.Address + "/api" + api
req := base.RestyClient.R()
req.SetHeader("Authorization", d.Token)
@@ -40,22 +40,26 @@ func (d *AListV3) request(api, method string, callback base.ReqCallback, retry .
}
res, err := req.Execute(method, url)
if err != nil {
return nil, err
code := 0
if res != nil {
code = res.StatusCode()
}
return nil, code, err
}
log.Debugf("[alist_v3] response body: %s", res.String())
if res.StatusCode() >= 400 {
return nil, fmt.Errorf("request failed, status: %s", res.Status())
return nil, res.StatusCode(), fmt.Errorf("request failed, status: %s", res.Status())
}
code := utils.Json.Get(res.Body(), "code").ToInt()
if code != 200 {
if (code == 401 || code == 403) && !utils.IsBool(retry...) {
err = d.login()
if err != nil {
return nil, err
return nil, code, err
}
return d.request(api, method, callback, true)
}
return nil, fmt.Errorf("request failed,code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
return nil, code, fmt.Errorf("request failed,code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
}
return res.Body(), nil
return res.Body(), 200, nil
}


@@ -14,13 +14,12 @@ import (
"os"
"time"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/cron"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
@@ -56,7 +55,7 @@ func (d *AliDrive) Init(ctx context.Context) error {
if err != nil {
return err
}
d.DriveId = utils.Json.Get(res, "default_drive_id").ToString()
d.DriveId = d.Addition.DeviceID
d.UserID = utils.Json.Get(res, "user_id").ToString()
d.cron = cron.NewCron(time.Hour * 2)
d.cron.Do(func() {
@@ -194,7 +193,10 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
}
if d.RapidUpload {
buf := bytes.NewBuffer(make([]byte, 0, 1024))
utils.CopyWithBufferN(buf, file, 1024)
_, err := utils.CopyWithBufferN(buf, file, 1024)
if err != nil {
return err
}
reqBody["pre_hash"] = utils.HashData(utils.SHA1, buf.Bytes())
if localFile != nil {
if _, err := localFile.Seek(0, io.SeekStart); err != nil {
@@ -286,6 +288,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
file.Reader = localFile
}
rateLimited := driver.NewLimitedUploadStream(ctx, file)
for i, partInfo := range resp.PartInfoList {
if utils.IsCanceled(ctx) {
return ctx.Err()
@@ -294,7 +297,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
if d.InternalUpload {
url = partInfo.InternalUploadUrl
}
req, err := http.NewRequest("PUT", url, io.LimitReader(file, DEFAULT))
req, err := http.NewRequest("PUT", url, io.LimitReader(rateLimited, DEFAULT))
if err != nil {
return err
}
@@ -303,7 +306,7 @@ func (d *AliDrive) Put(ctx context.Context, dstDir model.Obj, streamer model.Fil
if err != nil {
return err
}
res.Body.Close()
_ = res.Body.Close()
if count > 0 {
up(float64(i) * 100 / float64(count))
}


@@ -8,7 +8,7 @@ import (
type Addition struct {
driver.RootID
RefreshToken string `json:"refresh_token" required:"true"`
//DeviceID string `json:"device_id" required:"true"`
DeviceID string `json:"device_id" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
RapidUpload bool `json:"rapid_upload"`


@@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"net/http"
"path/filepath"
"time"
"github.com/Xhofe/rateg"
@@ -14,6 +15,7 @@ import (
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
)
type AliyundriveOpen struct {
@@ -72,6 +74,18 @@ func (d *AliyundriveOpen) Drop(ctx context.Context) error {
return nil
}
// GetRoot implements the driver.GetRooter interface to properly set up the root object
func (d *AliyundriveOpen) GetRoot(ctx context.Context) (model.Obj, error) {
return &model.Object{
ID: d.RootFolderID,
Path: "/",
Name: "root",
Size: 0,
Modified: d.Modified,
IsFolder: true,
}, nil
}
func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if d.limitList == nil {
return nil, fmt.Errorf("driver not init")
@@ -80,9 +94,17 @@ func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.Li
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return fileToObj(src), nil
objs, err := utils.SliceConvert(files, func(src File) (model.Obj, error) {
obj := fileToObj(src)
// Set the correct path for the object
if dir.GetPath() != "" {
obj.Path = filepath.Join(dir.GetPath(), obj.GetName())
}
return obj, nil
})
return objs, err
}
func (d *AliyundriveOpen) link(ctx context.Context, file model.Obj) (*model.Link, error) {
@@ -132,7 +154,16 @@ func (d *AliyundriveOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirN
if err != nil {
return nil, err
}
return fileToObj(newDir), nil
obj := fileToObj(newDir)
// Set the correct Path for the returned directory object
if parentDir.GetPath() != "" {
obj.Path = filepath.Join(parentDir.GetPath(), dirName)
} else {
obj.Path = "/" + dirName
}
return obj, nil
}
func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
@@ -142,20 +173,24 @@ func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (m
"drive_id": d.DriveId,
"file_id": srcObj.GetID(),
"to_parent_file_id": dstDir.GetID(),
"check_name_mode": "refuse", // optional:ignore,auto_rename,refuse
"check_name_mode": "ignore", // optional:ignore,auto_rename,refuse
//"new_name": "newName", // The new name to use when a file of the same name exists
}).SetResult(&resp)
})
if err != nil {
return nil, err
}
if resp.Exist {
return nil, errors.New("existence of files with the same name")
}
if srcObj, ok := srcObj.(*model.ObjThumb); ok {
srcObj.ID = resp.FileID
srcObj.Modified = time.Now()
srcObj.Path = filepath.Join(dstDir.GetPath(), srcObj.GetName())
// Check for duplicate files in the destination directory
if err := d.removeDuplicateFiles(ctx, dstDir.GetPath(), srcObj.GetName(), srcObj.GetID()); err != nil {
// Only log a warning instead of returning an error since the move operation has already completed successfully
log.Warnf("Failed to remove duplicate files after move: %v", err)
}
return srcObj, nil
}
return nil, nil
@@ -173,19 +208,47 @@ func (d *AliyundriveOpen) Rename(ctx context.Context, srcObj model.Obj, newName
if err != nil {
return nil, err
}
return fileToObj(newFile), nil
// Check for duplicate files in the parent directory
parentPath := filepath.Dir(srcObj.GetPath())
if err := d.removeDuplicateFiles(ctx, parentPath, newName, newFile.FileId); err != nil {
// Only log a warning instead of returning an error since the rename operation has already completed successfully
log.Warnf("Failed to remove duplicate files after rename: %v", err)
}
obj := fileToObj(newFile)
// Set the correct Path for the renamed object
if parentPath != "" && parentPath != "." {
obj.Path = filepath.Join(parentPath, newName)
} else {
obj.Path = "/" + newName
}
return obj, nil
}
func (d *AliyundriveOpen) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
var resp MoveOrCopyResp
_, err := d.request("/adrive/v1.0/openFile/copy", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": srcObj.GetID(),
"to_parent_file_id": dstDir.GetID(),
"auto_rename": true,
})
"auto_rename": false,
}).SetResult(&resp)
})
if err != nil {
return err
}
// Check for duplicate files in the destination directory
if err := d.removeDuplicateFiles(ctx, dstDir.GetPath(), srcObj.GetName(), resp.FileID); err != nil {
// Only log a warning instead of returning an error since the copy operation has already completed successfully
log.Warnf("Failed to remove duplicate files after copy: %v", err)
}
return nil
}
func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
@@ -203,7 +266,18 @@ func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
}
func (d *AliyundriveOpen) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
return d.upload(ctx, dstDir, stream, up)
obj, err := d.upload(ctx, dstDir, stream, up)
// Set the correct Path for the returned file object
if obj != nil && obj.GetPath() == "" {
if dstDir.GetPath() != "" {
if objWithPath, ok := obj.(model.SetPath); ok {
objWithPath.SetPath(filepath.Join(dstDir.GetPath(), obj.GetName()))
}
}
}
return obj, err
}
func (d *AliyundriveOpen) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
@@ -235,3 +309,4 @@ var _ driver.MkdirResult = (*AliyundriveOpen)(nil)
var _ driver.MoveResult = (*AliyundriveOpen)(nil)
var _ driver.RenameResult = (*AliyundriveOpen)(nil)
var _ driver.PutResult = (*AliyundriveOpen)(nil)
var _ driver.GetRooter = (*AliyundriveOpen)(nil)


@@ -11,7 +11,7 @@ type Addition struct {
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at"`
OrderDirection string `json:"order_direction" type:"select" options:"ASC,DESC"`
OauthTokenURL string `json:"oauth_token_url" default:"https://api.nn.ci/alist/ali_open/token"`
OauthTokenURL string `json:"oauth_token_url" default:"https://api.alistgo.com/alist/ali_open/token"`
ClientID string `json:"client_id" required:"false" help:"Keep it empty if you don't have one"`
ClientSecret string `json:"client_secret" required:"false" help:"Keep it empty if you don't have one"`
RemoveWay string `json:"remove_way" required:"true" type:"select" options:"trash,delete"`


@@ -1,7 +1,6 @@
package aliyundrive_open
import (
"bytes"
"context"
"encoding/base64"
"fmt"
@@ -15,6 +14,7 @@ import (
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/avast/retry-go"
@@ -77,7 +77,7 @@ func (d *AliyundriveOpen) uploadPart(ctx context.Context, r io.Reader, partInfo
if err != nil {
return err
}
res.Body.Close()
_ = res.Body.Close()
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusConflict {
return fmt.Errorf("upload status: %d", res.StatusCode)
}
@@ -131,16 +131,19 @@ func (d *AliyundriveOpen) calProofCode(stream model.FileStreamer) (string, error
return "", err
}
length := proofRange.End - proofRange.Start
buf := bytes.NewBuffer(make([]byte, 0, length))
reader, err := stream.RangeRead(http_range.Range{Start: proofRange.Start, Length: length})
if err != nil {
return "", err
}
_, err = utils.CopyWithBufferN(buf, reader, length)
buf := make([]byte, length)
n, err := io.ReadFull(reader, buf)
if err == io.ErrUnexpectedEOF {
return "", fmt.Errorf("can't read data, expected=%d, got=%d", len(buf), n)
}
if err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
return base64.StdEncoding.EncodeToString(buf), nil
}
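// Note: compared with the old buffer-and-copy approach, io.ReadFull reads
// exactly `length` bytes into a fixed buffer and turns a short range read
// into an explicit expected-vs-got error, which is easier to diagnose than
// a bare io.EOF from the copy helper.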
func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
@@ -183,25 +186,18 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
_, err, e := d.requestReturnErrResp("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
req.SetBody(createData).SetResult(&createResp)
})
var tmpF model.File
if err != nil {
if e.Code != "PreHashMatched" || !rapidUpload {
return nil, err
}
log.Debugf("[aliyundrive_open] pre_hash matched, start rapid upload")
hi := stream.GetHash()
hash := hi.GetHash(utils.SHA1)
if len(hash) <= 0 {
tmpF, err = stream.CacheFullInTempFile()
hash := stream.GetHash().GetHash(utils.SHA1)
if len(hash) != utils.SHA1.Width {
_, hash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA1)
if err != nil {
return nil, err
}
hash, err = utils.HashFile(utils.SHA1, tmpF)
if err != nil {
return nil, err
}
}
delete(createData, "pre_hash")
@@ -251,8 +247,9 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
rd = utils.NewMultiReadable(srd)
}
err = retry.Do(func() error {
rd.Reset()
return d.uploadPart(ctx, rd, createResp.PartInfoList[i])
_ = rd.Reset()
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
return d.uploadPart(ctx, rateLimitedRd, createResp.PartInfoList[i])
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),


@@ -10,6 +10,7 @@ import (
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
@@ -186,3 +187,36 @@ func (d *AliyundriveOpen) getAccessToken() string {
}
return d.AccessToken
}
// Remove duplicate files with the same name in the given directory path,
// preserving the file with the given skipID if provided
func (d *AliyundriveOpen) removeDuplicateFiles(ctx context.Context, parentPath string, fileName string, skipID string) error {
// Handle empty path (root directory) case
if parentPath == "" {
parentPath = "/"
}
// List all files in the parent directory
files, err := op.List(ctx, d, parentPath, model.ListArgs{})
if err != nil {
return err
}
// Find all files with the same name
var duplicates []model.Obj
for _, file := range files {
if file.GetName() == fileName && file.GetID() != skipID {
duplicates = append(duplicates, file)
}
}
// Remove all duplicate files except the one with the given skipID
for _, file := range duplicates {
err := d.Remove(ctx, file)
if err != nil {
return err
}
}
return nil
}
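// Note: Move now sends check_name_mode "ignore" and Copy sends
// auto_rename=false, so the server can briefly hold two entries with the
// same name after either call; this helper prunes the stale entry while
// keeping skipID. Callers treat a failure here as a warning only, since the
// primary operation has already succeeded.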


@@ -2,9 +2,11 @@ package drivers
import (
_ "github.com/alist-org/alist/v3/drivers/115"
_ "github.com/alist-org/alist/v3/drivers/115_open"
_ "github.com/alist-org/alist/v3/drivers/115_share"
_ "github.com/alist-org/alist/v3/drivers/123"
_ "github.com/alist-org/alist/v3/drivers/123_link"
_ "github.com/alist-org/alist/v3/drivers/123_open"
_ "github.com/alist-org/alist/v3/drivers/123_share"
_ "github.com/alist-org/alist/v3/drivers/139"
_ "github.com/alist-org/alist/v3/drivers/189"
@@ -15,12 +17,16 @@ import (
_ "github.com/alist-org/alist/v3/drivers/aliyundrive"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_open"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_share"
_ "github.com/alist-org/alist/v3/drivers/azure_blob"
_ "github.com/alist-org/alist/v3/drivers/baidu_netdisk"
_ "github.com/alist-org/alist/v3/drivers/baidu_photo"
_ "github.com/alist-org/alist/v3/drivers/baidu_share"
_ "github.com/alist-org/alist/v3/drivers/chaoxing"
_ "github.com/alist-org/alist/v3/drivers/cloudreve"
_ "github.com/alist-org/alist/v3/drivers/cloudreve_v4"
_ "github.com/alist-org/alist/v3/drivers/crypt"
_ "github.com/alist-org/alist/v3/drivers/doubao"
_ "github.com/alist-org/alist/v3/drivers/doubao_share"
_ "github.com/alist-org/alist/v3/drivers/dropbox"
_ "github.com/alist-org/alist/v3/drivers/febbox"
_ "github.com/alist-org/alist/v3/drivers/ftp"


@@ -0,0 +1,313 @@
package azure_blob
import (
"context"
"fmt"
"io"
"path"
"regexp"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
)
// Azure Blob Storage based on the blob APIs
// Link: https://learn.microsoft.com/rest/api/storageservices/blob-service-rest-api
type AzureBlob struct {
model.Storage
Addition
client *azblob.Client
containerClient *container.Client
config driver.Config
}
// Config returns the driver configuration.
func (d *AzureBlob) Config() driver.Config {
return d.config
}
// GetAddition returns additional settings specific to Azure Blob Storage.
func (d *AzureBlob) GetAddition() driver.Additional {
return &d.Addition
}
// Init initializes the Azure Blob Storage client using shared key authentication.
func (d *AzureBlob) Init(ctx context.Context) error {
// Validate the endpoint URL
accountName := extractAccountName(d.Addition.Endpoint)
if !regexp.MustCompile(`^[a-z0-9]+$`).MatchString(accountName) {
return fmt.Errorf("invalid storage account name: must be chars of lowercase letters or numbers only")
}
credential, err := azblob.NewSharedKeyCredential(accountName, d.Addition.AccessKey)
if err != nil {
return fmt.Errorf("failed to create credential: %w", err)
}
// Check if Endpoint is just account name
endpoint := d.Addition.Endpoint
if accountName == endpoint {
endpoint = fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
}
// Initialize Azure Blob client with retry policy
client, err := azblob.NewClientWithSharedKeyCredential(endpoint, credential,
&azblob.ClientOptions{ClientOptions: azcore.ClientOptions{
Retry: policy.RetryOptions{
MaxRetries: MaxRetries,
RetryDelay: RetryDelay,
},
}})
if err != nil {
return fmt.Errorf("failed to create client: %w", err)
}
d.client = client
// Ensure container exists or create it
containerName := strings.Trim(d.Addition.ContainerName, "/ \\")
if containerName == "" {
return fmt.Errorf("container name cannot be empty")
}
return d.createContainerIfNotExists(ctx, containerName)
}
// Drop releases resources associated with the Azure Blob client.
func (d *AzureBlob) Drop(ctx context.Context) error {
d.client = nil
return nil
}
// List retrieves blobs and directories under the specified path.
func (d *AzureBlob) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
prefix := ensureTrailingSlash(dir.GetPath())
pager := d.containerClient.NewListBlobsHierarchyPager("/", &container.ListBlobsHierarchyOptions{
Prefix: &prefix,
})
var objs []model.Obj
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list blobs: %w", err)
}
// Process directories
for _, blobPrefix := range page.Segment.BlobPrefixes {
objs = append(objs, &model.Object{
Name: path.Base(strings.TrimSuffix(*blobPrefix.Name, "/")),
Path: *blobPrefix.Name,
Modified: *blobPrefix.Properties.LastModified,
Ctime: *blobPrefix.Properties.CreationTime,
IsFolder: true,
})
}
// Process files
for _, blob := range page.Segment.BlobItems {
if strings.HasSuffix(*blob.Name, "/") {
continue
}
objs = append(objs, &model.Object{
Name: path.Base(*blob.Name),
Path: *blob.Name,
Size: *blob.Properties.ContentLength,
Modified: *blob.Properties.LastModified,
Ctime: *blob.Properties.CreationTime,
IsFolder: false,
})
}
}
return objs, nil
}
// Link generates a temporary SAS URL for accessing a blob.
func (d *AzureBlob) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
blobClient := d.containerClient.NewBlobClient(file.GetPath())
expireDuration := time.Hour * time.Duration(d.SignURLExpire)
sasURL, err := blobClient.GetSASURL(sas.BlobPermissions{Read: true}, time.Now().Add(expireDuration), nil)
if err != nil {
return nil, fmt.Errorf("failed to generate SAS URL: %w", err)
}
return &model.Link{URL: sasURL}, nil
}
// MakeDir creates a virtual directory by uploading an empty blob as a marker.
func (d *AzureBlob) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
dirPath := path.Join(parentDir.GetPath(), dirName)
if err := d.mkDir(ctx, dirPath); err != nil {
return nil, fmt.Errorf("failed to create directory marker: %w", err)
}
return &model.Object{
Path: dirPath,
Name: dirName,
IsFolder: true,
}, nil
}
// Move relocates an object (file or directory) to a new directory.
func (d *AzureBlob) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
srcPath := srcObj.GetPath()
dstPath := path.Join(dstDir.GetPath(), srcObj.GetName())
if err := d.moveOrRename(ctx, srcPath, dstPath, srcObj.IsDir(), srcObj.GetSize()); err != nil {
return nil, fmt.Errorf("move operation failed: %w", err)
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Modified: time.Now(),
IsFolder: srcObj.IsDir(),
Size: srcObj.GetSize(),
}, nil
}
// Rename changes the name of an existing object.
func (d *AzureBlob) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
srcPath := srcObj.GetPath()
dstPath := path.Join(path.Dir(srcPath), newName)
if err := d.moveOrRename(ctx, srcPath, dstPath, srcObj.IsDir(), srcObj.GetSize()); err != nil {
return nil, fmt.Errorf("rename operation failed: %w", err)
}
return &model.Object{
Path: dstPath,
Name: newName,
Modified: time.Now(),
IsFolder: srcObj.IsDir(),
Size: srcObj.GetSize(),
}, nil
}
// Copy duplicates an object (file or directory) to a specified destination directory.
func (d *AzureBlob) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
dstPath := path.Join(dstDir.GetPath(), srcObj.GetName())
// Handle directory copying using flat listing
if srcObj.IsDir() {
srcPrefix := srcObj.GetPath()
srcPrefix = ensureTrailingSlash(srcPrefix)
// Get all blobs under the source directory
blobs, err := d.flattenListBlobs(ctx, srcPrefix)
if err != nil {
return nil, fmt.Errorf("failed to list source directory contents: %w", err)
}
// Process each blob - copy to destination
for _, blob := range blobs {
// Skip the directory marker itself
if *blob.Name == srcPrefix {
continue
}
// Calculate relative path from source
relPath := strings.TrimPrefix(*blob.Name, srcPrefix)
itemDstPath := path.Join(dstPath, relPath)
if strings.HasSuffix(itemDstPath, "/") || (blob.Metadata["hdi_isfolder"] != nil && *blob.Metadata["hdi_isfolder"] == "true") {
// Create directory marker at destination
err := d.mkDir(ctx, itemDstPath)
if err != nil {
return nil, fmt.Errorf("failed to create directory marker [%s]: %w", itemDstPath, err)
}
} else {
// Copy the blob
if err := d.copyFile(ctx, *blob.Name, itemDstPath); err != nil {
return nil, fmt.Errorf("failed to copy %s: %w", *blob.Name, err)
}
}
}
// Create directory marker at destination if needed
if len(blobs) == 0 {
err := d.mkDir(ctx, dstPath)
if err != nil {
return nil, fmt.Errorf("failed to create directory [%s]: %w", dstPath, err)
}
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Modified: time.Now(),
IsFolder: true,
}, nil
}
// Copy a single file
if err := d.copyFile(ctx, srcObj.GetPath(), dstPath); err != nil {
return nil, fmt.Errorf("failed to copy blob: %w", err)
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: time.Now(),
IsFolder: false,
}, nil
}
// Remove deletes a specified blob or recursively deletes a directory and its contents.
func (d *AzureBlob) Remove(ctx context.Context, obj model.Obj) error {
path := obj.GetPath()
// Handle recursive directory deletion
if obj.IsDir() {
return d.deleteFolder(ctx, path)
}
// Delete single file
return d.deleteFile(ctx, path, false)
}
// Put uploads a file stream to Azure Blob Storage with progress tracking.
func (d *AzureBlob) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
blobPath := path.Join(dstDir.GetPath(), stream.GetName())
blobClient := d.containerClient.NewBlockBlobClient(blobPath)
// Determine optimal upload options based on file size
options := optimizedUploadOptions(stream.GetSize())
// Track upload progress
progressTracker := &progressTracker{
total: stream.GetSize(),
updateProgress: up,
}
// Wrap stream to handle context cancellation and progress tracking
limitedStream := driver.NewLimitedUploadStream(ctx, io.TeeReader(stream, progressTracker))
// Upload the stream to Azure Blob Storage
_, err := blobClient.UploadStream(ctx, limitedStream, options)
if err != nil {
return nil, fmt.Errorf("failed to upload file: %w", err)
}
return &model.Object{
Path: blobPath,
Name: stream.GetName(),
Size: stream.GetSize(),
Modified: time.Now(),
IsFolder: false,
}, nil
}
// The following methods related to archive handling are not implemented yet.
// func (d *AzureBlob) GetArchiveMeta(...) {...}
// func (d *AzureBlob) ListArchive(...) {...}
// func (d *AzureBlob) Extract(...) {...}
// func (d *AzureBlob) ArchiveDecompress(...) {...}
// Ensure AzureBlob implements the driver.Driver interface.
var _ driver.Driver = (*AzureBlob)(nil)


@@ -0,0 +1,32 @@
package azure_blob
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
Endpoint string `json:"endpoint" required:"true" default:"https://<accountname>.blob.core.windows.net/" help:"e.g. https://accountname.blob.core.windows.net/. The full endpoint URL for Azure Storage, including the unique storage account name (3 ~ 24 numbers and lowercase letters only)."`
AccessKey string `json:"access_key" required:"true" help:"The access key for Azure Storage, used for authentication. https://learn.microsoft.com/azure/storage/common/storage-account-keys-manage"`
ContainerName string `json:"container_name" required:"true" help:"The name of the container in Azure Storage (created in the Azure portal). https://learn.microsoft.com/azure/storage/blobs/blob-containers-portal"`
SignURLExpire int `json:"sign_url_expire" type:"number" default:"4" help:"The expiration time for SAS URLs, in hours."`
}
// GetRootId implements the GetRootId interface; the container name acts as the root
func (r Addition) GetRootId() string {
return r.ContainerName
}
var config = driver.Config{
Name: "Azure Blob Storage",
LocalSort: true,
CheckStatus: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &AzureBlob{
config: config,
}
})
}


@@ -0,0 +1,20 @@
package azure_blob
import "github.com/alist-org/alist/v3/internal/driver"
// progressTracker is used to track upload progress
type progressTracker struct {
total int64
current int64
updateProgress driver.UpdateProgress
}
// Write implements io.Writer to track progress
func (pt *progressTracker) Write(p []byte) (n int, err error) {
n = len(p)
pt.current += int64(n)
if pt.updateProgress != nil && pt.total > 0 {
pt.updateProgress(float64(pt.current) * 100 / float64(pt.total))
}
return n, nil
}
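A standalone sketch of how this tracker is wired — driver.go's Put does the equivalent with io.TeeReader before handing the stream to UploadStream; the plain func below stands in for driver.UpdateProgress:
package main
import (
	"fmt"
	"io"
	"strings"
)
type progressTracker struct {
	total          int64
	current        int64
	updateProgress func(percentage float64)
}
func (pt *progressTracker) Write(p []byte) (n int, err error) {
	n = len(p)
	pt.current += int64(n)
	if pt.updateProgress != nil && pt.total > 0 {
		pt.updateProgress(float64(pt.current) * 100 / float64(pt.total))
	}
	return n, nil
}
func main() {
	src := strings.NewReader("payload to be uploaded")
	pt := &progressTracker{
		total:          int64(src.Len()),
		updateProgress: func(p float64) { fmt.Printf("progress: %.0f%%\n", p) },
	}
	// TeeReader counts each chunk as the consumer (the uploader) reads it.
	if _, err := io.Copy(io.Discard, io.TeeReader(src, pt)); err != nil {
		fmt.Println("copy failed:", err)
	}
}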

drivers/azure_blob/util.go (new file, 401 lines)

@@ -0,0 +1,401 @@
package azure_blob
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"path"
"sort"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
log "github.com/sirupsen/logrus"
)
const (
// MaxRetries defines the maximum number of retry attempts for Azure operations
MaxRetries = 3
// RetryDelay defines the base delay between retries
RetryDelay = 3 * time.Second
// MaxBatchSize defines the maximum number of operations in a single batch request
MaxBatchSize = 128
)
// extractAccountName extracts the account name from an Azure Storage endpoint
func extractAccountName(endpoint string) string {
// Strip the protocol prefix
endpoint = strings.TrimPrefix(endpoint, "https://")
endpoint = strings.TrimPrefix(endpoint, "http://")
// Take the part before the first dot (the account name)
parts := strings.Split(endpoint, ".")
if len(parts) > 0 {
// to lower case
return strings.ToLower(parts[0])
}
return ""
}
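// For example: extractAccountName("https://mystore.blob.core.windows.net/")
// returns "mystore", and a bare account name like "mystore" passes through
// unchanged — which is what Init's account-name-only endpoint check relies on.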
// isNotFoundError checks if the error is a "not found" type error
func isNotFoundError(err error) bool {
var storageErr *azcore.ResponseError
if errors.As(err, &storageErr) {
return storageErr.StatusCode == 404
}
// Fallback to string matching for backwards compatibility
return err != nil && strings.Contains(err.Error(), "BlobNotFound")
}
// flattenListBlobs lists every blob under the prefix, walking all result pages
func (d *AzureBlob) flattenListBlobs(ctx context.Context, prefix string) ([]container.BlobItem, error) {
// Standardize prefix format
prefix = ensureTrailingSlash(prefix)
var blobItems []container.BlobItem
pager := d.containerClient.NewListBlobsFlatPager(&container.ListBlobsFlatOptions{
Prefix: &prefix,
Include: container.ListBlobsInclude{
Metadata: true,
},
})
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list blobs: %w", err)
}
for _, blob := range page.Segment.BlobItems {
blobItems = append(blobItems, *blob)
}
}
return blobItems, nil
}
// batchDeleteBlobs deletes the given blobs in batches of at most MaxBatchSize
func (d *AzureBlob) batchDeleteBlobs(ctx context.Context, blobPaths []string) error {
if len(blobPaths) == 0 {
return nil
}
// Process in batches of MaxBatchSize
for i := 0; i < len(blobPaths); i += MaxBatchSize {
end := min(i+MaxBatchSize, len(blobPaths))
currentBatch := blobPaths[i:end]
// Create batch builder
batchBuilder, err := d.containerClient.NewBatchBuilder()
if err != nil {
return fmt.Errorf("failed to create batch builder: %w", err)
}
// Add delete operations
for _, blobPath := range currentBatch {
if err := batchBuilder.Delete(blobPath, nil); err != nil {
return fmt.Errorf("failed to add delete operation for %s: %w", blobPath, err)
}
}
// Submit batch
responses, err := d.containerClient.SubmitBatch(ctx, batchBuilder, nil)
if err != nil {
return fmt.Errorf("batch delete request failed: %w", err)
}
// Check responses
for _, resp := range responses.Responses {
if resp.Error != nil && !isNotFoundError(resp.Error) {
// Get the blob name to provide a better error message
blobName := "unknown"
if resp.BlobName != nil {
blobName = *resp.BlobName
}
return fmt.Errorf("failed to delete blob %s: %v", blobName, resp.Error)
}
}
}
return nil
}
// deleteFolder recursively deletes a directory and all its contents
func (d *AzureBlob) deleteFolder(ctx context.Context, prefix string) error {
// Ensure directory path ends with slash
prefix = ensureTrailingSlash(prefix)
// Get all blobs under the directory using flattenListBlobs
globs, err := d.flattenListBlobs(ctx, prefix)
if err != nil {
return fmt.Errorf("failed to list blobs for deletion: %w", err)
}
// If there are blobs in the directory, delete them
if len(globs) > 0 {
// Separate files from directory markers
var filePaths []string
var dirPaths []string
for _, blob := range globs {
blobName := *blob.Name
if isDirectory(blob) {
// remove trailing slash for directory names
dirPaths = append(dirPaths, strings.TrimSuffix(blobName, "/"))
} else {
filePaths = append(filePaths, blobName)
}
}
// Delete files first, then directories
if len(filePaths) > 0 {
if err := d.batchDeleteBlobs(ctx, filePaths); err != nil {
return err
}
}
if len(dirPaths) > 0 {
// Group directories by path depth
depthMap := make(map[int][]string)
for _, dir := range dirPaths {
depth := strings.Count(dir, "/") // compute directory depth
depthMap[depth] = append(depthMap[depth], dir)
}
// Sort depths in descending order
var depths []int
for depth := range depthMap {
depths = append(depths, depth)
}
sort.Sort(sort.Reverse(sort.IntSlice(depths)))
// Batch-delete one depth level at a time, deepest first
for _, depth := range depths {
batch := depthMap[depth]
if err := d.batchDeleteBlobs(ctx, batch); err != nil {
return err
}
}
}
}
// Finally, delete the directory marker itself
return d.deleteEmptyDirectory(ctx, prefix)
}
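// The deepest-first ordering, illustrated with hypothetical paths:
//   directory markers: a/b, a/b/c, a/d  ->  depths 1, 2, 1
//   delete order:      a/b/c first, then {a/b, a/d}, and finally the
//                      prefix marker itself via deleteEmptyDirectory
// so a child marker is always removed before its parent.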
// deleteFile deletes a single file or blob with better error handling
func (d *AzureBlob) deleteFile(ctx context.Context, path string, isDir bool) error {
blobClient := d.containerClient.NewBlobClient(path)
_, err := blobClient.Delete(ctx, nil)
if err != nil && !(isDir && isNotFoundError(err)) {
return err
}
return nil
}
// copyFile copies a single blob from source path to destination path
func (d *AzureBlob) copyFile(ctx context.Context, srcPath, dstPath string) error {
srcBlob := d.containerClient.NewBlobClient(srcPath)
dstBlob := d.containerClient.NewBlobClient(dstPath)
// Use configured expiration time for SAS URL
expireDuration := time.Hour * time.Duration(d.SignURLExpire)
srcURL, err := srcBlob.GetSASURL(sas.BlobPermissions{Read: true}, time.Now().Add(expireDuration), nil)
if err != nil {
return fmt.Errorf("failed to generate source SAS URL: %w", err)
}
_, err = dstBlob.StartCopyFromURL(ctx, srcURL, nil)
return err
}
// createContainerIfNotExists creates the container if it does not already exist
func (d *AzureBlob) createContainerIfNotExists(ctx context.Context, containerName string) error {
serviceClient := d.client.ServiceClient()
containerClient := serviceClient.NewContainerClient(containerName)
var options = service.CreateContainerOptions{}
_, err := containerClient.Create(ctx, &options)
if err != nil {
var responseErr *azcore.ResponseError
if errors.As(err, &responseErr) && responseErr.ErrorCode != "ContainerAlreadyExists" {
return fmt.Errorf("failed to create or access container [%s]: %w", containerName, err)
}
}
d.containerClient = containerClient
return nil
}
// mkDir creates a virtual directory marker by uploading an empty blob with metadata.
func (d *AzureBlob) mkDir(ctx context.Context, fullDirName string) error {
dirPath := ensureTrailingSlash(fullDirName)
blobClient := d.containerClient.NewBlockBlobClient(dirPath)
// Upload an empty blob with metadata indicating it's a directory
_, err := blobClient.Upload(ctx, struct {
*bytes.Reader
io.Closer
}{
Reader: bytes.NewReader([]byte{}),
Closer: io.NopCloser(nil),
}, &blockblob.UploadOptions{
Metadata: map[string]*string{
"hdi_isfolder": to.Ptr("true"),
},
})
return err
}
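// The "hdi_isfolder" metadata key is the same directory-marker convention
// Azure tooling uses (and the one isDirectory below checks for), so folders
// created here round-trip correctly through List, Copy and Move.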
// ensureTrailingSlash ensures the provided path ends with a trailing slash.
func ensureTrailingSlash(path string) string {
if !strings.HasSuffix(path, "/") {
return path + "/"
}
return path
}
// moveOrRename moves or renames blobs or directories from source to destination.
func (d *AzureBlob) moveOrRename(ctx context.Context, srcPath, dstPath string, isDir bool, srcSize int64) error {
if isDir {
// Normalize paths for directory operations
srcPath = ensureTrailingSlash(srcPath)
dstPath = ensureTrailingSlash(dstPath)
// List all blobs under the source directory
blobs, err := d.flattenListBlobs(ctx, srcPath)
if err != nil {
return fmt.Errorf("failed to list blobs: %w", err)
}
// Iterate and copy each blob to the destination
for _, item := range blobs {
srcBlobName := *item.Name
relPath := strings.TrimPrefix(srcBlobName, srcPath)
itemDstPath := path.Join(dstPath, relPath)
if isDirectory(item) {
// Create directory marker at destination
if err := d.mkDir(ctx, itemDstPath); err != nil {
return fmt.Errorf("failed to create directory marker [%s]: %w", itemDstPath, err)
}
} else {
// Copy file blob to destination
if err := d.copyFile(ctx, srcBlobName, itemDstPath); err != nil {
return fmt.Errorf("failed to copy blob [%s]: %w", srcBlobName, err)
}
}
}
// Handle empty directories by creating a marker at destination
if len(blobs) == 0 {
if err := d.mkDir(ctx, dstPath); err != nil {
return fmt.Errorf("failed to create directory [%s]: %w", dstPath, err)
}
}
// Delete source directory and its contents
if err := d.deleteFolder(ctx, srcPath); err != nil {
log.Warnf("failed to delete source directory [%s]: %v\n, and try again", srcPath, err)
// Retry deletion once more and ignore the result
if err := d.deleteFolder(ctx, srcPath); err != nil {
log.Errorf("Retry deletion of source directory [%s] failed: %v", srcPath, err)
}
}
return nil
}
// Single file move or rename operation
if err := d.copyFile(ctx, srcPath, dstPath); err != nil {
return fmt.Errorf("failed to copy file: %w", err)
}
// Delete source file after successful copy
if err := d.deleteFile(ctx, srcPath, false); err != nil {
log.Errorf("Error deleting source file [%s]: %v", srcPath, err)
}
return nil
}
// optimizedUploadOptions returns the optimal upload options based on file size
func optimizedUploadOptions(fileSize int64) *azblob.UploadStreamOptions {
options := &azblob.UploadStreamOptions{
BlockSize: 4 * 1024 * 1024, // 4MB block size
Concurrency: 4, // Default concurrency
}
// For large files, increase block size and concurrency
if fileSize > 256*1024*1024 { // For files larger than 256MB
options.BlockSize = 8 * 1024 * 1024 // 8MB blocks
options.Concurrency = 8 // More concurrent uploads
}
// For very large files (>1GB)
if fileSize > 1024*1024*1024 {
options.BlockSize = 16 * 1024 * 1024 // 16MB blocks
options.Concurrency = 16 // Higher concurrency
}
return options
}
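// Resulting tiers: a 100 MB stream uploads with 4 MB blocks x 4 workers,
// a 500 MB stream with 8 MB x 8, and anything over 1 GiB with 16 MB x 16.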
// isDirectory determines if a blob represents a directory
// Checks multiple indicators: path suffix, metadata, and content type
func isDirectory(blob container.BlobItem) bool {
// Check path suffix
if strings.HasSuffix(*blob.Name, "/") {
return true
}
// Check metadata for directory marker
if blob.Metadata != nil {
if val, ok := blob.Metadata["hdi_isfolder"]; ok && val != nil && *val == "true" {
return true
}
// Azure Storage Explorer and other tools may use different metadata keys
if val, ok := blob.Metadata["is_directory"]; ok && val != nil && strings.ToLower(*val) == "true" {
return true
}
}
// Check content type (some tools mark directories with specific content types)
if blob.Properties != nil && blob.Properties.ContentType != nil {
contentType := strings.ToLower(*blob.Properties.ContentType)
if blob.Properties.ContentLength != nil && *blob.Properties.ContentLength == 0 && (contentType == "application/directory" || contentType == "directory") {
return true
}
}
return false
}
// deleteEmptyDirectory deletes a directory only if it's empty
func (d *AzureBlob) deleteEmptyDirectory(ctx context.Context, dirPath string) error {
// Try deleting the directory marker without the trailing slash first
blobClient := d.containerClient.NewBlobClient(strings.TrimSuffix(dirPath, "/"))
_, err := blobClient.Delete(ctx, nil)
// Also try deleting with trailing slash (for different directory marker formats)
if err != nil && isNotFoundError(err) {
blobClient = d.containerClient.NewBlobClient(dirPath)
_, err = blobClient.Delete(ctx, nil)
}
// Ignore not found errors
if err != nil && isNotFoundError(err) {
log.Infof("Directory [%s] not found during deletion: %v", dirPath, err)
return nil
}
return err
}


@@ -6,13 +6,16 @@ import (
"encoding/hex"
"errors"
"io"
"math"
"net/url"
"os"
stdpath "path"
"strconv"
"time"
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
@@ -76,6 +79,8 @@ func (d *BaiduNetdisk) List(ctx context.Context, dir model.Obj, args model.ListA
func (d *BaiduNetdisk) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if d.DownloadAPI == "crack" {
return d.linkCrack(file, args)
} else if d.DownloadAPI == "crack_video" {
return d.linkCrackVideo(file, args)
}
return d.linkOfficial(file, args)
}
@@ -181,21 +186,35 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
return newObj, nil
}
tempFile, err := stream.CacheFullInTempFile()
var (
cache = stream.GetFile()
tmpF *os.File
err error
)
if _, ok := cache.(io.ReaderAt); !ok {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
defer func() {
_ = tmpF.Close()
_ = os.Remove(tmpF.Name())
}()
cache = tmpF
}
streamSize := stream.GetSize()
sliceSize := d.getSliceSize()
count := int(math.Max(math.Ceil(float64(streamSize)/float64(sliceSize)), 1))
sliceSize := d.getSliceSize(streamSize)
count := int(streamSize / sliceSize)
lastBlockSize := streamSize % sliceSize
if streamSize > 0 && lastBlockSize == 0 {
if lastBlockSize > 0 {
count++
} else {
lastBlockSize = sliceSize
}
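// Example: a 10 MiB stream with 4 MiB slices gives count 2 with a 2 MiB
// remainder, so count becomes 3; an exact 8 MiB stream keeps count 2 with a
// full-size last block.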
//cal md5 for first 256k data
const SliceSize int64 = 256 * 1024
const SliceSize int64 = 256 * utils.KB
// cal md5
blockList := make([]string, 0, count)
byteSize := sliceSize
@@ -203,6 +222,11 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
sliceMd5H := md5.New()
sliceMd5H2 := md5.New()
slicemd5H2Write := utils.LimitWriter(sliceMd5H2, SliceSize)
writers := []io.Writer{fileMd5H, sliceMd5H, slicemd5H2Write}
if tmpF != nil {
writers = append(writers, tmpF)
}
written := int64(0)
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
@@ -211,13 +235,23 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
if i == count {
byteSize = lastBlockSize
}
_, err := utils.CopyWithBufferN(io.MultiWriter(fileMd5H, sliceMd5H, slicemd5H2Write), tempFile, byteSize)
n, err := utils.CopyWithBufferN(io.MultiWriter(writers...), stream, byteSize)
written += n
if err != nil && err != io.EOF {
return nil, err
}
blockList = append(blockList, hex.EncodeToString(sliceMd5H.Sum(nil)))
sliceMd5H.Reset()
}
if tmpF != nil {
if written != streamSize {
return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size= %d, expect = %d ", written, streamSize)
}
_, err = tmpF.Seek(0, io.SeekStart)
if err != nil {
return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0 ")
}
}
contentMd5 := hex.EncodeToString(fileMd5H.Sum(nil))
sliceMd5 := hex.EncodeToString(sliceMd5H2.Sum(nil))
blockListStr, _ := utils.Json.MarshalToString(blockList)
@@ -260,9 +294,10 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
}
// step 2: upload the slices
threadG, upCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread,
retry.Attempts(3),
retry.Attempts(1),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
for i, partseq := range precreateResp.BlockList {
if utils.IsCanceled(upCtx) {
break
@@ -273,6 +308,10 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
byteSize = lastBlockSize
}
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
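// The weighted semaphore caps in-flight slice uploads at 3 even when
// d.uploadThread is configured higher; Acquire also aborts early if the
// upload context is cancelled while waiting.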
params := map[string]string{
"method": "upload",
"access_token": d.AccessToken,
@@ -281,7 +320,8 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
"uploadid": precreateResp.Uploadid,
"partseq": strconv.Itoa(partseq),
}
err := d.uploadSlice(ctx, params, stream.GetName(), io.NewSectionReader(tempFile, offset, byteSize))
err := d.uploadSlice(ctx, params, stream.GetName(),
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(cache, offset, byteSize)))
if err != nil {
return err
}


@@ -10,14 +10,16 @@ type Addition struct {
driver.RootPath
OrderBy string `json:"order_by" type:"select" options:"name,time,size" default:"name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
DownloadAPI string `json:"download_api" type:"select" options:"official,crack" default:"official"`
ClientID string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
DownloadAPI string `json:"download_api" type:"select" options:"official,crack,crack_video" default:"official"`
ClientID string `json:"client_id" required:"true" default:"hq9yQ9w9kR4YHj1kyYafLygVocobh7Sf"`
ClientSecret string `json:"client_secret" required:"true" default:"YH2VpZcFJHYNnV6vLfHQXDBhcE7ZChyE"`
CustomCrackUA string `json:"custom_crack_ua" required:"true" default:"netdisk"`
AccessToken string
UploadThread string `json:"upload_thread" default:"3" help:"1<=thread<=32"`
UploadAPI string `json:"upload_api" default:"https://d.pcs.baidu.com"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
LowBandwithUploadMode bool `json:"low_bandwith_upload_mode" default:"false"`
OnlyListVideoFile bool `json:"only_list_video_file" default:"false"`
}
var config = driver.Config{


@@ -17,7 +17,7 @@ type TokenErrResp struct {
type File struct {
//TkbindId int `json:"tkbind_id"`
//OwnerType int `json:"owner_type"`
//Category int `json:"category"`
Category int `json:"category"`
//RealCategory string `json:"real_category"`
FsId int64 `json:"fs_id"`
//OperId int `json:"oper_id"`


@@ -79,6 +79,12 @@ func (d *BaiduNetdisk) request(furl string, method string, callback base.ReqCall
return retry.Unrecoverable(err2)
}
}
if 31023 == errno && d.DownloadAPI == "crack_video" {
result = res.Body()
return nil
}
return fmt.Errorf("req: [%s] ,errno: %d, refer to https://pan.baidu.com/union/doc/", furl, errno)
}
result = res.Body()
@@ -131,12 +137,21 @@ func (d *BaiduNetdisk) getFiles(dir string) ([]File, error) {
if len(resp.List) == 0 {
break
}
if d.OnlyListVideoFile {
for _, file := range resp.List {
if file.Isdir == 1 || file.Category == 1 {
res = append(res, file)
}
}
} else {
res = append(res, resp.List...)
}
}
return res, nil
}
func (d *BaiduNetdisk) linkOfficial(file model.Obj, args model.LinkArgs) (*model.Link, error) {
func (d *BaiduNetdisk) linkOfficial(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
var resp DownloadResp
params := map[string]string{
"method": "filemetas",
@@ -164,7 +179,7 @@ func (d *BaiduNetdisk) linkOfficial(file model.Obj, args model.LinkArgs) (*model
}, nil
}
func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Link, error) {
func (d *BaiduNetdisk) linkCrack(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
var resp DownloadResp2
param := map[string]string{
"target": fmt.Sprintf("[\"%s\"]", file.GetPath()),
@@ -187,6 +202,34 @@ func (d *BaiduNetdisk) linkCrack(file model.Obj, args model.LinkArgs) (*model.Li
}, nil
}
func (d *BaiduNetdisk) linkCrackVideo(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
param := map[string]string{
"type": "VideoURL",
"path": fmt.Sprintf("%s", file.GetPath()),
"fs_id": file.GetID(),
"devuid": "0%1",
"clienttype": "1",
"channel": "android_15_25010PN30C_bd-netdisk_1523a",
"nom3u8": "1",
"dlink": "1",
"media": "1",
"origin": "dlna",
}
resp, err := d.request("https://pan.baidu.com/api/mediainfo", http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(param)
}, nil)
if err != nil {
return nil, err
}
return &model.Link{
URL: utils.Json.Get(resp, "info", "dlink").ToString(),
Header: http.Header{
"User-Agent": []string{d.CustomCrackUA},
},
}, nil
}
func (d *BaiduNetdisk) manage(opera string, filelist any) ([]byte, error) {
params := map[string]string{
"method": "filemanager",
@@ -230,22 +273,72 @@ func joinTime(form map[string]string, ctime, mtime int64) {
const (
DefaultSliceSize int64 = 4 * utils.MB
VipSliceSize = 16 * utils.MB
SVipSliceSize = 32 * utils.MB
VipSliceSize int64 = 16 * utils.MB
SVipSliceSize int64 = 32 * utils.MB
MaxSliceNum = 2048 // the docs say 1024 (or nothing at all), but testing shows 2048
SliceStep int64 = 1 * utils.MB
)
func (d *BaiduNetdisk) getSliceSize() int64 {
func (d *BaiduNetdisk) getSliceSize(filesize int64) int64 {
// Fixed at 4MB for non-VIP users
if d.vipType == 0 {
if d.CustomUploadPartSize != 0 {
return d.CustomUploadPartSize
log.Warnf("CustomUploadPartSize is not supported for non-vip user, use DefaultSliceSize")
}
switch d.vipType {
case 1:
return VipSliceSize
case 2:
return SVipSliceSize
default:
if filesize > MaxSliceNum*DefaultSliceSize {
log.Warnf("File size(%d) is too large, may cause upload failure", filesize)
}
return DefaultSliceSize
}
if d.CustomUploadPartSize != 0 {
if d.CustomUploadPartSize < DefaultSliceSize {
log.Warnf("CustomUploadPartSize(%d) is less than DefaultSliceSize(%d), use DefaultSliceSize", d.CustomUploadPartSize, DefaultSliceSize)
return DefaultSliceSize
}
if d.vipType == 1 && d.CustomUploadPartSize > VipSliceSize {
log.Warnf("CustomUploadPartSize(%d) is greater than VipSliceSize(%d), use VipSliceSize", d.CustomUploadPartSize, VipSliceSize)
return VipSliceSize
}
if d.vipType == 2 && d.CustomUploadPartSize > SVipSliceSize {
log.Warnf("CustomUploadPartSize(%d) is greater than SVipSliceSize(%d), use SVipSliceSize", d.CustomUploadPartSize, SVipSliceSize)
return SVipSliceSize
}
return d.CustomUploadPartSize
}
maxSliceSize := DefaultSliceSize
switch d.vipType {
case 1:
maxSliceSize = VipSliceSize
case 2:
maxSliceSize = SVipSliceSize
}
// upload on low bandwidth
if d.LowBandwithUploadMode {
size := DefaultSliceSize
for size <= maxSliceSize {
if filesize <= MaxSliceNum*size {
return size
}
size += SliceStep
}
}
if filesize > MaxSliceNum*maxSliceSize {
log.Warnf("File size(%d) is too large, may cause upload failure", filesize)
}
return maxSliceSize
}
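// Worked example for the low-bandwidth path (SVIP account, 10 GiB file —
// numbers hypothetical):
//   4 MiB slices -> 2048 x 4 MiB = 8 GiB, too small for the file
//   5 MiB slices -> 2048 x 5 MiB = 10 GiB, fits
// so the loop settles on 5 MiB, the smallest slice size that keeps the part
// count within MaxSliceNum.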
// func encodeURIComponent(str string) string {


@@ -7,13 +7,16 @@ import (
"errors"
"fmt"
"io"
"math"
"os"
"regexp"
"strconv"
"strings"
"time"
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
@@ -239,21 +242,33 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
// TODO:
// No rapid-upload (instant transfer) method has been found yet
// Getting the full-file md5 requires a source that supports io.Seek
tempFile, err := stream.CacheFullInTempFile()
var (
cache = stream.GetFile()
tmpF *os.File
err error
)
if _, ok := cache.(io.ReaderAt); !ok {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
defer func() {
_ = tmpF.Close()
_ = os.Remove(tmpF.Name())
}()
cache = tmpF
}
const DEFAULT int64 = 1 << 22
const SliceSize int64 = 1 << 18
// Compute the required metadata
streamSize := stream.GetSize()
count := int(math.Ceil(float64(streamSize) / float64(DEFAULT)))
count := int(streamSize / DEFAULT)
lastBlockSize := streamSize % DEFAULT
if lastBlockSize == 0 {
if lastBlockSize > 0 {
count++
} else {
lastBlockSize = DEFAULT
}
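// Same chunking arithmetic as the baidu_netdisk driver, with a fixed 4 MiB
// chunk (DEFAULT = 1 << 22): e.g. a 9 MiB stream yields count 3 with a
// 1 MiB final block.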
@@ -264,6 +279,11 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
sliceMd5H := md5.New()
sliceMd5H2 := md5.New()
slicemd5H2Write := utils.LimitWriter(sliceMd5H2, SliceSize)
writers := []io.Writer{fileMd5H, sliceMd5H, slicemd5H2Write}
if tmpF != nil {
writers = append(writers, tmpF)
}
written := int64(0)
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
return nil, ctx.Err()
@@ -271,13 +291,23 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
if i == count {
byteSize = lastBlockSize
}
_, err := utils.CopyWithBufferN(io.MultiWriter(fileMd5H, sliceMd5H, slicemd5H2Write), tempFile, byteSize)
n, err := utils.CopyWithBufferN(io.MultiWriter(writers...), stream, byteSize)
written += n
if err != nil && err != io.EOF {
return nil, err
}
sliceMD5List = append(sliceMD5List, hex.EncodeToString(sliceMd5H.Sum(nil)))
sliceMd5H.Reset()
}
if tmpF != nil {
if written != streamSize {
return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size= %d, expect = %d ", written, streamSize)
}
_, err = tmpF.Seek(0, io.SeekStart)
if err != nil {
return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0 ")
}
}
contentMd5 := hex.EncodeToString(fileMd5H.Sum(nil))
sliceMd5 := hex.EncodeToString(sliceMd5H2.Sum(nil))
blockListStr, _ := utils.Json.MarshalToString(sliceMD5List)
@@ -289,7 +319,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
"rtype": "1",
"ctype": "11",
"path": fmt.Sprintf("/%s", stream.GetName()),
"size": fmt.Sprint(stream.GetSize()),
"size": fmt.Sprint(streamSize),
"slice-md5": sliceMd5,
"content-md5": contentMd5,
"block_list": blockListStr,
@@ -314,6 +344,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
retry.Attempts(3),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
for i, partseq := range precreateResp.BlockList {
if utils.IsCanceled(upCtx) {
break
@@ -325,6 +356,10 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
}
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
uploadParams := map[string]string{
"method": "upload",
"path": params["path"],
@@ -335,7 +370,8 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
_, err = d.Post("https://c3.pcs.baidu.com/rest/2.0/pcs/superfile2", func(r *resty.Request) {
r.SetContext(ctx)
r.SetQueryParams(uploadParams)
r.SetFileReader("file", stream.GetName(), io.NewSectionReader(tempFile, offset, byteSize))
r.SetFileReader("file", stream.GetName(),
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(cache, offset, byteSize)))
}, nil)
if err != nil {
return err


@@ -6,6 +6,7 @@ import (
"time"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/net"
"github.com/go-resty/resty/v2"
)
@@ -26,7 +27,7 @@ func InitClient() {
NoRedirectClient.SetHeader("user-agent", UserAgent)
RestyClient = NewRestyClient()
HttpClient = NewHttpClient()
HttpClient = net.NewHttpClient()
}
func NewRestyClient() *resty.Client {
@@ -38,13 +39,3 @@ func NewRestyClient() *resty.Client {
SetTLSClientConfig(&tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify})
return client
}
func NewHttpClient() *http.Client {
return &http.Client{
Timeout: time.Hour * 48,
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
TLSClientConfig: &tls.Config{InsecureSkipVerify: conf.Conf.TlsInsecureSkipVerify},
},
}
}


@@ -215,7 +215,7 @@ func (d *ChaoXing) Remove(ctx context.Context, obj model.Obj) error {
return nil
}
func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
var resp UploadDataRsp
_, err := d.request("https://noteyd.chaoxing.com/pc/files/getUploadConfig", http.MethodGet, func(req *resty.Request) {
}, &resp)
@@ -227,11 +227,11 @@ func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
}
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
filePart, err := writer.CreateFormFile("file", stream.GetName())
filePart, err := writer.CreateFormFile("file", file.GetName())
if err != nil {
return err
}
_, err = utils.CopyWithBuffer(filePart, stream)
_, err = utils.CopyWithBuffer(filePart, file)
if err != nil {
return err
}
@@ -248,7 +248,14 @@ func (d *ChaoXing) Put(ctx context.Context, dstDir model.Obj, stream model.FileS
if err != nil {
return err
}
req, err := http.NewRequest("POST", "https://pan-yz.chaoxing.com/upload", body)
r := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: &driver.SimpleReaderWithSize{
Reader: body,
Size: int64(body.Len()),
},
UpdateProgress: up,
})
req, err := http.NewRequestWithContext(ctx, "POST", "https://pan-yz.chaoxing.com/upload", r)
if err != nil {
return err
}
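
The new request body is wrapped twice: driver.ReaderUpdatingProgress reports progress as bytes are consumed, and driver.NewLimitedUploadStream adds rate limiting and cancellation. The progress half can be pictured as a reader that counts what it forwards; a simplified sketch, not the driver package's actual implementation:

package main

import (
	"fmt"
	"io"
	"strings"
)

// progressReader is a simplified stand-in for driver.ReaderUpdatingProgress
type progressReader struct {
	r     io.Reader
	total int64 // total payload size, known up front
	read  int64
	up    func(percent float64) // plays the role of driver.UpdateProgress
}

func (p *progressReader) Read(b []byte) (int, error) {
	n, err := p.r.Read(b)
	p.read += int64(n)
	if p.total > 0 {
		p.up(float64(p.read) * 100 / float64(p.total))
	}
	return n, err
}

func main() {
	body := "multipart form bytes"
	pr := &progressReader{r: strings.NewReader(body), total: int64(len(body)),
		up: func(pct float64) { fmt.Printf("%.0f%%\n", pct) }}
	_, _ = io.Copy(io.Discard, pr)
}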

View File

@@ -5,7 +5,6 @@ import (
"io"
"net/http"
"path"
"strconv"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
@@ -19,6 +18,7 @@ import (
type Cloudreve struct {
model.Storage
Addition
ref *Cloudreve
}
func (d *Cloudreve) Config() driver.Config {
@@ -38,8 +38,18 @@ func (d *Cloudreve) Init(ctx context.Context) error {
return d.login()
}
func (d *Cloudreve) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*Cloudreve)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *Cloudreve) Drop(ctx context.Context) error {
d.Cookie = ""
d.ref = nil
return nil
}
@@ -147,7 +157,7 @@ func (d *Cloudreve) Put(ctx context.Context, dstDir model.Obj, stream model.File
"size": stream.GetSize(),
"name": stream.GetName(),
"policy_id": r.Policy.Id,
"last_modified": stream.ModTime().Unix(),
"last_modified": stream.ModTime().UnixMilli(),
}
// get the upload session info
@@ -163,42 +173,18 @@ func (d *Cloudreve) Put(ctx context.Context, dstDir model.Obj, stream model.File
switch r.Policy.Type {
case "onedrive":
err = d.upOneDrive(ctx, stream, u, up)
case "s3":
err = d.upS3(ctx, stream, u, up)
case "remote": // 从机存储
err = d.upRemote(ctx, stream, u, up)
case "local": // 本机存储
var chunkSize = u.ChunkSize
var buf []byte
var chunk int
for {
var n int
buf = make([]byte, chunkSize)
n, err = io.ReadAtLeast(stream, buf, chunkSize)
if err != nil && err != io.ErrUnexpectedEOF {
if err == io.EOF {
return nil
}
return err
}
if n == 0 {
break
}
buf = buf[:n]
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetHeader("Content-Length", strconv.Itoa(n))
req.SetBody(buf)
}, nil)
if err != nil {
break
}
chunk++
}
err = d.upLocal(ctx, stream, u, up)
default:
err = errs.NotImplement
}
if err != nil {
// delete the failed session
err = d.request(http.MethodDelete, "/file/upload/"+u.SessionID, nil, nil)
_ = d.request(http.MethodDelete, "/file/upload/"+u.SessionID, nil, nil)
return err
}
return nil
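
Note the subtle fix above: the removed line assigned the cleanup DELETE's result back into err, so a successful cleanup returned nil and silently swallowed the real upload failure. Discarding the cleanup result keeps the original error; the pattern in isolation (uploadErr is an illustrative name):

if uploadErr != nil {
	// best-effort cleanup: ignore the DELETE's own result so the
	// upload error, not the cleanup result, is what the caller sees
	_ = d.request(http.MethodDelete, "/file/upload/"+u.SessionID, nil, nil)
	return uploadErr
}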

View File

@@ -25,7 +25,8 @@ type UploadInfo struct {
ChunkSize int `json:"chunkSize"`
Expires int `json:"expires"`
UploadURLs []string `json:"uploadURLs"`
Credential string `json:"credential,omitempty"`
Credential string `json:"credential,omitempty"` // local
CompleteURL string `json:"completeURL,omitempty"` // s3
}
type DirectoryResp struct {

View File

@@ -4,12 +4,14 @@ import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
@@ -19,7 +21,6 @@ import (
"github.com/alist-org/alist/v3/pkg/cookie"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
json "github.com/json-iterator/go"
jsoniter "github.com/json-iterator/go"
)
@@ -27,17 +28,23 @@ import (
const loginPath = "/user/session"
func (d *Cloudreve) request(method string, path string, callback base.ReqCallback, out interface{}) error {
u := d.Address + "/api/v3" + path
ua := d.CustomUA
if ua == "" {
ua = base.UserAgent
func (d *Cloudreve) getUA() string {
if d.CustomUA != "" {
return d.CustomUA
}
return base.UserAgent
}
func (d *Cloudreve) request(method string, path string, callback base.ReqCallback, out interface{}) error {
if d.ref != nil {
return d.ref.request(method, path, callback, out)
}
u := d.Address + "/api/v3" + path
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Cookie": "cloudreve-session=" + d.Cookie,
"Accept": "application/json, text/plain, */*",
"User-Agent": ua,
"User-Agent": d.getUA(),
})
var r Resp
@@ -76,11 +83,11 @@ func (d *Cloudreve) request(method string, path string, callback base.ReqCallbac
}
if out != nil && r.Data != nil {
var marshal []byte
marshal, err = json.Marshal(r.Data)
marshal, err = jsoniter.Marshal(r.Data)
if err != nil {
return err
}
err = json.Unmarshal(marshal, out)
err = jsoniter.Unmarshal(marshal, out)
if err != nil {
return err
}
@@ -100,7 +107,7 @@ func (d *Cloudreve) login() error {
if err == nil {
break
}
if err != nil && err.Error() != "CAPTCHA not match." {
if err.Error() != "CAPTCHA not match." {
break
}
}
@@ -161,15 +168,11 @@ func (d *Cloudreve) GetThumb(file Object) (model.Thumbnail, error) {
if !d.Addition.EnableThumbAndFolderSize {
return model.Thumbnail{}, nil
}
ua := d.CustomUA
if ua == "" {
ua = base.UserAgent
}
req := base.NoRedirectClient.R()
req.SetHeaders(map[string]string{
"Cookie": "cloudreve-session=" + d.Cookie,
"Accept": "image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8",
"User-Agent": ua,
"User-Agent": d.getUA(),
})
resp, err := req.Execute(http.MethodGet, d.Address+"/api/v3/file/thumb/"+file.Id)
if err != nil {
@@ -180,9 +183,7 @@ func (d *Cloudreve) GetThumb(file Object) (model.Thumbnail, error) {
}, nil
}
func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
uploadUrl := u.UploadURLs[0]
credential := u.Credential
func (d *Cloudreve) upLocal(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
@@ -190,33 +191,117 @@ func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u U
if utils.IsCanceled(ctx) {
return ctx.Err()
}
utils.Log.Debugf("[Cloudreve-Remote] upload: %d", finish)
var byteSize = DEFAULT
left := stream.GetSize() - finish
if left < DEFAULT {
byteSize = left
}
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-Local] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk), bytes.NewBuffer(byteData))
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.SetHeader("Content-Length", strconv.FormatInt(byteSize, 10))
req.SetHeader("User-Agent", d.getUA())
req.SetBody(driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
req.AddRetryCondition(func(r *resty.Response, err error) bool {
if err != nil {
return true
}
if r.IsError() {
return true
}
var retryResp Resp
jErr := base.RestyClient.JSONUnmarshal(r.Body(), &retryResp)
if jErr != nil {
return true
}
if retryResp.Code != 0 {
return true
}
return false
})
}, nil)
if err != nil {
return err
}
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
return nil
}
func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
uploadUrl := u.UploadURLs[0]
credential := u.Credential
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := stream.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk),
driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Authorization", fmt.Sprint(credential))
finish += byteSize
req.Header.Set("User-Agent", d.getUA())
err = func() error {
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
res.Body.Close()
defer res.Body.Close()
if res.StatusCode != 200 {
return errors.New(res.Status)
}
body, err := io.ReadAll(res.Body)
if err != nil {
return err
}
var up Resp
err = json.Unmarshal(body, &up)
if err != nil {
return err
}
if up.Code != 0 {
return errors.New(up.Msg)
}
return nil
}()
if err == nil {
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
} else {
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors, error: %s", maxRetries, err)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-Remote] server errors while uploading, retrying after %v...", backoff)
time.Sleep(backoff)
}
}
return nil
}
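
Both retry branches use the same exponential backoff: the counter is incremented before the wait is computed, so time.Duration(1<<retryCount)*time.Second yields 2s, 4s, and 8s across the three permitted retries. A tiny standalone sketch of the schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// the waits used between the three permitted retries above: 2s 4s 8s
	for retryCount := 1; retryCount <= 3; retryCount++ {
		backoff := time.Duration(1<<retryCount) * time.Second
		fmt.Println(backoff)
	}
}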
@@ -225,47 +310,151 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
uploadUrl := u.UploadURLs[0]
var finish int64 = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
utils.Log.Debugf("[Cloudreve-OneDrive] upload: %d", finish)
var byteSize = DEFAULT
left := stream.GetSize() - finish
if left < DEFAULT {
byteSize = left
}
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", uploadUrl, bytes.NewBuffer(byteData))
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, stream.GetSize()))
req.Header.Set("User-Agent", d.getUA())
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
if res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200 {
switch {
case res.StatusCode >= 500 && res.StatusCode <= 504:
retryCount++
if retryCount > maxRetries {
res.Body.Close()
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-OneDrive] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
data, _ := io.ReadAll(res.Body)
res.Body.Close()
return errors.New(string(data))
}
default:
res.Body.Close()
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
}
}
// send the callback request after a successful upload
err := d.request(http.MethodPost, "/callback/onedrive/finish/"+u.SessionID, func(req *resty.Request) {
return d.request(http.MethodPost, "/callback/onedrive/finish/"+u.SessionID, func(req *resty.Request) {
req.SetBody("{}")
}, nil)
}
func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
var etags []string
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := stream.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", u.UploadURLs[chunk],
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
etag := res.Header.Get("ETag")
res.Body.Close()
switch {
case res.StatusCode != 200:
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-S3] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case etag == "":
return errors.New("faild to get ETag from header")
default:
retryCount = 0
etags = append(etags, etag)
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
}
// s3LikeFinishUpload
// https://github.com/cloudreve/frontend/blob/b485bf297974cbe4834d2e8e744ae7b7e5b2ad39/src/component/Uploader/core/api/index.ts#L204-L252
bodyBuilder := &strings.Builder{}
bodyBuilder.WriteString("<CompleteMultipartUpload>")
for i, etag := range etags {
bodyBuilder.WriteString(fmt.Sprintf(
`<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>`,
i+1, // PartNumber starts from 1
etag,
))
}
bodyBuilder.WriteString("</CompleteMultipartUpload>")
req, err := http.NewRequest(
"POST",
u.CompleteURL,
strings.NewReader(bodyBuilder.String()),
)
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/xml")
req.Header.Set("User-Agent", d.getUA())
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
body, _ := io.ReadAll(res.Body)
return fmt.Errorf("up status: %d, error: %s", res.StatusCode, string(body))
}
// 上传成功发送回调请求
err = d.request(http.MethodGet, "/callback/s3/"+u.SessionID, nil, nil)
if err != nil {
return err
}
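
The completion body above is hand-assembled XML for S3's CompleteMultipartUpload call. An equivalent sketch with encoding/xml — shown only as an alternative to string building; the struct names are illustrative:

package main

import (
	"encoding/xml"
	"fmt"
)

type completedPart struct {
	PartNumber int    `xml:"PartNumber"`
	ETag       string `xml:"ETag"`
}

type completeMultipartUpload struct {
	XMLName xml.Name        `xml:"CompleteMultipartUpload"`
	Parts   []completedPart `xml:"Part"`
}

func main() {
	payload := completeMultipartUpload{}
	for i, etag := range []string{"etag-a", "etag-b"} {
		// S3 part numbers start at 1
		payload.Parts = append(payload.Parts, completedPart{PartNumber: i + 1, ETag: etag})
	}
	out, _ := xml.Marshal(payload)
	fmt.Println(string(out))
}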

View File

@@ -0,0 +1,305 @@
package cloudreve_v4
import (
"context"
"errors"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type CloudreveV4 struct {
model.Storage
Addition
ref *CloudreveV4
}
func (d *CloudreveV4) Config() driver.Config {
if d.ref != nil {
return d.ref.Config()
}
if d.EnableVersionUpload {
config.NoOverwriteUpload = false
}
return config
}
func (d *CloudreveV4) GetAddition() driver.Additional {
return &d.Addition
}
func (d *CloudreveV4) Init(ctx context.Context) error {
// removing trailing slash
d.Address = strings.TrimSuffix(d.Address, "/")
op.MustSaveDriverStorage(d)
if d.ref != nil {
return nil
}
if d.AccessToken == "" && d.RefreshToken != "" {
return d.refreshToken()
}
if d.Username != "" {
return d.login()
}
return nil
}
func (d *CloudreveV4) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*CloudreveV4)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *CloudreveV4) Drop(ctx context.Context) error {
d.ref = nil
return nil
}
func (d *CloudreveV4) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
const pageSize int = 100
var f []File
var r FileResp
params := map[string]string{
"page_size": strconv.Itoa(pageSize),
"uri": dir.GetPath(),
"order_by": d.OrderBy,
"order_direction": d.OrderDirection,
"page": "0",
}
for {
err := d.request(http.MethodGet, "/file", func(req *resty.Request) {
req.SetQueryParams(params)
}, &r)
if err != nil {
return nil, err
}
f = append(f, r.Files...)
if r.Pagination.NextToken == "" || len(r.Files) < pageSize {
break
}
params["next_page_token"] = r.Pagination.NextToken
}
return utils.SliceConvert(f, func(src File) (model.Obj, error) {
if d.EnableFolderSize && src.Type == 1 {
var ds FolderSummaryResp
err := d.request(http.MethodGet, "/file/info", func(req *resty.Request) {
req.SetQueryParam("uri", src.Path)
req.SetQueryParam("folder_summary", "true")
}, &ds)
if err == nil && ds.FolderSummary.Size > 0 {
src.Size = ds.FolderSummary.Size
}
}
var thumb model.Thumbnail
if d.EnableThumb && src.Type == 0 {
var t FileThumbResp
err := d.request(http.MethodGet, "/file/thumb", func(req *resty.Request) {
req.SetQueryParam("uri", src.Path)
}, &t)
if err == nil && t.URL != "" {
thumb = model.Thumbnail{
Thumbnail: t.URL,
}
}
}
return &model.ObjThumb{
Object: model.Object{
ID: src.ID,
Path: src.Path,
Name: src.Name,
Size: src.Size,
Modified: src.UpdatedAt,
Ctime: src.CreatedAt,
IsFolder: src.Type == 1,
},
Thumbnail: thumb,
}, nil
})
}
func (d *CloudreveV4) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var url FileUrlResp
err := d.request(http.MethodPost, "/file/url", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{file.GetPath()},
"download": true,
})
}, &url)
if err != nil {
return nil, err
}
if len(url.Urls) == 0 {
return nil, errors.New("server returns no url")
}
exp := time.Until(url.Expires)
return &model.Link{
URL: url.Urls[0].URL,
Expiration: &exp,
}, nil
}
func (d *CloudreveV4) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(base.Json{
"type": "folder",
"uri": parentDir.GetPath() + "/" + dirName,
"error_on_conflict": true,
})
}, nil)
}
func (d *CloudreveV4) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.request(http.MethodPost, "/file/move", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{srcObj.GetPath()},
"dst": dstDir.GetPath(),
"copy": false,
})
}, nil)
}
func (d *CloudreveV4) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(base.Json{
"new_name": newName,
"uri": srcObj.GetPath(),
})
}, nil)
}
func (d *CloudreveV4) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.request(http.MethodPost, "/file/move", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{srcObj.GetPath()},
"dst": dstDir.GetPath(),
"copy": true,
})
}, nil)
}
func (d *CloudreveV4) Remove(ctx context.Context, obj model.Obj) error {
return d.request(http.MethodDelete, "/file", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{obj.GetPath()},
"unlink": false,
"skip_soft_delete": true,
})
}, nil)
}
func (d *CloudreveV4) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
if file.GetSize() == 0 {
// empty files use the create-file API instead, avoiding a hung upload lock
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(base.Json{
"type": "file",
"uri": dstDir.GetPath() + "/" + file.GetName(),
"error_on_conflict": true,
})
}, nil)
}
var p StoragePolicy
var r FileResp
var u FileUploadResp
var err error
params := map[string]string{
"page_size": "10",
"uri": dstDir.GetPath(),
"order_by": "created_at",
"order_direction": "asc",
"page": "0",
}
err = d.request(http.MethodGet, "/file", func(req *resty.Request) {
req.SetQueryParams(params)
}, &r)
if err != nil {
return err
}
p = r.StoragePolicy
body := base.Json{
"uri": dstDir.GetPath() + "/" + file.GetName(),
"size": file.GetSize(),
"policy_id": p.ID,
"last_modified": file.ModTime().UnixMilli(),
"mime_type": "",
}
if d.EnableVersionUpload {
body["entity_type"] = "version"
}
err = d.request(http.MethodPut, "/file/upload", func(req *resty.Request) {
req.SetBody(body)
}, &u)
if err != nil {
return err
}
if u.StoragePolicy.Relay {
err = d.upLocal(ctx, file, u, up)
} else {
switch u.StoragePolicy.Type {
case "local":
err = d.upLocal(ctx, file, u, up)
case "remote":
err = d.upRemote(ctx, file, u, up)
case "onedrive":
err = d.upOneDrive(ctx, file, u, up)
case "s3":
err = d.upS3(ctx, file, u, up)
default:
return errs.NotImplement
}
}
if err != nil {
// delete the failed session
_ = d.request(http.MethodDelete, "/file/upload", func(req *resty.Request) {
req.SetBody(base.Json{
"id": u.SessionID,
"uri": u.URI,
})
}, nil)
return err
}
return nil
}
func (d *CloudreveV4) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *CloudreveV4) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *CloudreveV4) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *CloudreveV4) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *CloudreveV4) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*CloudreveV4)(nil)
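
The closing var _ driver.Driver = (*CloudreveV4)(nil) line is a compile-time interface assertion: it costs nothing at runtime but fails the build if the type ever stops implementing driver.Driver. The idiom in isolation:

package main

type greeter interface{ Greet() string }

type english struct{}

func (english) Greet() string { return "hello" }

// compile-time check: delete Greet above and this line breaks the build
var _ greeter = english{}

func main() {}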

View File

@@ -0,0 +1,44 @@
package cloudreve_v4
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootPath
// driver.RootID
// define other
Address string `json:"address" required:"true"`
Username string `json:"username"`
Password string `json:"password"`
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
CustomUA string `json:"custom_ua"`
EnableFolderSize bool `json:"enable_folder_size"`
EnableThumb bool `json:"enable_thumb"`
EnableVersionUpload bool `json:"enable_version_upload"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at" default:"name" required:"true"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc" required:"true"`
}
var config = driver.Config{
Name: "Cloudreve V4",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "cloudreve://my",
CheckStatus: true,
Alert: "",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &CloudreveV4{}
})
}
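
The Addition fields carry tags such as type:"select" and options:"asc,desc", which presumably drive alist's storage configuration form. Struct tags like these are read via reflection; a minimal, self-contained illustration (not alist's actual form renderer):

package main

import (
	"fmt"
	"reflect"
)

type addition struct {
	OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
}

func main() {
	f, _ := reflect.TypeOf(addition{}).FieldByName("OrderDirection")
	// prints: select asc,desc asc
	fmt.Println(f.Tag.Get("type"), f.Tag.Get("options"), f.Tag.Get("default"))
}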

View File

@@ -0,0 +1,164 @@
package cloudreve_v4
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type Object struct {
model.Object
StoragePolicy StoragePolicy
}
type Resp struct {
Code int `json:"code"`
Msg string `json:"msg"`
Data any `json:"data"`
}
type BasicConfigResp struct {
InstanceID string `json:"instance_id"`
// Title string `json:"title"`
// Themes string `json:"themes"`
// DefaultTheme string `json:"default_theme"`
User struct {
ID string `json:"id"`
// Nickname string `json:"nickname"`
// CreatedAt time.Time `json:"created_at"`
// Anonymous bool `json:"anonymous"`
Group struct {
ID string `json:"id"`
Name string `json:"name"`
Permission string `json:"permission"`
} `json:"group"`
} `json:"user"`
// Logo string `json:"logo"`
// LogoLight string `json:"logo_light"`
// CaptchaReCaptchaKey string `json:"captcha_ReCaptchaKey"`
CaptchaType string `json:"captcha_type"` // support 'normal' only
// AppPromotion bool `json:"app_promotion"`
}
type SiteLoginConfigResp struct {
LoginCaptcha bool `json:"login_captcha"`
Authn bool `json:"authn"`
}
type PrepareLoginResp struct {
WebauthnEnabled bool `json:"webauthn_enabled"`
PasswordEnabled bool `json:"password_enabled"`
}
type CaptchaResp struct {
Image string `json:"image"`
Ticket string `json:"ticket"`
}
type Token struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
AccessExpires time.Time `json:"access_expires"`
RefreshExpires time.Time `json:"refresh_expires"`
}
type TokenResponse struct {
User struct {
ID string `json:"id"`
// Email string `json:"email"`
// Nickname string `json:"nickname"`
Status string `json:"status"`
// CreatedAt time.Time `json:"created_at"`
Group struct {
ID string `json:"id"`
Name string `json:"name"`
Permission string `json:"permission"`
// DirectLinkBatchSize int `json:"direct_link_batch_size"`
// TrashRetention int `json:"trash_retention"`
} `json:"group"`
// Language string `json:"language"`
} `json:"user"`
Token Token `json:"token"`
}
type File struct {
Type int `json:"type"` // 0: file, 1: folder
ID string `json:"id"`
Name string `json:"name"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
Size int64 `json:"size"`
Metadata interface{} `json:"metadata"`
Path string `json:"path"`
Capability string `json:"capability"`
Owned bool `json:"owned"`
PrimaryEntity string `json:"primary_entity"`
}
type StoragePolicy struct {
ID string `json:"id"`
Name string `json:"name"`
Type string `json:"type"`
MaxSize int64 `json:"max_size"`
Relay bool `json:"relay,omitempty"`
}
type Pagination struct {
Page int `json:"page"`
PageSize int `json:"page_size"`
IsCursor bool `json:"is_cursor"`
NextToken string `json:"next_token,omitempty"`
}
type Props struct {
Capability string `json:"capability"`
MaxPageSize int `json:"max_page_size"`
OrderByOptions []string `json:"order_by_options"`
OrderDirectionOptions []string `json:"order_direction_options"`
}
type FileResp struct {
Files []File `json:"files"`
Parent File `json:"parent"`
Pagination Pagination `json:"pagination"`
Props Props `json:"props"`
ContextHint string `json:"context_hint"`
MixedType bool `json:"mixed_type"`
StoragePolicy StoragePolicy `json:"storage_policy"`
}
type FileUrlResp struct {
Urls []struct {
URL string `json:"url"`
} `json:"urls"`
Expires time.Time `json:"expires"`
}
type FileUploadResp struct {
// UploadID string `json:"upload_id"`
SessionID string `json:"session_id"`
ChunkSize int64 `json:"chunk_size"`
Expires int64 `json:"expires"`
StoragePolicy StoragePolicy `json:"storage_policy"`
URI string `json:"uri"`
CompleteURL string `json:"completeURL,omitempty"` // for S3-like
CallbackSecret string `json:"callback_secret,omitempty"` // for S3-like, OneDrive
UploadUrls []string `json:"upload_urls,omitempty"` // for not-local
Credential string `json:"credential,omitempty"` // for local
}
type FileThumbResp struct {
URL string `json:"url"`
Expires time.Time `json:"expires"`
}
type FolderSummaryResp struct {
File
FolderSummary struct {
Size int64 `json:"size"`
Files int64 `json:"files"`
Folders int64 `json:"folders"`
Completed bool `json:"completed"`
CalculatedAt time.Time `json:"calculated_at"`
} `json:"folder_summary"`
}

View File

@@ -0,0 +1,476 @@
package cloudreve_v4
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
)
// do others that not defined in Driver interface
func (d *CloudreveV4) getUA() string {
if d.CustomUA != "" {
return d.CustomUA
}
return base.UserAgent
}
func (d *CloudreveV4) request(method string, path string, callback base.ReqCallback, out any) error {
if d.ref != nil {
return d.ref.request(method, path, callback, out)
}
u := d.Address + "/api/v4" + path
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"User-Agent": d.getUA(),
})
if d.AccessToken != "" {
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
}
var r Resp
req.SetResult(&r)
if callback != nil {
callback(req)
}
resp, err := req.Execute(method, u)
if err != nil {
return err
}
if !resp.IsSuccess() {
return errors.New(resp.String())
}
if r.Code != 0 {
if r.Code == 401 && d.RefreshToken != "" && path != "/session/token/refresh" {
// try to refresh token
err = d.refreshToken()
if err != nil {
return err
}
return d.request(method, path, callback, out)
}
return errors.New(r.Msg)
}
if out != nil && r.Data != nil {
var marshal []byte
marshal, err = json.Marshal(r.Data)
if err != nil {
return err
}
err = json.Unmarshal(marshal, out)
if err != nil {
return err
}
}
return nil
}
func (d *CloudreveV4) login() error {
var siteConfig SiteLoginConfigResp
err := d.request(http.MethodGet, "/site/config/login", nil, &siteConfig)
if err != nil {
return err
}
if !siteConfig.Authn {
return errors.New("authn not support")
}
var prepareLogin PrepareLoginResp
err = d.request(http.MethodGet, "/session/prepare?email="+d.Addition.Username, nil, &prepareLogin)
if err != nil {
return err
}
if !prepareLogin.PasswordEnabled {
return errors.New("password not enabled")
}
if prepareLogin.WebauthnEnabled {
return errors.New("webauthn not support")
}
for range 5 {
err = d.doLogin(siteConfig.LoginCaptcha)
if err == nil {
break
}
if err.Error() != "CAPTCHA not match." {
break
}
}
return err
}
func (d *CloudreveV4) doLogin(needCaptcha bool) error {
var err error
loginBody := base.Json{
"email": d.Username,
"password": d.Password,
}
if needCaptcha {
var config BasicConfigResp
err = d.request(http.MethodGet, "/site/config/basic", nil, &config)
if err != nil {
return err
}
if config.CaptchaType != "normal" {
return fmt.Errorf("captcha type %s not support", config.CaptchaType)
}
var captcha CaptchaResp
err = d.request(http.MethodGet, "/site/captcha", nil, &captcha)
if err != nil {
return err
}
if !strings.HasPrefix(captcha.Image, "data:image/png;base64,") {
return errors.New("can not get captcha")
}
loginBody["ticket"] = captcha.Ticket
i := strings.Index(captcha.Image, ",")
dec := base64.NewDecoder(base64.StdEncoding, strings.NewReader(captcha.Image[i+1:]))
vRes, err := base.RestyClient.R().SetMultipartField(
"image", "validateCode.png", "image/png", dec).
Post(setting.GetStr(conf.OcrApi))
if err != nil {
return err
}
if jsoniter.Get(vRes.Body(), "status").ToInt() != 200 {
return errors.New("ocr error:" + jsoniter.Get(vRes.Body(), "msg").ToString())
}
captchaCode := jsoniter.Get(vRes.Body(), "result").ToString()
if captchaCode == "" {
return errors.New("ocr error: empty result")
}
loginBody["captcha"] = captchaCode
}
var token TokenResponse
err = d.request(http.MethodPost, "/session/token", func(req *resty.Request) {
req.SetBody(loginBody)
}, &token)
if err != nil {
return err
}
d.AccessToken, d.RefreshToken = token.Token.AccessToken, token.Token.RefreshToken
op.MustSaveDriverStorage(d)
return nil
}
func (d *CloudreveV4) refreshToken() error {
var token Token
if d.RefreshToken == "" {
if d.Username != "" {
err := d.login()
if err != nil {
return fmt.Errorf("cannot login to get refresh token, error: %s", err)
}
}
return nil
}
err := d.request(http.MethodPost, "/session/token/refresh", func(req *resty.Request) {
req.SetBody(base.Json{
"refresh_token": d.RefreshToken,
})
}, &token)
if err != nil {
return err
}
d.AccessToken, d.RefreshToken = token.AccessToken, token.RefreshToken
op.MustSaveDriverStorage(d)
return nil
}
func (d *CloudreveV4) upLocal(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
if DEFAULT == 0 {
// support relay
DEFAULT = file.GetSize()
}
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-Local] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.SetHeader("Content-Length", strconv.FormatInt(byteSize, 10))
req.SetBody(driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
req.AddRetryCondition(func(r *resty.Response, err error) bool {
if err != nil {
return true
}
if r.IsError() {
return true
}
var retryResp Resp
jErr := base.RestyClient.JSONUnmarshal(r.Body(), &retryResp)
if jErr != nil {
return true
}
if retryResp.Code != 0 {
return true
}
return false
})
}, nil)
if err != nil {
return err
}
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
chunk++
}
return nil
}
func (d *CloudreveV4) upRemote(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
uploadUrl := u.UploadUrls[0]
credential := u.Credential
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk),
driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Authorization", fmt.Sprint(credential))
req.Header.Set("User-Agent", d.getUA())
err = func() error {
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != 200 {
return errors.New(res.Status)
}
body, err := io.ReadAll(res.Body)
if err != nil {
return err
}
var up Resp
err = json.Unmarshal(body, &up)
if err != nil {
return err
}
if up.Code != 0 {
return errors.New(up.Msg)
}
return nil
}()
if err == nil {
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
chunk++
} else {
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors, error: %s", maxRetries, err)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-Remote] server errors while uploading, retrying after %v...", backoff)
time.Sleep(backoff)
}
}
return nil
}
func (d *CloudreveV4) upOneDrive(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
uploadUrl := u.UploadUrls[0]
var finish int64 = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest(http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, file.GetSize()))
req.Header.Set("User-Agent", d.getUA())
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
switch {
case res.StatusCode >= 500 && res.StatusCode <= 504:
retryCount++
if retryCount > maxRetries {
res.Body.Close()
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[CloudreveV4-OneDrive] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
data, _ := io.ReadAll(res.Body)
res.Body.Close()
return errors.New(string(data))
default:
res.Body.Close()
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
}
}
// send the callback request after a successful upload
return d.request(http.MethodPost, "/callback/onedrive/"+u.SessionID+"/"+u.CallbackSecret, func(req *resty.Request) {
req.SetBody("{}")
}, nil)
}
func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
var etags []string
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest(http.MethodPut, u.UploadUrls[chunk],
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
etag := res.Header.Get("ETag")
res.Body.Close()
switch {
case res.StatusCode != 200:
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors", maxRetries)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("server error %d, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case etag == "":
return errors.New("faild to get ETag from header")
default:
retryCount = 0
etags = append(etags, etag)
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
chunk++
}
}
// s3LikeFinishUpload
bodyBuilder := &strings.Builder{}
bodyBuilder.WriteString("<CompleteMultipartUpload>")
for i, etag := range etags {
bodyBuilder.WriteString(fmt.Sprintf(
`<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>`,
i+1, // PartNumber starts from 1
etag,
))
}
bodyBuilder.WriteString("</CompleteMultipartUpload>")
req, err := http.NewRequest(
"POST",
u.CompleteURL,
strings.NewReader(bodyBuilder.String()),
)
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/xml")
req.Header.Set("User-Agent", d.getUA())
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
body, _ := io.ReadAll(res.Body)
return fmt.Errorf("up status: %d, error: %s", res.StatusCode, string(body))
}
// send the callback request after a successful upload
return d.request(http.MethodPost, "/callback/s3/"+u.SessionID+"/"+u.CallbackSecret, func(req *resty.Request) {
req.SetBody("{}")
}, nil)
}
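
request() above decodes the generic Resp envelope first and then round-trips r.Data through Marshal/Unmarshal to fill the caller's typed struct — the simplest way to re-decode a field typed as any. A standalone sketch of the pattern:

package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

type envelope struct {
	Code int    `json:"code"`
	Msg  string `json:"msg"`
	Data any    `json:"data"`
}

func decodeData(raw []byte, out any) error {
	var e envelope
	if err := json.Unmarshal(raw, &e); err != nil {
		return err
	}
	if e.Code != 0 {
		return errors.New(e.Msg)
	}
	// Data was decoded into map[string]any; re-encode it to reach the typed out
	buf, err := json.Marshal(e.Data)
	if err != nil {
		return err
	}
	return json.Unmarshal(buf, out)
}

func main() {
	var out struct {
		SessionID string `json:"session_id"`
	}
	raw := []byte(`{"code":0,"msg":"","data":{"session_id":"abc"}}`)
	if err := decodeData(raw, &out); err == nil {
		fmt.Println(out.SessionID) // abc
	}
}

Declaring Data as json.RawMessage instead (as the Doubao driver's CommonResp further down does) skips the extra encode.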

View File

@@ -263,12 +263,7 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
}
rrc := remoteLink.RangeReadCloser
if len(remoteLink.URL) > 0 {
rangedRemoteLink := &model.Link{
URL: remoteLink.URL,
Header: remoteLink.Header,
}
var converted, err = stream.GetRangeReadCloserFromLink(remoteFileSize, rangedRemoteLink)
var converted, err = stream.GetRangeReadCloserFromLink(remoteFileSize, remoteLink)
if err != nil {
return nil, err
}
@@ -287,8 +282,9 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
if err != nil {
return nil, err
}
// could be returned directly: Close is not called when reading finishes, only when the connection drops
return remoteLink.MFile, nil
// keep reusing the same MFile and close it at the end
remoteClosers.Add(remoteLink.MFile)
return io.NopCloser(remoteLink.MFile), nil
}
return nil, errs.NotSupport
@@ -304,7 +300,6 @@ func (d *Crypt) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
resultRangeReadCloser := &model.RangeReadCloser{RangeReader: resultRangeReader, Closers: remoteClosers}
resultLink := &model.Link{
Header: remoteLink.Header,
RangeReadCloser: resultRangeReadCloser,
Expiration: remoteLink.Expiration,
}

271
drivers/doubao/driver.go Normal file
View File

@@ -0,0 +1,271 @@
package doubao
import (
"context"
"errors"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Doubao struct {
model.Storage
Addition
*UploadToken
UserId string
uploadThread int
}
func (d *Doubao) Config() driver.Config {
return config
}
func (d *Doubao) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Doubao) Init(ctx context.Context) error {
// TODO login / refresh token
//op.MustSaveDriverStorage(d)
uploadThread, err := strconv.Atoi(d.UploadThread)
if err != nil || uploadThread < 1 {
d.uploadThread, d.UploadThread = 3, "3" // Set default value
} else {
d.uploadThread = uploadThread
}
if d.UserId == "" {
userInfo, err := d.getUserInfo()
if err != nil {
return err
}
d.UserId = strconv.FormatInt(userInfo.UserID, 10)
}
if d.UploadToken == nil {
uploadToken, err := d.initUploadToken()
if err != nil {
return err
}
d.UploadToken = uploadToken
}
return nil
}
func (d *Doubao) Drop(ctx context.Context) error {
return nil
}
func (d *Doubao) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var files []model.Obj
fileList, err := d.getFiles(dir.GetID(), "")
if err != nil {
return nil, err
}
for _, child := range fileList {
files = append(files, &Object{
Object: model.Object{
ID: child.ID,
Path: child.ParentID,
Name: child.Name,
Size: child.Size,
Modified: time.Unix(child.UpdateTime, 0),
Ctime: time.Unix(child.CreateTime, 0),
IsFolder: child.NodeType == 1,
},
Key: child.Key,
NodeType: child.NodeType,
})
}
return files, nil
}
func (d *Doubao) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var downloadUrl string
if u, ok := file.(*Object); ok {
switch d.DownloadApi {
case "get_download_info":
var r GetDownloadInfoResp
_, err := d.request("/samantha/aispace/get_download_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"requests": []base.Json{{"node_id": file.GetID()}},
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.DownloadInfos[0].MainURL
case "get_file_url":
switch u.NodeType {
case VideoType, AudioType:
var r GetVideoFileUrlResp
_, err := d.request("/samantha/media/get_play_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"key": u.Key,
"node_id": file.GetID(),
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.OriginalMediaInfo.MainURL
default:
var r GetFileUrlResp
_, err := d.request("/alice/message/get_file_url", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{u.Key},
"type": FileNodeType[u.NodeType],
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.FileUrls[0].MainURL
}
default:
return nil, errs.NotImplement
}
// generate a standards-compliant Content-Disposition
contentDisposition := generateContentDisposition(u.Name)
return &model.Link{
URL: downloadUrl,
Header: http.Header{
"User-Agent": []string{UserAgent},
"Content-Disposition": []string{contentDisposition},
},
}, nil
}
return nil, errors.New("can't convert obj to URL")
}
func (d *Doubao) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
var r UploadNodeResp
_, err := d.request("/samantha/aispace/upload_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{
"local_id": uuid.New().String(),
"name": dirName,
"parent_id": parentDir.GetID(),
"node_type": 1,
},
},
})
}, &r)
return err
}
func (d *Doubao) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
var r UploadNodeResp
_, err := d.request("/samantha/aispace/move_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{"id": srcObj.GetID()},
},
"current_parent_id": srcObj.GetPath(),
"target_parent_id": dstDir.GetID(),
})
}, &r)
return err
}
func (d *Doubao) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
var r BaseResp
_, err := d.request("/samantha/aispace/rename_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_id": srcObj.GetID(),
"node_name": newName,
})
}, &r)
return err
}
func (d *Doubao) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO copy obj, optional
return nil, errs.NotImplement
}
func (d *Doubao) Remove(ctx context.Context, obj model.Obj) error {
var r BaseResp
_, err := d.request("/samantha/aispace/delete_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{"node_list": []base.Json{{"id": obj.GetID()}}})
}, &r)
return err
}
func (d *Doubao) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
// determine the data type from the MIME type
mimetype := file.GetMimetype()
dataType := FileDataType
switch {
case strings.HasPrefix(mimetype, "video/"):
dataType = VideoDataType
case strings.HasPrefix(mimetype, "audio/"):
dataType = VideoDataType // audio is handled the same way as video
case strings.HasPrefix(mimetype, "image/"):
dataType = ImgDataType
}
// fetch the upload config
uploadConfig := UploadConfig{}
if err := d.getUploadConfig(&uploadConfig, dataType, file); err != nil {
return nil, err
}
// choose the upload method by file size
if file.GetSize() <= 1*utils.MB { // files under 1MB use the normal upload
return d.Upload(&uploadConfig, dstDir, file, up, dataType)
}
// large files use multipart upload
return d.UploadByMultipart(ctx, &uploadConfig, file.GetSize(), dstDir, file, up, dataType)
}
func (d *Doubao) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *Doubao) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Doubao)(nil)
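
Link() above relies on generateContentDisposition(u.Name), whose body is not part of this diff. For non-ASCII filenames, the conventional shape is an RFC 6266 header carrying both a plain filename and a UTF-8 filename* parameter; a purely hypothetical sketch of such a helper:

package main

import (
	"fmt"
	"net/url"
)

// hypothetical sketch only — the driver's real generateContentDisposition
// is not shown in this diff and may differ
func generateContentDisposition(name string) string {
	return fmt.Sprintf(`attachment; filename="%s"; filename*=UTF-8''%s`,
		name, url.PathEscape(name))
}

func main() {
	fmt.Println(generateContentDisposition("报告.pdf"))
}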

36
drivers/doubao/meta.go Normal file
View File

@@ -0,0 +1,36 @@
package doubao
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
// driver.RootPath
driver.RootID
// define other
Cookie string `json:"cookie" type:"text"`
UploadThread string `json:"upload_thread" default:"3"`
DownloadApi string `json:"download_api" type:"select" options:"get_file_url,get_download_info" default:"get_file_url"`
}
var config = driver.Config{
Name: "Doubao",
LocalSort: true,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Doubao{}
})
}

415
drivers/doubao/types.go Normal file
View File

@@ -0,0 +1,415 @@
package doubao
import (
"encoding/json"
"fmt"
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type BaseResp struct {
Code int `json:"code"`
Msg string `json:"msg"`
}
type NodeInfoResp struct {
BaseResp
Data struct {
NodeInfo File `json:"node_info"`
Children []File `json:"children"`
NextCursor string `json:"next_cursor"`
HasMore bool `json:"has_more"`
} `json:"data"`
}
type File struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"` // 0: file, 1: folder
Size int64 `json:"size"`
Source int `json:"source"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int64 `json:"create_time"`
UpdateTime int64 `json:"update_time"`
}
type GetDownloadInfoResp struct {
BaseResp
Data struct {
DownloadInfos []struct {
NodeID string `json:"node_id"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"download_infos"`
} `json:"data"`
}
type GetFileUrlResp struct {
BaseResp
Data struct {
FileUrls []struct {
URI string `json:"uri"`
MainURL string `json:"main_url"`
BackURL string `json:"back_url"`
} `json:"file_urls"`
} `json:"data"`
}
type GetVideoFileUrlResp struct {
BaseResp
Data struct {
MediaType string `json:"media_type"`
MediaInfo []struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"media_info"`
OriginalMediaInfo struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"original_media_info"`
PosterURL string `json:"poster_url"`
PlayableStatus int `json:"playable_status"`
} `json:"data"`
}
type UploadNodeResp struct {
BaseResp
Data struct {
NodeList []struct {
LocalID string `json:"local_id"`
ID string `json:"id"`
ParentID string `json:"parent_id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"` // 0: file, 1: folder
} `json:"node_list"`
} `json:"data"`
}
type Object struct {
model.Object
Key string
NodeType int
}
type UserInfoResp struct {
Data UserInfo `json:"data"`
Message string `json:"message"`
}
type AppUserInfo struct {
BuiAuditInfo string `json:"bui_audit_info"`
}
type AuditInfo struct {
}
type Details struct {
}
type BuiAuditInfo struct {
AuditInfo AuditInfo `json:"audit_info"`
IsAuditing bool `json:"is_auditing"`
AuditStatus int `json:"audit_status"`
LastUpdateTime int `json:"last_update_time"`
UnpassReason string `json:"unpass_reason"`
Details Details `json:"details"`
}
type Connects struct {
Platform string `json:"platform"`
ProfileImageURL string `json:"profile_image_url"`
ExpiredTime int `json:"expired_time"`
ExpiresIn int `json:"expires_in"`
PlatformScreenName string `json:"platform_screen_name"`
UserID int64 `json:"user_id"`
PlatformUID string `json:"platform_uid"`
SecPlatformUID string `json:"sec_platform_uid"`
PlatformAppID int `json:"platform_app_id"`
ModifyTime int `json:"modify_time"`
AccessToken string `json:"access_token"`
OpenID string `json:"open_id"`
}
type OperStaffRelationInfo struct {
HasPassword int `json:"has_password"`
Mobile string `json:"mobile"`
SecOperStaffUserID string `json:"sec_oper_staff_user_id"`
RelationMobileCountryCode int `json:"relation_mobile_country_code"`
}
type UserInfo struct {
AppID int `json:"app_id"`
AppUserInfo AppUserInfo `json:"app_user_info"`
AvatarURL string `json:"avatar_url"`
BgImgURL string `json:"bg_img_url"`
BuiAuditInfo BuiAuditInfo `json:"bui_audit_info"`
CanBeFoundByPhone int `json:"can_be_found_by_phone"`
Connects []Connects `json:"connects"`
CountryCode int `json:"country_code"`
Description string `json:"description"`
DeviceID int `json:"device_id"`
Email string `json:"email"`
EmailCollected bool `json:"email_collected"`
Gender int `json:"gender"`
HasPassword int `json:"has_password"`
HmRegion int `json:"hm_region"`
IsBlocked int `json:"is_blocked"`
IsBlocking int `json:"is_blocking"`
IsRecommendAllowed int `json:"is_recommend_allowed"`
IsVisitorAccount bool `json:"is_visitor_account"`
Mobile string `json:"mobile"`
Name string `json:"name"`
NeedCheckBindStatus bool `json:"need_check_bind_status"`
OdinUserType int `json:"odin_user_type"`
OperStaffRelationInfo OperStaffRelationInfo `json:"oper_staff_relation_info"`
PhoneCollected bool `json:"phone_collected"`
RecommendHintMessage string `json:"recommend_hint_message"`
ScreenName string `json:"screen_name"`
SecUserID string `json:"sec_user_id"`
SessionKey string `json:"session_key"`
UseHmRegion bool `json:"use_hm_region"`
UserCreateTime int `json:"user_create_time"`
UserID int64 `json:"user_id"`
UserIDStr string `json:"user_id_str"`
UserVerified bool `json:"user_verified"`
VerifiedContent string `json:"verified_content"`
}
// UploadToken holds the upload token configuration
type UploadToken struct {
Alice map[string]UploadAuthToken
Samantha MediaUploadAuthToken
}
// UploadAuthToken upload config for several content types: image/file
type UploadAuthToken struct {
ServiceID string `json:"service_id"`
UploadPathPrefix string `json:"upload_path_prefix"`
Auth struct {
AccessKeyID string `json:"access_key_id"`
SecretAccessKey string `json:"secret_access_key"`
SessionToken string `json:"session_token"`
ExpiredTime time.Time `json:"expired_time"`
CurrentTime time.Time `json:"current_time"`
} `json:"auth"`
UploadHost string `json:"upload_host"`
}
// MediaUploadAuthToken media upload configuration
type MediaUploadAuthToken struct {
StsToken struct {
AccessKeyID string `json:"access_key_id"`
SecretAccessKey string `json:"secret_access_key"`
SessionToken string `json:"session_token"`
ExpiredTime time.Time `json:"expired_time"`
CurrentTime time.Time `json:"current_time"`
} `json:"sts_token"`
UploadInfo struct {
VideoHost string `json:"video_host"`
SpaceName string `json:"space_name"`
} `json:"upload_info"`
}
type UploadAuthTokenResp struct {
BaseResp
Data UploadAuthToken `json:"data"`
}
type MediaUploadAuthTokenResp struct {
BaseResp
Data MediaUploadAuthToken `json:"data"`
}
type ResponseMetadata struct {
RequestID string `json:"RequestId"`
Action string `json:"Action"`
Version string `json:"Version"`
Service string `json:"Service"`
Region string `json:"Region"`
Error struct {
CodeN int `json:"CodeN,omitempty"`
Code string `json:"Code,omitempty"`
Message string `json:"Message,omitempty"`
} `json:"Error,omitempty"`
}
type UploadConfig struct {
UploadAddress UploadAddress `json:"UploadAddress"`
FallbackUploadAddress FallbackUploadAddress `json:"FallbackUploadAddress"`
InnerUploadAddress InnerUploadAddress `json:"InnerUploadAddress"`
RequestID string `json:"RequestId"`
SDKParam interface{} `json:"SDKParam"`
}
type UploadConfigResp struct {
ResponseMetadata `json:"ResponseMetadata"`
Result UploadConfig `json:"Result"`
}
// StoreInfo storage information
type StoreInfo struct {
StoreURI string `json:"StoreUri"`
Auth string `json:"Auth"`
UploadID string `json:"UploadID"`
UploadHeader map[string]interface{} `json:"UploadHeader,omitempty"`
StorageHeader map[string]interface{} `json:"StorageHeader,omitempty"`
}
// UploadAddress upload address information
type UploadAddress struct {
StoreInfos []StoreInfo `json:"StoreInfos"`
UploadHosts []string `json:"UploadHosts"`
UploadHeader map[string]interface{} `json:"UploadHeader"`
SessionKey string `json:"SessionKey"`
Cloud string `json:"Cloud"`
}
// FallbackUploadAddress fallback upload address
type FallbackUploadAddress struct {
StoreInfos []StoreInfo `json:"StoreInfos"`
UploadHosts []string `json:"UploadHosts"`
UploadHeader map[string]interface{} `json:"UploadHeader"`
SessionKey string `json:"SessionKey"`
Cloud string `json:"Cloud"`
}
// UploadNode upload node information
type UploadNode struct {
Vid string `json:"Vid"`
Vids []string `json:"Vids"`
StoreInfos []StoreInfo `json:"StoreInfos"`
UploadHost string `json:"UploadHost"`
UploadHeader map[string]interface{} `json:"UploadHeader"`
Type string `json:"Type"`
Protocol string `json:"Protocol"`
SessionKey string `json:"SessionKey"`
NodeConfig struct {
UploadMode string `json:"UploadMode"`
} `json:"NodeConfig"`
Cluster string `json:"Cluster"`
}
// AdvanceOption holds advanced upload options
type AdvanceOption struct {
Parallel int `json:"Parallel"`
Stream int `json:"Stream"`
SliceSize int `json:"SliceSize"`
EncryptionKey string `json:"EncryptionKey"`
}
// InnerUploadAddress is the internal upload address
type InnerUploadAddress struct {
UploadNodes []UploadNode `json:"UploadNodes"`
AdvanceOption AdvanceOption `json:"AdvanceOption"`
}
// UploadPart holds multipart-upload part information
type UploadPart struct {
UploadId string `json:"uploadid,omitempty"`
PartNumber string `json:"part_number,omitempty"`
Crc32 string `json:"crc32,omitempty"`
Etag string `json:"etag,omitempty"`
Mode string `json:"mode,omitempty"`
}
// UploadResp is the upload response body
type UploadResp struct {
Code int `json:"code"`
ApiVersion string `json:"apiversion"`
Message string `json:"message"`
Data UploadPart `json:"data"`
}
type VideoCommitUpload struct {
Vid string `json:"Vid"`
VideoMeta struct {
URI string `json:"Uri"`
Height int `json:"Height"`
Width int `json:"Width"`
OriginHeight int `json:"OriginHeight"`
OriginWidth int `json:"OriginWidth"`
Duration float64 `json:"Duration"`
Bitrate int `json:"Bitrate"`
Md5 string `json:"Md5"`
Format string `json:"Format"`
Size int `json:"Size"`
FileType string `json:"FileType"`
Codec string `json:"Codec"`
} `json:"VideoMeta"`
WorkflowInput struct {
TemplateID string `json:"TemplateId"`
} `json:"WorkflowInput"`
GetPosterMode string `json:"GetPosterMode"`
}
type VideoCommitUploadResp struct {
ResponseMetadata ResponseMetadata `json:"ResponseMetadata"`
Result struct {
RequestID string `json:"RequestId"`
Results []VideoCommitUpload `json:"Results"`
} `json:"Result"`
}
type CommonResp struct {
Code int `json:"code"`
Msg string `json:"msg,omitempty"`
Message string `json:"message,omitempty"` // message carried on errors
Data json.RawMessage `json:"data,omitempty"` // raw data, parsed later
Error *struct {
Code int `json:"code"`
Message string `json:"message"`
Locale string `json:"locale"`
} `json:"error,omitempty"`
}
// IsSuccess reports whether the response succeeded
func (r *CommonResp) IsSuccess() bool {
return r.Code == 0
}
// GetError returns the error carried by the response
func (r *CommonResp) GetError() error {
if r.IsSuccess() {
return nil
}
// prefer the message field
errMsg := r.Message
if errMsg == "" {
errMsg = r.Msg
}
// if an error object exists with a detailed message, use that instead
if r.Error != nil && r.Error.Message != "" {
errMsg = r.Error.Message
}
return fmt.Errorf("[doubao] API error (code: %d): %s", r.Code, errMsg)
}
// UnmarshalData unmarshals the data field into the given value
func (r *CommonResp) UnmarshalData(v interface{}) error {
if !r.IsSuccess() {
return r.GetError()
}
if len(r.Data) == 0 {
return nil
}
return json.Unmarshal(r.Data, v)
}
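// Usage sketch (annotation, not part of the diff): the intended way to consume
// CommonResp is to decode the envelope first, then unmarshal Data into the
// concrete type. UserInfo stands in for any payload type here.
//
//	func decodeUserInfo(body []byte) (UserInfo, error) {
//		var common CommonResp
//		if err := json.Unmarshal(body, &common); err != nil {
//			return UserInfo{}, err
//		}
//		var info UserInfo
//		// UnmarshalData returns GetError() for a non-zero code and nil for empty data
//		if err := common.UnmarshalData(&info); err != nil {
//			return UserInfo{}, err
//		}
//		return info, nil
//	}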

drivers/doubao/util.go

@@ -0,0 +1,970 @@
package doubao
import (
"context"
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/errgroup"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
log "github.com/sirupsen/logrus"
"hash/crc32"
"io"
"math"
"math/rand"
"net/http"
"net/url"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"time"
)
const (
DirectoryType = 1
FileType = 2
LinkType = 3
ImageType = 4
PagesType = 5
VideoType = 6
AudioType = 7
MeetingMinutesType = 8
)
var FileNodeType = map[int]string{
1: "directory",
2: "file",
3: "link",
4: "image",
5: "pages",
6: "video",
7: "audio",
8: "meeting_minutes",
}
const (
BaseURL = "https://www.doubao.com"
FileDataType = "file"
ImgDataType = "image"
VideoDataType = "video"
DefaultChunkSize = int64(5 * 1024 * 1024) // 5MB
MaxRetryAttempts = 3 // maximum retry attempts
UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
Region = "cn-north-1"
UploadTimeout = 3 * time.Minute
)
// do others that are not defined in Driver interface
func (d *Doubao) request(path string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
reqUrl := BaseURL + path
req := base.RestyClient.R()
req.SetHeader("Cookie", d.Cookie)
if callback != nil {
callback(req)
}
var commonResp CommonResp
res, err := req.Execute(method, reqUrl)
log.Debugln(res.String())
if err != nil {
return nil, err
}
body := res.Body()
// parse as the generic response first
if err = json.Unmarshal(body, &commonResp); err != nil {
return nil, err
}
// check whether the response succeeded
if !commonResp.IsSuccess() {
return body, commonResp.GetError()
}
if resp != nil {
if err = json.Unmarshal(body, resp); err != nil {
return body, err
}
}
return body, nil
}
func (d *Doubao) getFiles(dirId, cursor string) (resp []File, err error) {
var r NodeInfoResp
var body = base.Json{
"node_id": dirId,
}
// if a cursor is present, pass the cursor and page size
if cursor != "" {
body["cursor"] = cursor
body["size"] = 50
} else {
body["need_full_path"] = false
}
_, err = d.request("/samantha/aispace/node_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(body)
}, &r)
if err != nil {
return nil, err
}
if r.Data.Children != nil {
resp = r.Data.Children
}
if r.Data.NextCursor != "-1" {
// recursively fetch the next page
nextFiles, err := d.getFiles(dirId, r.Data.NextCursor)
if err != nil {
return nil, err
}
resp = append(r.Data.Children, nextFiles...)
}
return resp, err
}
func (d *Doubao) getUserInfo() (UserInfo, error) {
var r UserInfoResp
_, err := d.request("/passport/account/info/v2/", http.MethodGet, nil, &r)
if err != nil {
return UserInfo{}, err
}
return r.Data, err
}
// signRequest signs a request with AWS Signature V4
func (d *Doubao) signRequest(req *resty.Request, method, tokenType, uploadUrl string) error {
parsedUrl, err := url.Parse(uploadUrl)
if err != nil {
return fmt.Errorf("invalid URL format: %w", err)
}
var accessKeyId, secretAccessKey, sessionToken string
var serviceName string
if tokenType == VideoDataType {
accessKeyId = d.UploadToken.Samantha.StsToken.AccessKeyID
secretAccessKey = d.UploadToken.Samantha.StsToken.SecretAccessKey
sessionToken = d.UploadToken.Samantha.StsToken.SessionToken
serviceName = "vod"
} else {
accessKeyId = d.UploadToken.Alice[tokenType].Auth.AccessKeyID
secretAccessKey = d.UploadToken.Alice[tokenType].Auth.SecretAccessKey
sessionToken = d.UploadToken.Alice[tokenType].Auth.SessionToken
serviceName = "imagex"
}
// current UTC time in ISO 8601 basic format
now := time.Now().UTC()
amzDate := now.Format("20060102T150405Z")
dateStamp := now.Format("20060102")
req.SetHeader("X-Amz-Date", amzDate)
if sessionToken != "" {
req.SetHeader("X-Amz-Security-Token", sessionToken)
}
// compute the SHA-256 hash of the request body
var bodyHash string
if req.Body != nil {
bodyBytes, ok := req.Body.([]byte)
if !ok {
return fmt.Errorf("request body must be []byte")
}
bodyHash = hashSHA256(string(bodyBytes))
req.SetHeader("X-Amz-Content-Sha256", bodyHash)
} else {
bodyHash = hashSHA256("")
}
// build the canonical request
canonicalURI := parsedUrl.Path
if canonicalURI == "" {
canonicalURI = "/"
}
// query parameters sorted alphabetically
canonicalQueryString := getCanonicalQueryString(req.QueryParam)
// canonical headers
canonicalHeaders, signedHeaders := getCanonicalHeadersFromMap(req.Header)
canonicalRequest := method + "\n" +
canonicalURI + "\n" +
canonicalQueryString + "\n" +
canonicalHeaders + "\n" +
signedHeaders + "\n" +
bodyHash
algorithm := "AWS4-HMAC-SHA256"
credentialScope := fmt.Sprintf("%s/%s/%s/aws4_request", dateStamp, Region, serviceName)
stringToSign := algorithm + "\n" +
amzDate + "\n" +
credentialScope + "\n" +
hashSHA256(canonicalRequest)
// derive the signing key
signingKey := getSigningKey(secretAccessKey, dateStamp, Region, serviceName)
// compute the signature
signature := hmacSHA256Hex(signingKey, stringToSign)
// build the Authorization header
authorizationHeader := fmt.Sprintf(
"%s Credential=%s/%s, SignedHeaders=%s, Signature=%s",
algorithm,
accessKeyId,
credentialScope,
signedHeaders,
signature,
)
req.SetHeader("Authorization", authorizationHeader)
return nil
}
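// For orientation (annotation, not part of the diff): the strings assembled by
// signRequest above, shown with placeholder values for an imagex GET request.
//
//	CanonicalRequest = "GET" + "\n" +
//		"/" + "\n" +
//		"Action=ApplyImageUpload&Version=2018-08-01" + "\n" +
//		"x-amz-date:20250101T000000Z\n" + "\n" +
//		"x-amz-date" + "\n" +
//		hashSHA256(body)
//
//	StringToSign = "AWS4-HMAC-SHA256" + "\n" +
//		"20250101T000000Z" + "\n" +
//		"20250101/cn-north-1/imagex/aws4_request" + "\n" +
//		hashSHA256(CanonicalRequest)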
func (d *Doubao) requestApi(url, method, tokenType string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"user-agent": UserAgent,
})
if method == http.MethodPost {
req.SetHeader("Content-Type", "text/plain;charset=UTF-8")
}
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
// sign with our own AWS SigV4 implementation
err := d.signRequest(req, method, tokenType, url)
if err != nil {
return nil, err
}
res, err := req.Execute(method, url)
if err != nil {
return nil, err
}
return res.Body(), nil
}
func (d *Doubao) initUploadToken() (*UploadToken, error) {
uploadToken := &UploadToken{
Alice: make(map[string]UploadAuthToken),
Samantha: MediaUploadAuthToken{},
}
fileAuthToken, err := d.getUploadAuthToken(FileDataType)
if err != nil {
return nil, err
}
imgAuthToken, err := d.getUploadAuthToken(ImgDataType)
if err != nil {
return nil, err
}
mediaAuthToken, err := d.getSamantaUploadAuthToken()
if err != nil {
return nil, err
}
uploadToken.Alice[FileDataType] = fileAuthToken
uploadToken.Alice[ImgDataType] = imgAuthToken
uploadToken.Samantha = mediaAuthToken
return uploadToken, nil
}
func (d *Doubao) getUploadAuthToken(dataType string) (ut UploadAuthToken, err error) {
var r UploadAuthTokenResp
_, err = d.request("/alice/upload/auth_token", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"scene": "bot_chat",
"data_type": dataType,
})
}, &r)
return r.Data, err
}
func (d *Doubao) getSamantaUploadAuthToken() (mt MediaUploadAuthToken, err error) {
var r MediaUploadAuthTokenResp
_, err = d.request("/samantha/media/get_upload_token", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{})
}, &r)
return r.Data, err
}
// getUploadConfig fetches the upload configuration
func (d *Doubao) getUploadConfig(upConfig *UploadConfig, dataType string, file model.FileStreamer) error {
tokenType := dataType
// builds the upload URL and query parameters
configureParams := func() (string, map[string]string) {
var uploadUrl string
var params map[string]string
// pick upload parameters by data type
switch dataType {
case VideoDataType:
// audio/video - use the uploadToken.Samantha configuration
uploadUrl = d.UploadToken.Samantha.UploadInfo.VideoHost
params = map[string]string{
"Action": "ApplyUploadInner",
"Version": "2020-11-19",
"SpaceName": d.UploadToken.Samantha.UploadInfo.SpaceName,
"FileType": "video",
"IsInner": "1",
"NeedFallback": "true",
"FileSize": strconv.FormatInt(file.GetSize(), 10),
"s": randomString(),
}
case ImgDataType, FileDataType:
// image and other file types - use the matching uploadToken.Alice configuration
uploadUrl = "https://" + d.UploadToken.Alice[dataType].UploadHost
params = map[string]string{
"Action": "ApplyImageUpload",
"Version": "2018-08-01",
"ServiceId": d.UploadToken.Alice[dataType].ServiceID,
"NeedFallback": "true",
"FileSize": strconv.FormatInt(file.GetSize(), 10),
"FileExtension": filepath.Ext(file.GetName()),
"s": randomString(),
}
}
return uploadUrl, params
}
// initial parameters
uploadUrl, params := configureParams()
tokenRefreshed := false
var configResp UploadConfigResp
err := d._retryOperation("get upload_config", func() error {
configResp = UploadConfigResp{}
_, err := d.requestApi(uploadUrl, http.MethodGet, tokenType, func(req *resty.Request) {
req.SetQueryParams(params)
}, &configResp)
if err != nil {
return err
}
if configResp.ResponseMetadata.Error.Code == "" {
*upConfig = configResp.Result
return nil
}
// 100028: credentials expired
if configResp.ResponseMetadata.Error.CodeN == 100028 && !tokenRefreshed {
log.Debugln("[doubao] Upload token expired, re-fetching...")
newToken, err := d.initUploadToken()
if err != nil {
return fmt.Errorf("failed to refresh token: %w", err)
}
d.UploadToken = newToken
tokenRefreshed = true
uploadUrl, params = configureParams()
return retry.Error{errors.New("token refreshed, retry needed")}
}
return fmt.Errorf("get upload_config failed: %s", configResp.ResponseMetadata.Error.Message)
})
return err
}
// uploadNode uploads the file's node information
func (d *Doubao) uploadNode(uploadConfig *UploadConfig, dir model.Obj, file model.FileStreamer, dataType string) (UploadNodeResp, error) {
reqUuid := uuid.New().String()
var key string
var nodeType int
mimetype := file.GetMimetype()
switch dataType {
case VideoDataType:
key = uploadConfig.InnerUploadAddress.UploadNodes[0].Vid
if strings.HasPrefix(mimetype, "audio/") {
nodeType = AudioType // audio
} else {
nodeType = VideoType // video
}
case ImgDataType:
key = uploadConfig.InnerUploadAddress.UploadNodes[0].StoreInfos[0].StoreURI
nodeType = ImageType // image
default: // FileDataType
key = uploadConfig.InnerUploadAddress.UploadNodes[0].StoreInfos[0].StoreURI
nodeType = FileType // file
}
var r UploadNodeResp
_, err := d.request("/samantha/aispace/upload_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{
"local_id": reqUuid,
"parent_id": dir.GetID(),
"name": file.GetName(),
"key": key,
"node_content": base.Json{},
"node_type": nodeType,
"size": file.GetSize(),
},
},
"request_id": reqUuid,
})
}, &r)
return r, err
}
// Upload implements the plain (single-request) upload
func (d *Doubao) Upload(config *UploadConfig, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
data, err := io.ReadAll(file)
if err != nil {
return nil, err
}
// compute CRC32
crc32Hash := crc32.NewIEEE()
crc32Hash.Write(data)
crc32Value := hex.EncodeToString(crc32Hash.Sum(nil))
// build the request URL
uploadNode := config.InnerUploadAddress.UploadNodes[0]
storeInfo := uploadNode.StoreInfos[0]
uploadUrl := fmt.Sprintf("https://%s/upload/v1/%s", uploadNode.UploadHost, storeInfo.StoreURI)
uploadResp := UploadResp{}
if _, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetHeaders(map[string]string{
"Content-Type": "application/octet-stream",
"Content-Crc32": crc32Value,
"Content-Length": fmt.Sprintf("%d", len(data)),
"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
})
req.SetBody(data)
}, &uploadResp); err != nil {
return nil, err
}
if uploadResp.Code != 2000 {
return nil, fmt.Errorf("upload failed: %s", uploadResp.Message)
}
uploadNodeResp, err := d.uploadNode(config, dstDir, file, dataType)
if err != nil {
return nil, err
}
return &model.Object{
ID: uploadNodeResp.Data.NodeList[0].ID,
Name: uploadNodeResp.Data.NodeList[0].Name,
Size: file.GetSize(),
IsFolder: false,
}, nil
}
// UploadByMultipart implements multipart upload
func (d *Doubao) UploadByMultipart(ctx context.Context, config *UploadConfig, fileSize int64, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
// build the request URL
uploadNode := config.InnerUploadAddress.UploadNodes[0]
storeInfo := uploadNode.StoreInfos[0]
uploadUrl := fmt.Sprintf("https://%s/upload/v1/%s", uploadNode.UploadHost, storeInfo.StoreURI)
// initialize the multipart upload
var uploadID string
err := d._retryOperation("Initialize multipart upload", func() error {
var err error
uploadID, err = d.initMultipartUpload(config, uploadUrl, storeInfo)
return err
})
if err != nil {
return nil, fmt.Errorf("failed to initialize multipart upload: %w", err)
}
// prepare chunking parameters
chunkSize := DefaultChunkSize
if config.InnerUploadAddress.AdvanceOption.SliceSize > 0 {
chunkSize = int64(config.InnerUploadAddress.AdvanceOption.SliceSize)
}
totalParts := (fileSize + chunkSize - 1) / chunkSize
// pre-allocate the part list
parts := make([]UploadPart, totalParts)
// cache the file locally
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return nil, fmt.Errorf("failed to cache file: %w", err)
}
defer tempFile.Close()
up(10.0) // initial progress
// set up parallel uploads
threadG, uploadCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread,
retry.Attempts(1),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
var partsMutex sync.Mutex
// upload all parts in parallel
for partIndex := int64(0); partIndex < totalParts; partIndex++ {
if utils.IsCanceled(uploadCtx) {
break
}
partIndex := partIndex
partNumber := partIndex + 1 // part numbers start at 1
threadG.Go(func(ctx context.Context) error {
// compute this part's offset and size
offset := partIndex * chunkSize
size := chunkSize
if partIndex == totalParts-1 {
size = fileSize - offset
}
limitedReader := driver.NewLimitedUploadStream(ctx, io.NewSectionReader(tempFile, offset, size))
// read the part into memory
data, err := io.ReadAll(limitedReader)
if err != nil {
return fmt.Errorf("failed to read part %d: %w", partNumber, err)
}
// compute CRC32
crc32Value := calculateCRC32(data)
// upload the part via _retryOperation
var uploadPart UploadPart
if err = d._retryOperation(fmt.Sprintf("Upload part %d", partNumber), func() error {
var err error
uploadPart, err = d.uploadPart(config, uploadUrl, uploadID, partNumber, data, crc32Value)
return err
}); err != nil {
return fmt.Errorf("part %d upload failed: %w", partNumber, err)
}
// record the successfully uploaded part
partsMutex.Lock()
parts[partIndex] = UploadPart{
PartNumber: strconv.FormatInt(partNumber, 10),
Etag: uploadPart.Etag,
Crc32: crc32Value,
}
partsMutex.Unlock()
// update progress
progress := 10.0 + 90.0*float64(threadG.Success()+1)/float64(totalParts)
up(math.Min(progress, 95.0))
return nil
})
}
if err = threadG.Wait(); err != nil {
return nil, err
}
// finish the upload - merge the parts
if err = d._retryOperation("Complete multipart upload", func() error {
return d.completeMultipartUpload(config, uploadUrl, uploadID, parts)
}); err != nil {
return nil, fmt.Errorf("failed to complete multipart upload: %w", err)
}
// commit the upload
if err = d._retryOperation("Commit upload", func() error {
return d.commitMultipartUpload(config)
}); err != nil {
return nil, fmt.Errorf("failed to commit upload: %w", err)
}
up(98.0) // bump progress to 98%
// upload the node info
var uploadNodeResp UploadNodeResp
if err = d._retryOperation("Upload node", func() error {
var err error
uploadNodeResp, err = d.uploadNode(config, dstDir, file, dataType)
return err
}); err != nil {
return nil, fmt.Errorf("failed to upload node: %w", err)
}
up(100.0) // upload complete
return &model.Object{
ID: uploadNodeResp.Data.NodeList[0].ID,
Name: uploadNodeResp.Data.NodeList[0].Name,
Size: file.GetSize(),
IsFolder: false,
}, nil
}
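// Worked example of the chunking math above (illustrative sizes, not part of
// the diff): fileSize = 12 MiB with chunkSize = DefaultChunkSize (5 MiB) gives
// totalParts = (fileSize + chunkSize - 1) / chunkSize = 3:
//
//	part 1: offset 0,      size 5 MiB
//	part 2: offset 5 MiB,  size 5 MiB
//	part 3: offset 10 MiB, size 2 MiB (fileSize - offset, last part)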
// uploadRequest is the shared helper for upload HTTP requests
func (d *Doubao) uploadRequest(uploadUrl string, method string, storeInfo StoreInfo, callback base.ReqCallback, resp interface{}) ([]byte, error) {
client := resty.New()
client.SetTransport(&http.Transport{
DisableKeepAlives: true, // disable connection reuse
ForceAttemptHTTP2: false, // force HTTP/1.1
})
client.SetTimeout(UploadTimeout)
req := client.R()
req.SetHeaders(map[string]string{
"Host": strings.Split(uploadUrl, "/")[2],
"Referer": BaseURL + "/",
"Origin": BaseURL,
"User-Agent": UserAgent,
"X-Storage-U": d.UserId,
"Authorization": storeInfo.Auth,
})
if method == http.MethodPost {
req.SetHeader("Content-Type", "text/plain;charset=UTF-8")
}
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
res, err := req.Execute(method, uploadUrl)
if err != nil && err != io.EOF {
return nil, fmt.Errorf("upload request failed: %w", err)
}
return res.Body(), nil
}
// initMultipartUpload starts a multipart upload and returns the upload ID
func (d *Doubao) initMultipartUpload(config *UploadConfig, uploadUrl string, storeInfo StoreInfo) (uploadId string, err error) {
uploadResp := UploadResp{}
_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"uploadmode": "part",
"phase": "init",
})
}, &uploadResp)
if err != nil {
return uploadId, err
}
if uploadResp.Code != 2000 {
return uploadId, fmt.Errorf("init upload failed: %s", uploadResp.Message)
}
return uploadResp.Data.UploadId, nil
}
// uploadPart uploads a single part
func (d *Doubao) uploadPart(config *UploadConfig, uploadUrl, uploadID string, partNumber int64, data []byte, crc32Value string) (resp UploadPart, err error) {
uploadResp := UploadResp{}
storeInfo := config.InnerUploadAddress.UploadNodes[0].StoreInfos[0]
_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetHeaders(map[string]string{
"Content-Type": "application/octet-stream",
"Content-Crc32": crc32Value,
"Content-Length": fmt.Sprintf("%d", len(data)),
"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
})
req.SetQueryParams(map[string]string{
"uploadid": uploadID,
"part_number": strconv.FormatInt(partNumber, 10),
"phase": "transfer",
})
req.SetBody(data)
req.SetContentLength(true)
}, &uploadResp)
if err != nil {
return resp, err
}
if uploadResp.Code != 2000 {
return resp, fmt.Errorf("upload part failed: %s", uploadResp.Message)
} else if uploadResp.Data.Crc32 != crc32Value {
return resp, fmt.Errorf("upload part failed: crc32 mismatch, expected %s, got %s", crc32Value, uploadResp.Data.Crc32)
}
return uploadResp.Data, nil
}
// completeMultipartUpload merges the uploaded parts
func (d *Doubao) completeMultipartUpload(config *UploadConfig, uploadUrl, uploadID string, parts []UploadPart) error {
uploadResp := UploadResp{}
storeInfo := config.InnerUploadAddress.UploadNodes[0].StoreInfos[0]
body := _convertUploadParts(parts)
err := utils.Retry(MaxRetryAttempts, time.Second, func() (err error) {
_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"uploadid": uploadID,
"phase": "finish",
"uploadmode": "part",
})
req.SetBody(body)
}, &uploadResp)
if err != nil {
return err
}
// response codes: 2000 success, 4024 parts still merging
if uploadResp.Code != 2000 && uploadResp.Code != 4024 {
return fmt.Errorf("finish upload failed: %s", uploadResp.Message)
}
return err
})
if err != nil {
return fmt.Errorf("failed to complete multipart upload: %w", err)
}
return nil
}
func (d *Doubao) commitMultipartUpload(uploadConfig *UploadConfig) error {
uploadUrl := d.UploadToken.Samantha.UploadInfo.VideoHost
params := map[string]string{
"Action": "CommitUploadInner",
"Version": "2020-11-19",
"SpaceName": d.UploadToken.Samantha.UploadInfo.SpaceName,
}
tokenType := VideoDataType
videoCommitUploadResp := VideoCommitUploadResp{}
jsonBytes, err := json.Marshal(base.Json{
"SessionKey": uploadConfig.InnerUploadAddress.UploadNodes[0].SessionKey,
"Functions": []base.Json{},
})
if err != nil {
return fmt.Errorf("failed to marshal request data: %w", err)
}
_, err = d.requestApi(uploadUrl, http.MethodPost, tokenType, func(req *resty.Request) {
req.SetHeader("Content-Type", "application/json")
req.SetQueryParams(params)
req.SetBody(jsonBytes)
}, &videoCommitUploadResp)
if err != nil {
return err
}
return nil
}
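// Summary (annotation, not part of the diff) of the multipart flow implemented
// above, with the query parameters each phase sends:
//
//	1. init:     uploadmode=part, phase=init             -> returns Data.UploadId
//	2. transfer: uploadid, part_number, phase=transfer   -> returns etag/crc32 per part
//	3. finish:   uploadid, phase=finish, uploadmode=part -> body is "partNumber:crc32,..."
//	4. commit:   video uploads only - CommitUploadInner with the node's SessionKey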
// calculateCRC32 returns the hex-encoded CRC32 of data
func calculateCRC32(data []byte) string {
hash := crc32.NewIEEE()
hash.Write(data)
return hex.EncodeToString(hash.Sum(nil))
}
// _retryOperation retries an operation with backoff
func (d *Doubao) _retryOperation(operation string, fn func() error) error {
return retry.Do(
fn,
retry.Attempts(MaxRetryAttempts),
retry.Delay(500*time.Millisecond),
retry.DelayType(retry.BackOffDelay),
retry.MaxJitter(200*time.Millisecond),
retry.OnRetry(func(n uint, err error) {
log.Debugf("[doubao] %s retry #%d: %v", operation, n+1, err)
}),
)
}
// _convertUploadParts serializes the part info into a string
func _convertUploadParts(parts []UploadPart) string {
if len(parts) == 0 {
return ""
}
var result strings.Builder
for i, part := range parts {
if i > 0 {
result.WriteString(",")
}
result.WriteString(fmt.Sprintf("%s:%s", part.PartNumber, part.Crc32))
}
return result.String()
}
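// Example (illustrative CRC values, not part of the diff): three uploaded
// parts serialize to
//
//	"1:0a1b2c3d,2:4e5f6071,3:8899aabb"
//
// which is the body completeMultipartUpload sends in the finish phase.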
// getCanonicalQueryString builds the canonical query string
func getCanonicalQueryString(query url.Values) string {
if len(query) == 0 {
return ""
}
keys := make([]string, 0, len(query))
for k := range query {
keys = append(keys, k)
}
sort.Strings(keys)
parts := make([]string, 0, len(keys))
for _, k := range keys {
values := query[k]
for _, v := range values {
parts = append(parts, urlEncode(k)+"="+urlEncode(v))
}
}
return strings.Join(parts, "&")
}
func urlEncode(s string) string {
s = url.QueryEscape(s)
s = strings.ReplaceAll(s, "+", "%20")
return s
}
// getCanonicalHeadersFromMap returns the canonical headers and the signed-headers list
func getCanonicalHeadersFromMap(headers map[string][]string) (string, string) {
// headers excluded from signing
unsignableHeaders := map[string]bool{
"authorization": true,
"content-type": true,
"content-length": true,
"user-agent": true,
"presigned-expires": true,
"expect": true,
"x-amzn-trace-id": true,
}
headerValues := make(map[string]string)
var signedHeadersList []string
for k, v := range headers {
if len(v) == 0 {
continue
}
lowerKey := strings.ToLower(k)
// check whether the header is signable
if strings.HasPrefix(lowerKey, "x-amz-") || !unsignableHeaders[lowerKey] {
value := strings.TrimSpace(v[0])
value = strings.Join(strings.Fields(value), " ")
headerValues[lowerKey] = value
signedHeadersList = append(signedHeadersList, lowerKey)
}
}
sort.Strings(signedHeadersList)
var canonicalHeadersStr strings.Builder
for _, key := range signedHeadersList {
canonicalHeadersStr.WriteString(key)
canonicalHeadersStr.WriteString(":")
canonicalHeadersStr.WriteString(headerValues[key])
canonicalHeadersStr.WriteString("\n")
}
signedHeaders := strings.Join(signedHeadersList, ";")
return canonicalHeadersStr.String(), signedHeaders
}
// hmacSHA256 computes an HMAC-SHA256
func hmacSHA256(key []byte, data string) []byte {
h := hmac.New(sha256.New, key)
h.Write([]byte(data))
return h.Sum(nil)
}
// hmacSHA256Hex computes an HMAC-SHA256 and returns it hex-encoded
func hmacSHA256Hex(key []byte, data string) string {
return hex.EncodeToString(hmacSHA256(key, data))
}
// hashSHA256 returns the hex-encoded SHA-256 of a string
func hashSHA256(data string) string {
h := sha256.New()
h.Write([]byte(data))
return hex.EncodeToString(h.Sum(nil))
}
// getSigningKey derives the AWS SigV4 signing key
func getSigningKey(secretKey, dateStamp, region, service string) []byte {
kDate := hmacSHA256([]byte("AWS4"+secretKey), dateStamp)
kRegion := hmacSHA256(kDate, region)
kService := hmacSHA256(kRegion, service)
kSigning := hmacSHA256(kService, "aws4_request")
return kSigning
}
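// The SigV4 key derivation above, spelled out with placeholder inputs
// (annotation, not part of the diff):
//
//	kDate    = hmacSHA256([]byte("AWS4"+secretAccessKey), "20250101")
//	kRegion  = hmacSHA256(kDate, "cn-north-1")
//	kService = hmacSHA256(kRegion, "imagex") // "vod" for video uploads
//	kSigning = hmacSHA256(kService, "aws4_request")
//	signature = hmacSHA256Hex(kSigning, stringToSign)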
// generateContentDisposition builds an RFC 5987-compliant Content-Disposition header
func generateContentDisposition(filename string) string {
// encoded form for the filename parameter
encodedName := urlEncode(filename)
// RFC 5987-encoded form for the filename* parameter
encodedNameRFC5987 := encodeRFC5987(filename)
return fmt.Sprintf("attachment; filename=\"%s\"; filename*=utf-8''%s",
encodedName, encodedNameRFC5987)
}
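// Example (hypothetical filename, not part of the diff):
//
//	generateContentDisposition("测试 report.txt")
//	// => attachment; filename="%E6%B5%8B%E8%AF%95%20report.txt"; filename*=utf-8''%E6%B5%8B%E8%AF%95%20report.txt
//
// The two parameters happen to match here because urlEncode and encodeRFC5987
// escape the same characters for this input.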
// encodeRFC5987 encodes a string per RFC 5987, for non-ASCII characters in HTTP header parameters
func encodeRFC5987(s string) string {
var buf strings.Builder
for _, r := range []byte(s) {
// per RFC 5987, only letters, digits and a few special characters stay unencoded
if (r >= 'a' && r <= 'z') ||
(r >= 'A' && r <= 'Z') ||
(r >= '0' && r <= '9') ||
r == '-' || r == '.' || r == '_' || r == '~' {
buf.WriteByte(r)
} else {
// everything else is percent-encoded
fmt.Fprintf(&buf, "%%%02X", r)
}
}
return buf.String()
}
func randomString() string {
const charset = "0123456789abcdefghijklmnopqrstuvwxyz"
const length = 11 // 11-character random string
var sb strings.Builder
sb.Grow(length)
for i := 0; i < length; i++ {
sb.WriteByte(charset[rand.Intn(len(charset))])
}
return sb.String()
}


@@ -0,0 +1,177 @@
package doubao_share
import (
"context"
"errors"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/go-resty/resty/v2"
"net/http"
)
type DoubaoShare struct {
model.Storage
Addition
RootFiles []RootFileList
}
func (d *DoubaoShare) Config() driver.Config {
return config
}
func (d *DoubaoShare) GetAddition() driver.Additional {
return &d.Addition
}
func (d *DoubaoShare) Init(ctx context.Context) error {
// initialize the virtual share list
if err := d.initShareList(); err != nil {
return err
}
return nil
}
func (d *DoubaoShare) Drop(ctx context.Context) error {
return nil
}
func (d *DoubaoShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
// check for the root directory
if dir.GetID() == "" && dir.GetPath() == "/" {
return d.listRootDirectory(ctx)
}
// not the root; handle the different cases
if fo, ok := dir.(*FileObject); ok {
if fo.ShareID == "" {
// virtual directory; list its children
return d.listVirtualDirectoryContent(dir)
} else {
// a directory with a share ID: fetch the files under that share
shareId, relativePath, err := d._findShareAndPath(dir)
if err != nil {
return nil, err
}
return d.getFilesInPath(ctx, shareId, dir.GetID(), relativePath)
}
}
// fall back to the generic lookup
shareId, relativePath, err := d._findShareAndPath(dir)
if err != nil {
return nil, err
}
// fetch the files under the given path
return d.getFilesInPath(ctx, shareId, dir.GetID(), relativePath)
}
func (d *DoubaoShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var downloadUrl string
if u, ok := file.(*FileObject); ok {
switch u.NodeType {
case VideoType, AudioType:
var r GetVideoFileUrlResp
_, err := d.request("/samantha/media/get_play_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"key": u.Key,
"share_id": u.ShareID,
"node_id": file.GetID(),
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.OriginalMediaInfo.MainURL
default:
var r GetFileUrlResp
_, err := d.request("/alice/message/get_file_url", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{u.Key},
"type": FileNodeType[u.NodeType],
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.FileUrls[0].MainURL
}
// build a standard Content-Disposition header
contentDisposition := generateContentDisposition(u.Name)
return &model.Link{
URL: downloadUrl,
Header: http.Header{
"User-Agent": []string{UserAgent},
"Content-Disposition": []string{contentDisposition},
},
}, nil
}
return nil, errors.New("can't convert obj to URL")
}
func (d *DoubaoShare) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
// TODO create folder, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO move obj, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
// TODO rename obj, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO copy obj, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Remove(ctx context.Context, obj model.Obj) error {
// TODO remove obj, optional
return errs.NotImplement
}
func (d *DoubaoShare) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
// TODO upload file, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *DoubaoShare) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*DoubaoShare)(nil)


@@ -0,0 +1,32 @@
package doubao_share
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootPath
Cookie string `json:"cookie" type:"text"`
ShareIds string `json:"share_ids" type:"text" required:"true"`
}
var config = driver.Config{
Name: "DoubaoShare",
LocalSort: true,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: true,
NeedMs: false,
DefaultRoot: "/",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &DoubaoShare{}
})
}
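// Annotation (not part of the diff): share_ids accepts one share per line,
// either a bare share ID or a share URL, optionally followed by
// "|<virtual path>" (see _parseShareConfigs in util.go below). A hypothetical
// example:
//
//	https://www.doubao.com/drive/s/abc123DEF
//	xyz789|docs/work
//	qrs456|docs/personal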


@@ -0,0 +1,207 @@
package doubao_share
import (
"encoding/json"
"fmt"
"github.com/alist-org/alist/v3/internal/model"
)
type BaseResp struct {
Code int `json:"code"`
Msg string `json:"msg"`
}
type NodeInfoData struct {
Share ShareInfo `json:"share,omitempty"`
Creator CreatorInfo `json:"creator,omitempty"`
NodeList []File `json:"node_list,omitempty"`
NodeInfo File `json:"node_info,omitempty"`
Children []File `json:"children,omitempty"`
Path FilePath `json:"path,omitempty"`
NextCursor string `json:"next_cursor,omitempty"`
HasMore bool `json:"has_more,omitempty"`
}
type NodeInfoResp struct {
BaseResp
NodeInfoData `json:"data"`
}
type RootFileList struct {
ShareID string
VirtualPath string
NodeInfo NodeInfoData
Child *[]RootFileList
}
type File struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"`
Size int64 `json:"size"`
Source int `json:"source"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int64 `json:"create_time"`
UpdateTime int64 `json:"update_time"`
}
type FileObject struct {
model.Object
ShareID string
Key string
NodeID string
NodeType int
}
type ShareInfo struct {
ShareID string `json:"share_id"`
FirstNode struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"`
Size int `json:"size"`
Source int `json:"source"`
Content struct {
LinkFileType string `json:"link_file_type"`
ImageWidth int `json:"image_width"`
ImageHeight int `json:"image_height"`
AiSkillStatus int `json:"ai_skill_status"`
} `json:"content"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int `json:"create_time"`
UpdateTime int `json:"update_time"`
} `json:"first_node"`
NodeCount int `json:"node_count"`
CreateTime int `json:"create_time"`
Channel string `json:"channel"`
InfluencerType int `json:"influencer_type"`
}
type CreatorInfo struct {
EntityID string `json:"entity_id"`
UserName string `json:"user_name"`
NickName string `json:"nick_name"`
Avatar struct {
OriginURL string `json:"origin_url"`
TinyURL string `json:"tiny_url"`
URI string `json:"uri"`
} `json:"avatar"`
}
type FilePath []struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"`
Size int `json:"size"`
Source int `json:"source"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int `json:"create_time"`
UpdateTime int `json:"update_time"`
}
type GetFileUrlResp struct {
BaseResp
Data struct {
FileUrls []struct {
URI string `json:"uri"`
MainURL string `json:"main_url"`
BackURL string `json:"back_url"`
} `json:"file_urls"`
} `json:"data"`
}
type GetVideoFileUrlResp struct {
BaseResp
Data struct {
MediaType string `json:"media_type"`
MediaInfo []struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"media_info"`
OriginalMediaInfo struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"original_media_info"`
PosterURL string `json:"poster_url"`
PlayableStatus int `json:"playable_status"`
} `json:"data"`
}
type CommonResp struct {
Code int `json:"code"`
Msg string `json:"msg,omitempty"`
Message string `json:"message,omitempty"` // message carried on errors
Data json.RawMessage `json:"data,omitempty"` // raw data, parsed later
Error *struct {
Code int `json:"code"`
Message string `json:"message"`
Locale string `json:"locale"`
} `json:"error,omitempty"`
}
// IsSuccess reports whether the response succeeded
func (r *CommonResp) IsSuccess() bool {
return r.Code == 0
}
// GetError returns the error carried by the response
func (r *CommonResp) GetError() error {
if r.IsSuccess() {
return nil
}
// prefer the message field
errMsg := r.Message
if errMsg == "" {
errMsg = r.Msg
}
// if an error object exists with a detailed message, use that instead
if r.Error != nil && r.Error.Message != "" {
errMsg = r.Error.Message
}
return fmt.Errorf("[doubao] API error (code: %d): %s", r.Code, errMsg)
}
// UnmarshalData unmarshals the data field into the given value
func (r *CommonResp) UnmarshalData(v interface{}) error {
if !r.IsSuccess() {
return r.GetError()
}
if len(r.Data) == 0 {
return nil
}
return json.Unmarshal(r.Data, v)
}


@@ -0,0 +1,744 @@
package doubao_share
import (
"context"
"encoding/json"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/model"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
"net/http"
"net/url"
"path"
"regexp"
"strings"
"time"
)
const (
DirectoryType = 1
FileType = 2
LinkType = 3
ImageType = 4
PagesType = 5
VideoType = 6
AudioType = 7
MeetingMinutesType = 8
)
var FileNodeType = map[int]string{
1: "directory",
2: "file",
3: "link",
4: "image",
5: "pages",
6: "video",
7: "audio",
8: "meeting_minutes",
}
const (
BaseURL = "https://www.doubao.com"
FileDataType = "file"
ImgDataType = "image"
VideoDataType = "video"
UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
)
func (d *DoubaoShare) request(path string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
reqUrl := BaseURL + path
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Cookie": d.Cookie,
"User-Agent": UserAgent,
})
req.SetQueryParams(map[string]string{
"version_code": "20800",
"device_platform": "web",
})
if callback != nil {
callback(req)
}
var commonResp CommonResp
res, err := req.Execute(method, reqUrl)
log.Debugln(res.String())
if err != nil {
return nil, err
}
body := res.Body()
// parse as the generic response first
if err = json.Unmarshal(body, &commonResp); err != nil {
return nil, err
}
// check whether the response succeeded
if !commonResp.IsSuccess() {
return body, commonResp.GetError()
}
if resp != nil {
if err = json.Unmarshal(body, resp); err != nil {
return body, err
}
}
return body, nil
}
func (d *DoubaoShare) getFiles(dirId, nodeId, cursor string) (resp []File, err error) {
var r NodeInfoResp
var body = base.Json{
"share_id": dirId,
"node_id": nodeId,
}
// if a cursor is present, pass the cursor and page size
if cursor != "" {
body["cursor"] = cursor
body["size"] = 50
} else {
body["need_full_path"] = false
}
_, err = d.request("/samantha/aispace/share/node_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(body)
}, &r)
if err != nil {
return nil, err
}
if r.NodeInfoData.Children != nil {
resp = r.NodeInfoData.Children
}
if r.NodeInfoData.NextCursor != "-1" {
// recursively fetch the next page
nextFiles, err := d.getFiles(dirId, nodeId, r.NodeInfoData.NextCursor)
if err != nil {
return nil, err
}
resp = append(r.NodeInfoData.Children, nextFiles...)
}
return resp, err
}
func (d *DoubaoShare) getShareOverview(shareId, cursor string) (resp []File, err error) {
return d.getShareOverviewWithHistory(shareId, cursor, make(map[string]bool))
}
func (d *DoubaoShare) getShareOverviewWithHistory(shareId, cursor string, cursorHistory map[string]bool) (resp []File, err error) {
var r NodeInfoResp
var body = base.Json{
"share_id": shareId,
}
// if a cursor is present, pass the cursor and page size
if cursor != "" {
body["cursor"] = cursor
body["size"] = 50
} else {
body["need_full_path"] = false
}
_, err = d.request("/samantha/aispace/share/overview", http.MethodPost, func(req *resty.Request) {
req.SetBody(body)
}, &r)
if err != nil {
return nil, err
}
if r.NodeInfoData.NodeList != nil {
resp = r.NodeInfoData.NodeList
}
if r.NodeInfoData.NextCursor != "-1" {
// guard against repeated cursors to avoid infinite loops
if cursorHistory[r.NodeInfoData.NextCursor] {
return resp, nil
}
// record the current cursor
cursorHistory[r.NodeInfoData.NextCursor] = true
// recursively fetch the next page
nextFiles, err := d.getShareOverviewWithHistory(shareId, r.NodeInfoData.NextCursor, cursorHistory)
if err != nil {
return nil, err
}
resp = append(resp, nextFiles...)
}
return resp, nil
}
func (d *DoubaoShare) initShareList() error {
if d.Addition.ShareIds == "" {
return fmt.Errorf("share_ids is empty")
}
// parse the share configuration
shareConfigs, rootShares, err := d._parseShareConfigs()
if err != nil {
return err
}
// check for path conflicts
if err := d._detectPathConflicts(shareConfigs); err != nil {
return err
}
// build the tree structure
rootMap := d._buildTreeStructure(shareConfigs, rootShares)
// extract the top-level nodes
topLevelNodes := d._extractTopLevelNodes(rootMap, rootShares)
if len(topLevelNodes) == 0 {
return fmt.Errorf("no valid share_ids found")
}
// store the result
d.RootFiles = topLevelNodes
return nil
}
// _parseShareConfigs parses share IDs and virtual paths from the configuration
func (d *DoubaoShare) _parseShareConfigs() (map[string]string, []string, error) {
shareConfigs := make(map[string]string) // path -> share ID
rootShares := make([]string, 0) // share IDs shown at the root
lines := strings.Split(strings.TrimSpace(d.Addition.ShareIds), "\n")
if len(lines) == 0 {
return nil, nil, fmt.Errorf("no share_ids found")
}
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// parse the share ID and path
parts := strings.Split(line, "|")
var shareId, sharePath string
if len(parts) == 1 {
// share without a path: show it directly at the root
shareId = _extractShareId(parts[0])
if shareId != "" {
rootShares = append(rootShares, shareId)
}
continue
} else if len(parts) >= 2 {
shareId = _extractShareId(parts[0])
sharePath = strings.Trim(parts[1], "/")
}
if shareId == "" {
log.Warnf("[doubao_share] Invalid Share_id Format: %s", line)
continue
}
// empty paths are also shown at the root
if sharePath == "" {
rootShares = append(rootShares, shareId)
continue
}
// add to the path map
shareConfigs[sharePath] = shareId
}
return shareConfigs, rootShares, nil
}
// _detectPathConflicts checks the share paths for conflicts
func (d *DoubaoShare) _detectPathConflicts(shareConfigs map[string]string) error {
// check direct path conflicts
pathToShareIds := make(map[string][]string)
for sharePath, id := range shareConfigs {
pathToShareIds[sharePath] = append(pathToShareIds[sharePath], id)
}
for sharePath, ids := range pathToShareIds {
if len(ids) > 1 {
return fmt.Errorf("路径冲突: 路径 '%s' 被多个不同的分享ID使用: %s",
sharePath, strings.Join(ids, ", "))
}
}
// check hierarchical conflicts
for path1, id1 := range shareConfigs {
for path2, id2 := range shareConfigs {
if path1 == path2 || id1 == id2 {
continue
}
// check for prefix conflicts
if strings.HasPrefix(path2, path1+"/") || strings.HasPrefix(path1, path2+"/") {
return fmt.Errorf("路径冲突: 路径 '%s' (ID: %s) 与路径 '%s' (ID: %s) 存在层次冲突",
path1, id1, path2, id2)
}
}
}
return nil
}
// _buildTreeStructure builds the share tree
func (d *DoubaoShare) _buildTreeStructure(shareConfigs map[string]string, rootShares []string) map[string]*RootFileList {
rootMap := make(map[string]*RootFileList)
// add all share nodes
for sharePath, shareId := range shareConfigs {
children := make([]RootFileList, 0)
rootMap[sharePath] = &RootFileList{
ShareID: shareId,
VirtualPath: sharePath,
NodeInfo: NodeInfoData{},
Child: &children,
}
}
// build parent-child relations
for sharePath, node := range rootMap {
if sharePath == "" {
continue
}
pathParts := strings.Split(sharePath, "/")
if len(pathParts) > 1 {
parentPath := strings.Join(pathParts[:len(pathParts)-1], "/")
// make sure every parent path exists
_ensurePathExists(rootMap, parentPath)
// attach the current node to its parent
if parent, exists := rootMap[parentPath]; exists {
*parent.Child = append(*parent.Child, *node)
}
}
}
return rootMap
}
// _extractTopLevelNodes extracts the top-level nodes
func (d *DoubaoShare) _extractTopLevelNodes(rootMap map[string]*RootFileList, rootShares []string) []RootFileList {
var topLevelNodes []RootFileList
// add root-level shares
for _, shareId := range rootShares {
children := make([]RootFileList, 0)
topLevelNodes = append(topLevelNodes, RootFileList{
ShareID: shareId,
VirtualPath: "",
NodeInfo: NodeInfoData{},
Child: &children,
})
}
// add top-level directories
for rootPath, node := range rootMap {
if rootPath == "" {
continue
}
isTopLevel := true
pathParts := strings.Split(rootPath, "/")
if len(pathParts) > 1 {
parentPath := strings.Join(pathParts[:len(pathParts)-1], "/")
if _, exists := rootMap[parentPath]; exists {
isTopLevel = false
}
}
if isTopLevel {
topLevelNodes = append(topLevelNodes, *node)
}
}
return topLevelNodes
}
// _ensurePathExists makes sure a path exists, creating any intermediate nodes
func _ensurePathExists(rootMap map[string]*RootFileList, path string) {
if path == "" {
return
}
// nothing to do if the path already exists
if _, exists := rootMap[path]; exists {
return
}
// create the node for the current path
children := make([]RootFileList, 0)
rootMap[path] = &RootFileList{
ShareID: "",
VirtualPath: path,
NodeInfo: NodeInfoData{},
Child: &children,
}
// handle the parent path
pathParts := strings.Split(path, "/")
if len(pathParts) > 1 {
parentPath := strings.Join(pathParts[:len(pathParts)-1], "/")
// make sure the parent path exists
_ensurePathExists(rootMap, parentPath)
// attach the current node as a child of its parent
if parent, exists := rootMap[parentPath]; exists {
*parent.Child = append(*parent.Child, *rootMap[path])
}
}
}
// _extractShareId extracts a share ID from a URL or a bare ID
func _extractShareId(input string) string {
input = strings.TrimSpace(input)
if strings.HasPrefix(input, "http") {
regex := regexp.MustCompile(`/drive/s/([a-zA-Z0-9]+)`)
if matches := regex.FindStringSubmatch(input); len(matches) > 1 {
return matches[1]
}
return ""
}
return input // already a bare ID
}
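// Examples with illustrative IDs (annotation, not part of the diff):
//
//	_extractShareId("https://www.doubao.com/drive/s/abc123") // => "abc123"
//	_extractShareId("abc123")                                // => "abc123"
//	_extractShareId("https://www.doubao.com/drive/x")        // => "" (no /drive/s/ segment)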
// _findRootFileByShareID finds the config entry for the given ShareID
func _findRootFileByShareID(rootFiles []RootFileList, shareID string) *RootFileList {
for i, rf := range rootFiles {
if rf.ShareID == shareID {
return &rootFiles[i]
}
if rf.Child != nil && len(*rf.Child) > 0 {
if found := _findRootFileByShareID(*rf.Child, shareID); found != nil {
return found
}
}
}
return nil
}
// _findNodeByPath finds the node at the given path
func _findNodeByPath(rootFiles []RootFileList, path string) *RootFileList {
for i, rf := range rootFiles {
if rf.VirtualPath == path {
return &rootFiles[i]
}
if rf.Child != nil && len(*rf.Child) > 0 {
if found := _findNodeByPath(*rf.Child, path); found != nil {
return found
}
}
}
return nil
}
// _findShareByPath resolves a path to its share and the relative path within it
func _findShareByPath(rootFiles []RootFileList, path string) (*RootFileList, string) {
// exact match or sub-path match
for i, rf := range rootFiles {
if rf.VirtualPath == path {
return &rootFiles[i], ""
}
if rf.VirtualPath != "" && strings.HasPrefix(path, rf.VirtualPath+"/") {
relPath := strings.TrimPrefix(path, rf.VirtualPath+"/")
// check children first
if rf.Child != nil && len(*rf.Child) > 0 {
if child, childPath := _findShareByPath(*rf.Child, path); child != nil {
return child, childPath
}
}
return &rootFiles[i], relPath
}
// recurse into children
if rf.Child != nil && len(*rf.Child) > 0 {
if child, childPath := _findShareByPath(*rf.Child, path); child != nil {
return child, childPath
}
}
}
// check root-level shares
for i, rf := range rootFiles {
if rf.VirtualPath == "" && rf.ShareID != "" {
parts := strings.SplitN(path, "/", 2)
if len(parts) > 0 && parts[0] == rf.ShareID {
if len(parts) > 1 {
return &rootFiles[i], parts[1]
}
return &rootFiles[i], ""
}
}
}
return nil, ""
}
// _findShareAndPath resolves the ShareID and relative path for the given directory
func (d *DoubaoShare) _findShareAndPath(dir model.Obj) (string, string, error) {
dirPath := dir.GetPath()
// for the root, return empty values to signal that all shares should be listed
if dirPath == "/" || dirPath == "" {
return "", "", nil
}
// check for a FileObject and take its ShareID
if fo, ok := dir.(*FileObject); ok && fo.ShareID != "" {
// use the ShareID stored on the object
// compute the relative path (strip the leading slash)
relativePath := strings.TrimPrefix(dirPath, "/")
// recursively look up the matching RootFile
found := _findRootFileByShareID(d.RootFiles, fo.ShareID)
if found != nil {
if found.VirtualPath != "" {
// if this share is configured with a path prefix, account for it in the relative path
if strings.HasPrefix(relativePath, found.VirtualPath) {
return fo.ShareID, strings.TrimPrefix(relativePath, found.VirtualPath+"/"), nil
}
}
return fo.ShareID, relativePath, nil
}
// no matching RootFile config; still use the object's ShareID
return fo.ShareID, relativePath, nil
}
// strip the leading slash
cleanPath := strings.TrimPrefix(dirPath, "/")
// first check for a directly matching root-level share
for _, rootFile := range d.RootFiles {
if rootFile.VirtualPath == "" && rootFile.ShareID != "" {
// match against the first path segment
parts := strings.SplitN(cleanPath, "/", 2)
if len(parts) > 0 && parts[0] == rootFile.ShareID {
if len(parts) > 1 {
return rootFile.ShareID, parts[1], nil
}
return rootFile.ShareID, "", nil
}
}
}
// look up the share or virtual directory matching this path
share, relPath := _findShareByPath(d.RootFiles, cleanPath)
if share != nil {
return share.ShareID, relPath, nil
}
log.Warnf("[doubao_share] No matching share path found: %s", dirPath)
return "", "", fmt.Errorf("no matching share path found: %s", dirPath)
}
// convertToFileObject converts a File into a FileObject
func (d *DoubaoShare) convertToFileObject(file File, shareId string, relativePath string) *FileObject {
// build the file object
obj := &FileObject{
Object: model.Object{
ID: file.ID,
Name: file.Name,
Size: file.Size,
Modified: time.Unix(file.UpdateTime, 0),
Ctime: time.Unix(file.CreateTime, 0),
IsFolder: file.NodeType == DirectoryType,
Path: path.Join(relativePath, file.Name),
},
ShareID: shareId,
Key: file.Key,
NodeID: file.ID,
NodeType: file.NodeType,
}
return obj
}
// getFilesInPath fetches the files under the given share and path
func (d *DoubaoShare) getFilesInPath(ctx context.Context, shareId, nodeId, relativePath string) ([]model.Obj, error) {
var (
files []File
err error
)
// when nodeId is empty, call the overview endpoint for the share-link info
if nodeId == "" {
files, err = d.getShareOverview(shareId, "")
if err != nil {
return nil, fmt.Errorf("failed to get share link information: %w", err)
}
result := make([]model.Obj, 0, len(files))
for _, file := range files {
result = append(result, d.convertToFileObject(file, shareId, "/"))
}
return result, nil
} else {
files, err = d.getFiles(shareId, nodeId, "")
if err != nil {
return nil, fmt.Errorf("failed to get share file: %w", err)
}
result := make([]model.Obj, 0, len(files))
for _, file := range files {
result = append(result, d.convertToFileObject(file, shareId, path.Join("/", relativePath)))
}
return result, nil
}
}
// listRootDirectory renders the contents of the root directory
func (d *DoubaoShare) listRootDirectory(ctx context.Context) ([]model.Obj, error) {
objects := make([]model.Obj, 0)
// two groups: share contents shown directly vs. virtual directories
var directShareIDs []string
addedDirs := make(map[string]bool)
// walk all root nodes
for _, rootFile := range d.RootFiles {
if rootFile.VirtualPath == "" && rootFile.ShareID != "" {
// share without a path: record the ShareID so its contents can be fetched below
directShareIDs = append(directShareIDs, rootFile.ShareID)
} else {
// share with a path: show its first-level directory
parts := strings.SplitN(rootFile.VirtualPath, "/", 2)
firstLevel := parts[0]
// avoid adding duplicate directories with the same name
if _, exists := addedDirs[firstLevel]; exists {
continue
}
// create the virtual directory object
obj := &FileObject{
Object: model.Object{
ID: "",
Name: firstLevel,
Modified: time.Now(),
Ctime: time.Now(),
IsFolder: true,
Path: path.Join("/", firstLevel),
},
ShareID: rootFile.ShareID,
Key: "",
NodeID: "",
NodeType: DirectoryType,
}
objects = append(objects, obj)
addedDirs[firstLevel] = true
}
}
// fetch the contents of directly shown shares
for _, shareID := range directShareIDs {
shareFiles, err := d.getFilesInPath(ctx, shareID, "", "")
if err != nil {
log.Warnf("[doubao_share] Failed to get list of files in share %s: %s", shareID, err)
continue
}
objects = append(objects, shareFiles...)
}
return objects, nil
}
// listVirtualDirectoryContent lists the contents of a virtual directory
func (d *DoubaoShare) listVirtualDirectoryContent(dir model.Obj) ([]model.Obj, error) {
dirPath := strings.TrimPrefix(dir.GetPath(), "/")
objects := make([]model.Obj, 0)
// recursively find the node for this path
node := _findNodeByPath(d.RootFiles, dirPath)
if node != nil && node.Child != nil {
// show all children of this node
for _, child := range *node.Child {
// display name is the last path segment
displayName := child.VirtualPath
if child.VirtualPath != "" {
parts := strings.Split(child.VirtualPath, "/")
displayName = parts[len(parts)-1]
} else if child.ShareID != "" {
displayName = child.ShareID
}
obj := &FileObject{
Object: model.Object{
ID: "",
Name: displayName,
Modified: time.Now(),
Ctime: time.Now(),
IsFolder: true,
Path: path.Join("/", child.VirtualPath),
},
ShareID: child.ShareID,
Key: "",
NodeID: "",
NodeType: DirectoryType,
}
objects = append(objects, obj)
}
}
return objects, nil
}
// generateContentDisposition builds an RFC 5987-compliant Content-Disposition header
func generateContentDisposition(filename string) string {
// encoded form for the filename parameter
encodedName := urlEncode(filename)
// RFC 5987-encoded form for the filename* parameter
encodedNameRFC5987 := encodeRFC5987(filename)
return fmt.Sprintf("attachment; filename=\"%s\"; filename*=utf-8''%s",
encodedName, encodedNameRFC5987)
}
// encodeRFC5987 encodes a string per RFC 5987, for non-ASCII characters in HTTP header parameters
func encodeRFC5987(s string) string {
var buf strings.Builder
for _, r := range []byte(s) {
// per RFC 5987, only letters, digits and a few special characters stay unencoded
if (r >= 'a' && r <= 'z') ||
(r >= 'A' && r <= 'Z') ||
(r >= '0' && r <= '9') ||
r == '-' || r == '.' || r == '_' || r == '~' {
buf.WriteByte(r)
} else {
// everything else is percent-encoded
fmt.Fprintf(&buf, "%%%02X", r)
}
}
return buf.String()
}
func urlEncode(s string) string {
s = url.QueryEscape(s)
s = strings.ReplaceAll(s, "+", "%20")
return s
}


@@ -191,7 +191,7 @@ func (d *Dropbox) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
}
url := d.contentBase + "/2/files/upload_session/append_v2"
-reader := io.LimitReader(stream, PartSize)
+reader := driver.NewLimitedUploadStream(ctx, io.LimitReader(stream, PartSize))
req, err := http.NewRequest(http.MethodPost, url, reader)
if err != nil {
log.Errorf("failed to update file when append to upload session, err: %+v", err)
@@ -219,13 +219,8 @@ func (d *Dropbox) Put(ctx context.Context, dstDir model.Obj, stream model.FileSt
return err
}
_ = res.Body.Close()
if count > 0 {
up(float64(i+1) * 100 / float64(count))
}
offset += byteSize
}
// 3.finish
toPath := dstDir.GetPath() + "/" + stream.GetName()


@@ -3,6 +3,7 @@ package febbox
import (
"encoding/json"
"errors"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/op"
"github.com/go-resty/resty/v2"
@@ -135,6 +136,9 @@ func (d *FebBox) getDownloadLink(id string, ip string) (string, error) {
if err = json.Unmarshal(res, &fileDownloadResp); err != nil {
return "", err
}
+if len(fileDownloadResp.Data) == 0 {
+return "", fmt.Errorf("can not get download link, code:%d, msg:%s", fileDownloadResp.Code, fileDownloadResp.Msg)
+}
return fileDownloadResp.Data[0].DownloadURL, nil
}


@@ -114,13 +114,15 @@ func (d *FTP) Remove(ctx context.Context, obj model.Obj) error {
}
}
-func (d *FTP) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
+func (d *FTP) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
if err := d.login(); err != nil {
return err
}
// TODO: support cancel
-path := stdpath.Join(dstDir.GetPath(), stream.GetName())
-return d.conn.Stor(encode(path, d.Encoding), stream)
+path := stdpath.Join(dstDir.GetPath(), s.GetName())
+return d.conn.Stor(encode(path, d.Encoding), driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
+Reader: s,
+UpdateProgress: up,
+}))
}
var _ driver.Driver = (*FTP)(nil)


@@ -3,7 +3,6 @@ package github
import (
"context"
"encoding/base64"
"errors"
"fmt"
"io"
"net/http"
@@ -12,12 +11,14 @@ import (
"sync"
"text/template"
"github.com/ProtonMail/go-crypto/openpgp"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
@@ -33,6 +34,7 @@ type Github struct {
moveMsgTmpl *template.Template
isOnBranch bool
commitMutex sync.Mutex
+pgpEntity *openpgp.Entity
}
func (d *Github) Config() driver.Config {
@@ -84,10 +86,13 @@ func (d *Github) Init(ctx context.Context) error {
}
d.client = base.NewRestyClient().
SetHeader("Accept", "application/vnd.github.object+json").
SetHeader("Authorization", "Bearer "+d.Token).
SetHeader("X-GitHub-Api-Version", "2022-11-28").
SetLogger(log.StandardLogger()).
SetDebug(false)
+token := strings.TrimSpace(d.Token)
+if token != "" {
+d.client = d.client.SetHeader("Authorization", "Bearer "+token)
+}
if d.Ref == "" {
repo, err := d.getRepo()
if err != nil {
@@ -99,6 +104,26 @@ func (d *Github) Init(ctx context.Context) error {
_, err = d.getBranchHead()
d.isOnBranch = err == nil
}
if d.GPGPrivateKey != "" {
if d.CommitterName == "" || d.AuthorName == "" {
user, e := d.getAuthenticatedUser()
if e != nil {
return e
}
if d.CommitterName == "" {
d.CommitterName = user.Name
d.CommitterEmail = user.Email
}
if d.AuthorName == "" {
d.AuthorName = user.Name
d.AuthorEmail = user.Email
}
}
d.pgpEntity, err = loadPrivateKey(d.GPGPrivateKey, d.GPGKeyPassphrase)
if err != nil {
return err
}
}
return nil
}
@@ -148,8 +173,13 @@ func (d *Github) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
if obj.Type == "submodule" {
return nil, errors.New("cannot download a submodule")
}
+url := obj.DownloadURL
+ghProxy := strings.TrimSpace(d.Addition.GitHubProxy)
+if ghProxy != "" {
+url = strings.Replace(url, "https://raw.githubusercontent.com", ghProxy, 1)
+}
return &model.Link{
-URL: obj.DownloadURL,
+URL: url,
}, nil
}
@@ -166,10 +196,39 @@ func (d *Github) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
if parent.Entries == nil {
return errs.NotFolder
}
// if parent folder contains .gitkeep only, mark it and delete .gitkeep later
gitKeepSha := ""
subDirSha, err := d.newTree("", []interface{}{
map[string]string{
"path": ".gitkeep",
"mode": "100644",
"type": "blob",
"content": "",
},
})
if err != nil {
return err
}
newTree := make([]interface{}, 0, 2)
newTree = append(newTree, TreeObjReq{
Path: dirName,
Mode: "040000",
Type: "tree",
Sha: subDirSha,
})
if len(parent.Entries) == 1 && parent.Entries[0].Name == ".gitkeep" {
gitKeepSha = parent.Entries[0].Sha
newTree = append(newTree, TreeObjReq{
Path: ".gitkeep",
Mode: "100644",
Type: "blob",
Sha: nil,
})
}
newSha, err := d.newTree(parent.Sha, newTree)
if err != nil {
return err
}
rootSha, err := d.renewParentTrees(parentDir.GetPath(), parent.Sha, newSha, "/")
if err != nil {
return err
}
commitMessage, err := getMessage(d.mkdirMsgTmpl, &MessageTemplateVars{
@@ -182,13 +241,7 @@ func (d *Github) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
if err != nil {
return err
}
if err = d.createGitKeep(stdpath.Join(parentDir.GetPath(), dirName), commitMessage); err != nil {
return err
}
if gitKeepSha != "" {
err = d.delete(stdpath.Join(parentDir.GetPath(), ".gitkeep"), gitKeepSha, commitMessage)
}
return err
return d.commit(commitMessage, rootSha)
}
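
The rewritten MakeDir replaces two Contents-API round trips (create a .gitkeep, then delete the stale one in a second commit) with a single commit assembled through the Git data API: build a one-entry tree holding an empty .gitkeep blob, splice it into the parent tree as a 040000 entry, mark any stale .gitkeep deleted by sending a null sha, rebuild each ancestor tree up to the root, then commit once. Schematically, using the helpers visible in this diff (error handling elided):

// 1. tree containing only an empty .gitkeep blob
subDirSha, _ := d.newTree("", []interface{}{
	map[string]string{"path": ".gitkeep", "mode": "100644", "type": "blob", "content": ""},
})
// 2. splice the new directory into the parent tree
newSha, _ := d.newTree(parent.Sha, []interface{}{
	TreeObjReq{Path: dirName, Mode: "040000", Type: "tree", Sha: subDirSha},
	// TreeObjReq{Path: ".gitkeep", Mode: "100644", Type: "blob", Sha: nil}
	// would be appended here: a null sha deletes the stale marker.
})
// 3. rebuild ancestors up to the root, then 4. one commit for the whole operation
rootSha, _ := d.renewParentTrees(parentDir.GetPath(), parent.Sha, newSha, "/")
_ = d.commit(commitMessage, rootSha)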
func (d *Github) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
@@ -631,33 +684,15 @@ func (d *Github) get(path string) (*Object, error) {
return &resp, err
}
func (d *Github) createGitKeep(path, message string) error {
body := map[string]interface{}{
"message": message,
"content": "",
"branch": d.Ref,
}
d.addCommitterAndAuthor(&body)
res, err := d.client.R().SetBody(body).Put(d.getContentApiUrl(stdpath.Join(path, ".gitkeep")))
if err != nil {
return err
}
if res.StatusCode() != 200 && res.StatusCode() != 201 {
return toErr(res)
}
return nil
}
func (d *Github) putBlob(ctx context.Context, stream model.FileStreamer, up driver.UpdateProgress) (string, error) {
func (d *Github) putBlob(ctx context.Context, s model.FileStreamer, up driver.UpdateProgress) (string, error) {
beforeContent := "{\"encoding\":\"base64\",\"content\":\""
afterContent := "\"}"
length := int64(len(beforeContent)) + calculateBase64Length(stream.GetSize()) + int64(len(afterContent))
length := int64(len(beforeContent)) + calculateBase64Length(s.GetSize()) + int64(len(afterContent))
beforeContentReader := strings.NewReader(beforeContent)
contentReader, contentWriter := io.Pipe()
go func() {
encoder := base64.NewEncoder(base64.StdEncoding, contentWriter)
if _, err := utils.CopyWithBuffer(encoder, stream); err != nil {
if _, err := utils.CopyWithBuffer(encoder, s); err != nil {
_ = contentWriter.CloseWithError(err)
return
}
@@ -667,23 +702,29 @@ func (d *Github) putBlob(ctx context.Context, stream model.FileStreamer, up driv
afterContentReader := strings.NewReader(afterContent)
req, err := http.NewRequestWithContext(ctx, http.MethodPost,
fmt.Sprintf("https://api.github.com/repos/%s/%s/git/blobs", d.Owner, d.Repo),
&ReaderWithProgress{
driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: &driver.SimpleReaderWithSize{
Reader: io.MultiReader(beforeContentReader, contentReader, afterContentReader),
Length: length,
Progress: up,
})
Size: length,
},
UpdateProgress: up,
}))
if err != nil {
return "", err
}
req.Header.Set("Accept", "application/vnd.github+json")
req.Header.Set("Authorization", "Bearer "+d.Token)
req.Header.Set("X-GitHub-Api-Version", "2022-11-28")
token := strings.TrimSpace(d.Token)
if token != "" {
req.Header.Set("Authorization", "Bearer "+token)
}
req.ContentLength = length
res, err := base.HttpClient.Do(req)
if err != nil {
return "", err
}
defer res.Body.Close()
resBody, err := io.ReadAll(res.Body)
if err != nil {
return "", err
@@ -703,23 +744,6 @@ func (d *Github) putBlob(ctx context.Context, stream model.FileStreamer, up driv
return resp.Sha, nil
}
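
The request body is streamed, so its Content-Length must be known up front: the JSON prefix and suffix have fixed sizes, and standard padded base64 encodes n bytes into exactly 4 * ceil(n/3) characters (e.g. 10 bytes encode to 16). calculateBase64Length is not shown in this diff; under that assumption it could simply be:

// calculateBase64Length returns the padded base64 size of n input bytes:
// 4 output bytes per 3 input bytes, rounded up. (Assumed implementation;
// the helper itself is not part of this diff.)
func calculateBase64Length(n int64) int64 {
	return (n + 2) / 3 * 4
}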
func (d *Github) delete(path, sha, message string) error {
body := map[string]interface{}{
"message": message,
"sha": sha,
"branch": d.Ref,
}
d.addCommitterAndAuthor(&body)
res, err := d.client.R().SetBody(body).Delete(d.getContentApiUrl(path))
if err != nil {
return err
}
if res.StatusCode() != 200 {
return toErr(res)
}
return nil
}
func (d *Github) renewParentTrees(path, prevSha, curSha, until string) (string, error) {
for path != until {
path = stdpath.Dir(path)
@@ -781,11 +805,11 @@ func (d *Github) getTreeDirectly(path string) (*TreeResp, string, error) {
}
func (d *Github) newTree(baseSha string, tree []interface{}) (string, error) {
res, err := d.client.R().
SetBody(&TreeReq{
BaseTree: baseSha,
Trees: tree,
}).
body := &TreeReq{Trees: tree}
if baseSha != "" {
body.BaseTree = baseSha
}
res, err := d.client.R().SetBody(body).
Post(fmt.Sprintf("https://api.github.com/repos/%s/%s/git/trees", d.Owner, d.Repo))
if err != nil {
return "", err
@@ -808,6 +832,13 @@ func (d *Github) commit(message, treeSha string) error {
"parents": []string{oldCommit},
}
d.addCommitterAndAuthor(&body)
if d.pgpEntity != nil {
signature, e := signCommit(&body, d.pgpEntity)
if e != nil {
return e
}
body["signature"] = signature
}
res, err := d.client.R().SetBody(body).Post(fmt.Sprintf("https://api.github.com/repos/%s/%s/git/commits", d.Owner, d.Repo))
if err != nil {
return err
@@ -911,6 +942,21 @@ func (d *Github) getRepo() (*RepoResp, error) {
return &resp, nil
}
func (d *Github) getAuthenticatedUser() (*UserResp, error) {
res, err := d.client.R().Get("https://api.github.com/user")
if err != nil {
return nil, err
}
if res.StatusCode() != 200 {
return nil, toErr(res)
}
resp := &UserResp{}
if err = utils.Json.Unmarshal(res.Body(), resp); err != nil {
return nil, err
}
return resp, nil
}
func (d *Github) addCommitterAndAuthor(m *map[string]interface{}) {
if d.CommitterName != "" {
committer := map[string]string{

View File

@@ -11,6 +11,9 @@ type Addition struct {
Owner string `json:"owner" type:"string" required:"true"`
Repo string `json:"repo" type:"string" required:"true"`
Ref string `json:"ref" type:"string" help:"A branch, a tag or a commit SHA, main branch by default."`
GitHubProxy string `json:"gh_proxy" type:"string" help:"GitHub proxy, e.g. https://ghproxy.net/raw.githubusercontent.com or https://gh-proxy.com/raw.githubusercontent.com"`
GPGPrivateKey string `json:"gpg_private_key" type:"text"`
GPGKeyPassphrase string `json:"gpg_key_passphrase" type:"string"`
CommitterName string `json:"committer_name" type:"string"`
CommitterEmail string `json:"committer_email" type:"string"`
AuthorName string `json:"author_name" type:"string"`

View File

@@ -79,7 +79,7 @@ type TreeResp struct {
}
type TreeReq struct {
BaseTree string `json:"base_tree"`
BaseTree interface{} `json:"base_tree,omitempty"`
Trees []interface{} `json:"tree"`
}
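
Switching BaseTree to interface{} with omitempty makes an unset base tree disappear from the payload entirely, which is what lets newTree create the fresh .gitkeep subtree with no base while still sending base_tree when extending an existing tree; the old string field would have serialized an empty value as "base_tree":"". Quick illustration with encoding/json:

b, _ := json.Marshal(&TreeReq{Trees: []interface{}{}})
fmt.Println(string(b)) // {"tree":[]}   (base_tree omitted when nil)

b, _ = json.Marshal(&TreeReq{BaseTree: "abc123", Trees: []interface{}{}})
fmt.Println(string(b)) // {"base_tree":"abc123","tree":[]}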
@@ -100,3 +100,8 @@ type UpdateRefReq struct {
type RepoResp struct {
DefaultBranch string `json:"default_branch"`
}
type UserResp struct {
Name string `json:"name"`
Email string `json:"email"`
}

View File

@@ -1,32 +1,21 @@
package github
import (
"bytes"
"context"
"errors"
"fmt"
"strings"
"text/template"
"time"
"github.com/ProtonMail/go-crypto/openpgp"
"github.com/ProtonMail/go-crypto/openpgp/armor"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"io"
"math"
"strings"
"text/template"
)
type ReaderWithProgress struct {
Reader io.Reader
Length int64
Progress func(percentage float64)
offset int64
}
func (r *ReaderWithProgress) Read(p []byte) (int, error) {
n, err := r.Reader.Read(p)
r.offset += int64(n)
r.Progress(math.Min(100.0, float64(r.offset)/float64(r.Length)*100.0))
return n, err
}
type MessageTemplateVars struct {
UserName string
ObjName string
@@ -113,3 +102,65 @@ func getUsername(ctx context.Context) string {
}
return user.Username
}
func loadPrivateKey(key, passphrase string) (*openpgp.Entity, error) {
entityList, err := openpgp.ReadArmoredKeyRing(strings.NewReader(key))
if err != nil {
return nil, err
}
if len(entityList) < 1 {
return nil, fmt.Errorf("no keys found in key ring")
}
entity := entityList[0]
pass := []byte(passphrase)
if entity.PrivateKey != nil && entity.PrivateKey.Encrypted {
if err = entity.PrivateKey.Decrypt(pass); err != nil {
return nil, fmt.Errorf("password incorrect: %+v", err)
}
}
for _, subKey := range entity.Subkeys {
if subKey.PrivateKey != nil && subKey.PrivateKey.Encrypted {
if err = subKey.PrivateKey.Decrypt(pass); err != nil {
return nil, fmt.Errorf("password incorrect: %+v", err)
}
}
}
return entity, nil
}
func signCommit(m *map[string]interface{}, entity *openpgp.Entity) (string, error) {
var commit strings.Builder
commit.WriteString(fmt.Sprintf("tree %s\n", (*m)["tree"].(string)))
parents := (*m)["parents"].([]string)
for _, p := range parents {
commit.WriteString(fmt.Sprintf("parent %s\n", p))
}
now := time.Now()
_, offset := now.Zone()
hour := offset / 3600
author := (*m)["author"].(map[string]string)
commit.WriteString(fmt.Sprintf("author %s <%s> %d %+03d00\n", author["name"], author["email"], now.Unix(), hour))
author["date"] = now.Format(time.RFC3339)
committer := (*m)["committer"].(map[string]string)
commit.WriteString(fmt.Sprintf("committer %s <%s> %d %+03d00\n", committer["name"], committer["email"], now.Unix(), hour))
committer["date"] = now.Format(time.RFC3339)
commit.WriteString(fmt.Sprintf("\n%s", (*m)["message"].(string)))
data := commit.String()
var sigBuffer bytes.Buffer
err := openpgp.DetachSign(&sigBuffer, entity, strings.NewReader(data), nil)
if err != nil {
return "", fmt.Errorf("signing failed: %v", err)
}
var armoredSig bytes.Buffer
armorWriter, err := armor.Encode(&armoredSig, "PGP SIGNATURE", nil)
if err != nil {
return "", err
}
if _, err = utils.CopyWithBuffer(armorWriter, &sigBuffer); err != nil {
return "", err
}
_ = armorWriter.Close()
return armoredSig.String(), nil
}
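
signCommit reconstructs, byte for byte, the raw commit object that Git itself would hash: a tree line, one parent line per parent, author and committer lines carrying a Unix timestamp plus UTC offset, a blank line, then the message. The detached armored signature over exactly that text is what goes into the commit request's signature field, and the author/committer date fields are set to the matching RFC 3339 time so that GitHub verifies the same payload it stores. The signed text looks like this (all values hypothetical):

// The exact byte layout produced and signed above:
const rawCommit = `tree 9c68fdcecbc6954b98e87b747211246f0d9f9385
parent ab0d0ad321f80eda9d23f78dcd8e963ef2e21524
author Alice <alice@example.com> 1735689600 +0800
committer Alice <alice@example.com> 1735689600 +0800

feat: create directory docs`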

View File

@@ -4,9 +4,8 @@ import (
"context"
"fmt"
"net/http"
"time"
"strings"
"sync"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
@@ -18,7 +17,7 @@ type GithubReleases struct {
model.Storage
Addition
releases []Release
points []MountPoint
}
func (d *GithubReleases) Config() driver.Config {
@@ -30,86 +29,138 @@ func (d *GithubReleases) GetAddition() driver.Additional {
}
func (d *GithubReleases) Init(ctx context.Context) error {
SetHeader(d.Addition.Token)
repos, err := ParseRepos(d.Addition.RepoStructure, d.Addition.ShowAllVersion)
if err != nil {
return err
}
d.releases = repos
d.ParseRepos(d.Addition.RepoStructure)
return nil
}
func (d *GithubReleases) Drop(ctx context.Context) error {
ClearCache()
return nil
}
// processPoint builds the file list for a single mount point
func (d *GithubReleases) processPoint(point *MountPoint, path string, args model.ListArgs) []File {
var pointFiles []File
if !d.Addition.ShowAllVersion { // latest
point.RequestLatestRelease(d.GetRequest, args.Refresh)
pointFiles = d.processLatestVersion(point, path)
} else { // all versions
point.RequestReleases(d.GetRequest, args.Refresh)
pointFiles = d.processAllVersions(point, path)
}
return pointFiles
}
// processLatestVersion handles the latest-release-only view
func (d *GithubReleases) processLatestVersion(point *MountPoint, path string) []File {
var pointFiles []File
if point.Point == path { // same as the repo mount path
pointFiles = append(pointFiles, point.GetLatestRelease()...)
if d.Addition.ShowReadme {
files := point.GetOtherFile(d.GetRequest, false)
pointFiles = append(pointFiles, files...)
}
} else if strings.HasPrefix(point.Point, path) { // an ancestor directory of the repo mount
nextDir := GetNextDir(point.Point, path)
if nextDir != "" {
dirFile := File{
Path: path + "/" + nextDir,
FileName: nextDir,
Size: point.GetLatestSize(),
UpdateAt: point.Release.PublishedAt,
CreateAt: point.Release.CreatedAt,
Type: "dir",
Url: "",
}
pointFiles = append(pointFiles, dirFile)
}
}
return pointFiles
}
// processAllVersions handles the all-versions view
func (d *GithubReleases) processAllVersions(point *MountPoint, path string) []File {
var pointFiles []File
if point.Point == path { // same as the repo mount path
pointFiles = append(pointFiles, point.GetAllVersion()...)
if d.Addition.ShowReadme {
files := point.GetOtherFile(d.GetRequest, false)
pointFiles = append(pointFiles, files...)
}
} else if strings.HasPrefix(point.Point, path) { // an ancestor directory of the repo mount
nextDir := GetNextDir(point.Point, path)
if nextDir != "" {
dirFile := File{
FileName: nextDir,
Path: path + "/" + nextDir,
Size: point.GetAllVersionSize(),
UpdateAt: (*point.Releases)[0].PublishedAt,
CreateAt: (*point.Releases)[0].CreatedAt,
Type: "dir",
Url: "",
}
pointFiles = append(pointFiles, dirFile)
}
} else if strings.HasPrefix(path, point.Point) { // a subdirectory of the repo mount
tagName := GetNextDir(path, point.Point)
if tagName != "" {
pointFiles = append(pointFiles, point.GetReleaseByTagName(tagName)...)
}
}
return pointFiles
}
// mergeFiles merges file lists, deduplicating directories
func (d *GithubReleases) mergeFiles(files *[]File, newFiles []File) {
for _, newFile := range newFiles {
if newFile.Type == "dir" {
hasSameDir := false
for index := range *files {
if (*files)[index].GetName() == newFile.GetName() && (*files)[index].Type == "dir" {
hasSameDir = true
(*files)[index].Size += newFile.Size
break
}
}
if !hasSameDir {
*files = append(*files, newFile)
}
} else {
*files = append(*files, newFile)
}
}
}
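
mergeFiles exists because several mount points can project a directory of the same name into one listing: duplicate directories collapse into a single entry whose sizes are summed, while plain files are always appended. For instance (values hypothetical):

files := []File{{FileName: "v1.0.0", Type: "dir", Size: 40}}
d.mergeFiles(&files, []File{
	{FileName: "v1.0.0", Type: "dir", Size: 100}, // same dir name: merged, sizes added
	{FileName: "app.zip", Type: "file", Size: 7}, // file: appended as-is
})
// files now holds two entries: {v1.0.0 dir 140} and {app.zip file 7}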
func (d *GithubReleases) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
files := make([]File, 0)
path := fmt.Sprintf("/%s", strings.Trim(dir.GetPath(), "/"))
for _, repo := range d.releases {
if repo.Path == path { // same as the repo path
resp, err := GetRepoReleaseInfo(repo.RepoName, repo.ID, path, d.Storage.CacheExpiration)
if err != nil {
return nil, err
}
files = append(files, resp.Files...)
if d.Addition.ConcurrentRequests && d.Addition.Token != "" { // concurrent fetch
var mu sync.Mutex
var wg sync.WaitGroup
if d.Addition.ShowReadme {
resp, err := GetGithubOtherFile(repo.RepoName, path, d.Storage.CacheExpiration)
if err != nil {
return nil, err
}
files = append(files, *resp...)
}
for i := range d.points {
wg.Add(1)
go func(point *MountPoint) {
defer wg.Done()
pointFiles := d.processPoint(point, path, args)
} else if strings.HasPrefix(repo.Path, path) { // the repo path is a subdirectory of this directory
nextDir := GetNextDir(repo.Path, path)
if nextDir == "" {
continue
}
if d.Addition.ShowAllVersion {
files = append(files, File{
FileName: nextDir,
Size: 0,
CreateAt: time.Time{},
UpdateAt: time.Time{},
Url: "",
Type: "dir",
Path: fmt.Sprintf("%s/%s", path, nextDir),
})
continue
}
repo, _ := GetRepoReleaseInfo(repo.RepoName, repo.Version, path, d.Storage.CacheExpiration)
hasSameDir := false
for index, file := range files {
if file.FileName == nextDir {
hasSameDir = true
files[index].Size += repo.Size
files[index].UpdateAt = func(a time.Time, b time.Time) time.Time {
if a.After(b) {
return a
}
return b
}(files[index].UpdateAt, repo.UpdateAt)
break
}
}
if !hasSameDir {
files = append(files, File{
FileName: nextDir,
Size: repo.Size,
CreateAt: repo.CreateAt,
UpdateAt: repo.UpdateAt,
Url: repo.Url,
Type: "dir",
Path: fmt.Sprintf("%s/%s", path, nextDir),
})
mu.Lock()
d.mergeFiles(&files, pointFiles)
mu.Unlock()
}(&d.points[i])
}
wg.Wait()
} else { // sequential fetch
for i := range d.points {
point := &d.points[i]
pointFiles := d.processPoint(point, path, args)
d.mergeFiles(&files, pointFiles)
}
}
@@ -119,35 +170,41 @@ func (d *GithubReleases) List(ctx context.Context, dir model.Obj, args model.Lis
}
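
The concurrent branch in the List hunk above is a plain fan-out/fan-in: one goroutine per mount point, a WaitGroup to join them, and a mutex around the shared files slice because mergeFiles mutates it. Gating concurrency on a configured token makes sense since anonymous GitHub API calls share a far smaller rate limit than authenticated ones. A stdlib-only reduction of the pattern:

package main

import (
	"fmt"
	"sync"
)

func main() {
	points := []string{"alistGo/alist", "AlistGo/alist-web"}
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results []string
	)
	for i := range points {
		wg.Add(1)
		go func(p string) { // pass by value; don't share the loop variable
			defer wg.Done()
			files := []string{p + "/README.md"} // stand-in for processPoint
			mu.Lock()
			results = append(results, files...) // stand-in for mergeFiles
			mu.Unlock()
		}(points[i])
	}
	wg.Wait()
	fmt.Println(len(results), "entries")
}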
func (d *GithubReleases) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
url := file.GetID()
gh_proxy := strings.TrimSpace(d.Addition.GitHubProxy)
if gh_proxy != "" {
url = strings.Replace(url, "https://github.com", gh_proxy, 1)
}
link := model.Link{
URL: file.GetID(),
URL: url,
Header: http.Header{},
}
return &link, nil
}
func (d *GithubReleases) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
// TODO create folder, optional
return nil, errs.NotImplement
}
func (d *GithubReleases) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO move obj, optional
return nil, errs.NotImplement
}
func (d *GithubReleases) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
// TODO rename obj, optional
return nil, errs.NotImplement
}
func (d *GithubReleases) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO copy obj, optional
return nil, errs.NotImplement
}
func (d *GithubReleases) Remove(ctx context.Context, obj model.Obj) error {
// TODO remove obj, optional
return errs.NotImplement
}
func (d *GithubReleases) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
return nil, errs.NotImplement
}
var _ driver.Driver = (*GithubReleases)(nil)

View File

@@ -7,10 +7,12 @@ import (
type Addition struct {
driver.RootID
RepoStructure string `json:"repo_structure" type:"text" required:"true" default:"/path/to/alist-gh:alistGo/alist\n/path/to2/alist-web-gh:AlistGo/alist-web" help:"structure:[path:]org/repo"`
RepoStructure string `json:"repo_structure" type:"text" required:"true" default:"alistGo/alist" help:"structure:[path:]org/repo"`
ShowReadme bool `json:"show_readme" type:"bool" default:"true" help:"show README, LICENSE files"`
Token string `json:"token" type:"string" required:"false" help:"GitHub token, if you want to access private repositories or increase the rate limit"`
ShowAllVersion bool `json:"show_all_version" type:"bool" default:"false" help:"show all versions"`
ConcurrentRequests bool `json:"concurrent_requests" type:"bool" default:"false" help:"To concurrently request the GitHub API, you must enter a GitHub token"`
GitHubProxy string `json:"gh_proxy" type:"string" default:"" help:"GitHub proxy, e.g. https://ghproxy.net/github.com or https://gh-proxy.com/github.com "`
}
var config = driver.Config{

View File

@@ -0,0 +1,86 @@
package github_releases
type Release struct {
Url string `json:"url"`
AssetsUrl string `json:"assets_url"`
UploadUrl string `json:"upload_url"`
HtmlUrl string `json:"html_url"`
Id int `json:"id"`
Author User `json:"author"`
NodeId string `json:"node_id"`
TagName string `json:"tag_name"`
TargetCommitish string `json:"target_commitish"`
Name string `json:"name"`
Draft bool `json:"draft"`
Prerelease bool `json:"prerelease"`
CreatedAt string `json:"created_at"`
PublishedAt string `json:"published_at"`
Assets []Asset `json:"assets"`
TarballUrl string `json:"tarball_url"`
ZipballUrl string `json:"zipball_url"`
Body string `json:"body"`
Reactions Reactions `json:"reactions"`
}
type User struct {
Login string `json:"login"`
Id int `json:"id"`
NodeId string `json:"node_id"`
AvatarUrl string `json:"avatar_url"`
GravatarId string `json:"gravatar_id"`
Url string `json:"url"`
HtmlUrl string `json:"html_url"`
FollowersUrl string `json:"followers_url"`
FollowingUrl string `json:"following_url"`
GistsUrl string `json:"gists_url"`
StarredUrl string `json:"starred_url"`
SubscriptionsUrl string `json:"subscriptions_url"`
OrganizationsUrl string `json:"organizations_url"`
ReposUrl string `json:"repos_url"`
EventsUrl string `json:"events_url"`
ReceivedEventsUrl string `json:"received_events_url"`
Type string `json:"type"`
UserViewType string `json:"user_view_type"`
SiteAdmin bool `json:"site_admin"`
}
type Asset struct {
Url string `json:"url"`
Id int `json:"id"`
NodeId string `json:"node_id"`
Name string `json:"name"`
Label string `json:"label"`
Uploader User `json:"uploader"`
ContentType string `json:"content_type"`
State string `json:"state"`
Size int64 `json:"size"`
DownloadCount int `json:"download_count"`
CreatedAt string `json:"created_at"`
UpdatedAt string `json:"updated_at"`
BrowserDownloadUrl string `json:"browser_download_url"`
}
type Reactions struct {
Url string `json:"url"`
TotalCount int `json:"total_count"`
PlusOne int `json:"+1"`
MinusOne int `json:"-1"`
Laugh int `json:"laugh"`
Hooray int `json:"hooray"`
Confused int `json:"confused"`
Heart int `json:"heart"`
Rocket int `json:"rocket"`
Eyes int `json:"eyes"`
}
type FileInfo struct {
Name string `json:"name"`
Path string `json:"path"`
Sha string `json:"sha"`
Size int64 `json:"size"`
Url string `json:"url"`
HtmlUrl string `json:"html_url"`
GitUrl string `json:"git_url"`
DownloadUrl string `json:"download_url"`
Type string `json:"type"`
}
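
These structs mirror the GitHub REST payloads for releases and repository contents, so responses decode directly with encoding/json. Minimal decode sketch (sample JSON abridged; requires encoding/json and fmt):

var rel Release
raw := []byte(`{"tag_name":"v3.0.0","assets":[{"name":"alist-linux-amd64.tar.gz","size":12345}]}`)
if err := json.Unmarshal(raw, &rel); err != nil {
	panic(err)
}
fmt.Println(rel.TagName, len(rel.Assets)) // v3.0.0 1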

View File

@@ -1,19 +1,181 @@
package github_releases
import (
"encoding/json"
"strings"
"time"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type MountPoint struct {
Point string // mount point
Repo string // repository name, in owner/repo form
Release *Release // pointer to the latest Release
Releases *[]Release // pointer to the full Release list
OtherFile *[]FileInfo // other files in the repository root
}
// fetch the latest release
func (m *MountPoint) RequestLatestRelease(get func(url string) (*resty.Response, error), refresh bool) {
if m.Repo == "" {
return
}
if m.Release == nil || refresh {
resp, _ := get("https://api.github.com/repos/" + m.Repo + "/releases/latest")
m.Release = new(Release)
json.Unmarshal(resp.Body(), m.Release)
}
}
// fetch all releases
func (m *MountPoint) RequestReleases(get func(url string) (*resty.Response, error), refresh bool) {
if m.Repo == "" {
return
}
if m.Releases == nil || refresh {
resp, _ := get("https://api.github.com/repos/" + m.Repo + "/releases")
m.Releases = new([]Release)
json.Unmarshal(resp.Body(), m.Releases)
}
}
// list the files of the latest release
func (m *MountPoint) GetLatestRelease() []File {
files := make([]File, 0)
for _, asset := range m.Release.Assets {
files = append(files, File{
Path: m.Point + "/" + asset.Name,
FileName: asset.Name,
Size: asset.Size,
Type: "file",
UpdateAt: asset.UpdatedAt,
CreateAt: asset.CreatedAt,
Url: asset.BrowserDownloadUrl,
})
}
return files
}
// total size of the latest release
func (m *MountPoint) GetLatestSize() int64 {
size := int64(0)
for _, asset := range m.Release.Assets {
size += asset.Size
}
return size
}
// list all versions as directories
func (m *MountPoint) GetAllVersion() []File {
files := make([]File, 0)
for _, release := range *m.Releases {
file := File{
Path: m.Point + "/" + release.TagName,
FileName: release.TagName,
Size: m.GetSizeByTagName(release.TagName),
Type: "dir",
UpdateAt: release.PublishedAt,
CreateAt: release.CreatedAt,
Url: release.HtmlUrl,
}
for _, asset := range release.Assets {
file.Size += asset.Size
}
files = append(files, file)
}
return files
}
// get a release's files by tag name
func (m *MountPoint) GetReleaseByTagName(tagName string) []File {
for _, item := range *m.Releases {
if item.TagName == tagName {
files := make([]File, 0)
for _, asset := range item.Assets {
files = append(files, File{
Path: m.Point + "/" + tagName + "/" + asset.Name,
FileName: asset.Name,
Size: asset.Size,
Type: "file",
UpdateAt: asset.UpdatedAt,
CreateAt: asset.CreatedAt,
Url: asset.BrowserDownloadUrl,
})
}
return files
}
}
return nil
}
// get a release's size by tag name
func (m *MountPoint) GetSizeByTagName(tagName string) int64 {
if m.Releases == nil {
return 0
}
for _, item := range *m.Releases {
if item.TagName == tagName {
size := int64(0)
for _, asset := range item.Assets {
size += asset.Size
}
return size
}
}
return 0
}
// total size across all versions
func (m *MountPoint) GetAllVersionSize() int64 {
if m.Releases == nil {
return 0
}
size := int64(0)
for _, release := range *m.Releases {
for _, asset := range release.Assets {
size += asset.Size
}
}
return size
}
func (m *MountPoint) GetOtherFile(get func(url string) (*resty.Response, error), refresh bool) []File {
if m.OtherFile == nil || refresh {
resp, _ := get("https://api.github.com/repos/" + m.Repo + "/contents")
m.OtherFile = new([]FileInfo)
json.Unmarshal(resp.Body(), m.OtherFile)
}
files := make([]File, 0)
defaultTime := "1970-01-01T00:00:00Z"
for _, file := range *m.OtherFile {
if strings.HasSuffix(file.Name, ".md") || strings.HasPrefix(file.Name, "LICENSE") {
files = append(files, File{
Path: m.Point + "/" + file.Name,
FileName: file.Name,
Size: file.Size,
Type: "file",
UpdateAt: defaultTime,
CreateAt: defaultTime,
Url: file.DownloadUrl,
})
}
}
return files
}
type File struct {
FileName string `json:"name"`
Size int64 `json:"size"`
CreateAt time.Time `json:"time"`
UpdateAt time.Time `json:"chtime"`
Url string `json:"url"`
Type string `json:"type"`
Path string `json:"path"`
Path string // file path
FileName string // file name
Size int64 // file size
Type string // file type
UpdateAt string // update time, e.g. "2025-01-27T16:10:16Z"
CreateAt string // creation time
Url string // download URL
}
func (f File) GetHash() utils.HashInfo {
@@ -33,11 +195,13 @@ func (f File) GetName() string {
}
func (f File) ModTime() time.Time {
return f.UpdateAt
t, _ := time.Parse(time.RFC3339, f.UpdateAt)
return t
}
func (f File) CreateTime() time.Time {
return f.CreateAt
t, _ := time.Parse(time.RFC3339, f.CreateAt)
return t
}
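
Timestamps are now carried as the raw RFC 3339 strings GitHub returns and parsed only when a time.Time is needed; a malformed string silently yields the zero time, since the parse error is discarded. For example:

t, _ := time.Parse(time.RFC3339, "2025-01-27T16:10:16Z")
fmt.Println(t.UTC()) // 2025-01-27 16:10:16 +0000 UTC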
func (f File) IsDir() bool {
@@ -47,22 +211,3 @@ func (f File) IsDir() bool {
func (f File) GetID() string {
return f.Url
}
func (f File) Thumb() string {
return ""
}
type ReleasesData struct {
Files []File `json:"files"`
Size int64 `json:"size"`
UpdateAt time.Time `json:"chtime"`
CreateAt time.Time `json:"time"`
Url string `json:"url"`
}
type Release struct {
Path string // mount path
RepoName string // repository name
Version string // version number (tag)
ID string // release ID
}

View File

@@ -2,28 +2,36 @@ package github_releases
import (
"fmt"
"regexp"
"path/filepath"
"strings"
"sync"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
log "github.com/sirupsen/logrus"
)
var (
cache = make(map[string]*resty.Response)
created = make(map[string]time.Time)
mu sync.Mutex
req *resty.Request
)
// send a GET request
func (d *GithubReleases) GetRequest(url string) (*resty.Response, error) {
req := base.RestyClient.R()
req.SetHeader("Accept", "application/vnd.github+json")
req.SetHeader("X-GitHub-Api-Version", "2022-11-28")
if d.Addition.Token != "" {
req.SetHeader("Authorization", fmt.Sprintf("Bearer %s", d.Addition.Token))
}
res, err := req.Get(url)
if err != nil {
return nil, err
}
if res.StatusCode() != 200 {
utils.Log.Warnf("failed to get request: %s %d %s", url, res.StatusCode(), res.String())
}
return res, nil
}
// parse the repository list
func ParseRepos(text string, allVersion bool) ([]Release, error) {
// parse the mount structure
func (d *GithubReleases) ParseRepos(text string) ([]MountPoint, error) {
lines := strings.Split(text, "\n")
var repos []Release
points := make([]MountPoint, 0)
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
@@ -41,177 +49,37 @@ func ParseRepos(text string, allVersion bool) ([]Release, error) {
return nil, fmt.Errorf("invalid format: %s", line)
}
if allVersion {
releases, _ := GetAllVersion(repo, path)
repos = append(repos, *releases...)
} else {
repos = append(repos, Release{
Path: path,
RepoName: repo,
Version: "latest",
ID: "latest",
points = append(points, MountPoint{
Point: path,
Repo: repo,
Release: nil,
Releases: nil,
})
}
}
return repos, nil
d.points = points
return points, nil
}
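
RepoStructure is parsed one mount per line in [path:]org/repo form, matching the help text in meta.go. Usage sketch (the default Point for a bare org/repo line is decided in code elided from this hunk):

points, _ := d.ParseRepos("/web/alist-web-gh:AlistGo/alist-web")
// points[0] == MountPoint{Point: "/web/alist-web-gh", Repo: "AlistGo/alist-web"}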
// get the next directory level below basePath
func GetNextDir(wholePath string, basePath string) string {
if !strings.HasSuffix(basePath, "/") {
basePath += "/"
}
basePath = fmt.Sprintf("%s/", strings.TrimRight(basePath, "/"))
if !strings.HasPrefix(wholePath, basePath) {
return ""
}
remainingPath := strings.TrimLeft(strings.TrimPrefix(wholePath, basePath), "/")
if remainingPath != "" {
parts := strings.Split(remainingPath, "/")
return parts[0]
nextDir := parts[0]
if strings.HasPrefix(wholePath, strings.TrimRight(basePath, "/")+"/"+nextDir) {
return nextDir
}
}
return ""
}
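
GetNextDir answers the question "which immediate child of basePath lies on the way to wholePath", which is how the listing code synthesizes the parent directories of a mount point. For example:

GetNextDir("/path/to/alist-gh", "/path")    // "to"
GetNextDir("/path/to/alist-gh", "/path/to") // "alist-gh"
GetNextDir("/other/repo", "/path")          // "" (wholePath is not under basePath)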
// send a GET request
func GetRequest(url string, cacheExpiration int) (*resty.Response, error) {
mu.Lock()
if res, ok := cache[url]; ok && time.Now().Before(created[url].Add(time.Duration(cacheExpiration)*time.Minute)) {
mu.Unlock()
return res, nil
}
mu.Unlock()
res, err := req.Get(url)
if err != nil {
return nil, err
}
if res.StatusCode() != 200 {
log.Warn("failed to get request: ", res.StatusCode(), res.String())
}
mu.Lock()
cache[url] = res
created[url] = time.Now()
mu.Unlock()
return res, nil
}
// fetch README, LICENSE and similar files
func GetGithubOtherFile(repo string, basePath string, cacheExpiration int) (*[]File, error) {
url := fmt.Sprintf("https://api.github.com/repos/%s/contents/", strings.Trim(repo, "/"))
res, _ := GetRequest(url, cacheExpiration)
body := jsoniter.Get(res.Body())
var files []File
for i := 0; i < body.Size(); i++ {
filename := body.Get(i, "name").ToString()
re := regexp.MustCompile(`(?i)^(.*\.md|LICENSE)$`)
if !re.MatchString(filename) {
continue
}
files = append(files, File{
FileName: filename,
Size: body.Get(i, "size").ToInt64(),
CreateAt: time.Time{},
UpdateAt: time.Now(),
Url: body.Get(i, "download_url").ToString(),
Type: body.Get(i, "type").ToString(),
Path: fmt.Sprintf("%s/%s", basePath, filename),
})
}
return &files, nil
}
// fetch GitHub release details
func GetRepoReleaseInfo(repo string, version string, basePath string, cacheExpiration int) (*ReleasesData, error) {
url := fmt.Sprintf("https://api.github.com/repos/%s/releases/%s", strings.Trim(repo, "/"), version)
res, _ := GetRequest(url, cacheExpiration)
body := res.Body()
if jsoniter.Get(res.Body(), "status").ToInt64() != 0 {
return &ReleasesData{}, fmt.Errorf("%s", res.String())
}
assets := jsoniter.Get(res.Body(), "assets")
var files []File
for i := 0; i < assets.Size(); i++ {
filename := assets.Get(i, "name").ToString()
files = append(files, File{
FileName: filename,
Size: assets.Get(i, "size").ToInt64(),
Url: assets.Get(i, "browser_download_url").ToString(),
Type: assets.Get(i, "content_type").ToString(),
Path: fmt.Sprintf("%s/%s", basePath, filename),
CreateAt: func() time.Time {
t, _ := time.Parse(time.RFC3339, assets.Get(i, "created_at").ToString())
return t
}(),
UpdateAt: func() time.Time {
t, _ := time.Parse(time.RFC3339, assets.Get(i, "updated_at").ToString())
return t
}(),
})
}
return &ReleasesData{
Files: files,
Url: jsoniter.Get(body, "html_url").ToString(),
Size: func() int64 {
size := int64(0)
for _, file := range files {
size += file.Size
}
return size
}(),
UpdateAt: func() time.Time {
t, _ := time.Parse(time.RFC3339, jsoniter.Get(body, "published_at").ToString())
return t
}(),
CreateAt: func() time.Time {
t, _ := time.Parse(time.RFC3339, jsoniter.Get(body, "created_at").ToString())
return t
}(),
}, nil
}
// fetch all version tags
func GetAllVersion(repo string, path string) (*[]Release, error) {
url := fmt.Sprintf("https://api.github.com/repos/%s/releases", strings.Trim(repo, "/"))
res, _ := GetRequest(url, 0)
body := jsoniter.Get(res.Body())
releases := make([]Release, 0)
for i := 0; i < body.Size(); i++ {
version := body.Get(i, "tag_name").ToString()
releases = append(releases, Release{
Path: fmt.Sprintf("%s/%s", path, version),
Version: version,
RepoName: repo,
ID: body.Get(i, "id").ToString(),
})
}
return &releases, nil
}
func ClearCache() {
mu.Lock()
cache = make(map[string]*resty.Response)
created = make(map[string]time.Time)
mu.Unlock()
}
func SetHeader(token string) {
req = base.RestyClient.R()
if token != "" {
req.SetHeader("Authorization", fmt.Sprintf("Bearer %s", token))
}
req.SetHeader("Accept", "application/vnd.github+json")
req.SetHeader("X-GitHub-Api-Version", "2022-11-28")
// report whether parentDir is an ancestor of targetDir
func IsAncestorDir(parentDir string, targetDir string) bool {
absTargetDir, _ := filepath.Abs(targetDir)
absParentDir, _ := filepath.Abs(parentDir)
return strings.HasPrefix(absTargetDir, absParentDir)
}

View File

@@ -158,7 +158,8 @@ func (d *GoogleDrive) Put(ctx context.Context, dstDir model.Obj, stream model.Fi
putUrl := res.Header().Get("location")
if stream.GetSize() < d.ChunkSize*1024*1024 {
_, err = d.request(putUrl, http.MethodPut, func(req *resty.Request) {
req.SetHeader("Content-Length", strconv.FormatInt(stream.GetSize(), 10)).SetBody(stream)
req.SetHeader("Content-Length", strconv.FormatInt(stream.GetSize(), 10)).
SetBody(driver.NewLimitedUploadStream(ctx, stream))
}, nil)
} else {
err = d.chunkUpload(ctx, stream, putUrl)

View File

@@ -11,10 +11,10 @@ import (
"strconv"
"time"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/golang-jwt/jwt/v4"
@@ -126,8 +126,7 @@ func (d *GoogleDrive) refreshToken() error {
}
d.AccessToken = resp.AccessToken
return nil
}
if gdsaFileErr != nil && os.IsExist(gdsaFileErr) {
} else if os.IsExist(gdsaFileErr) {
return gdsaFileErr
}
url := "https://www.googleapis.com/oauth2/v4/token"
@@ -229,6 +228,7 @@ func (d *GoogleDrive) chunkUpload(ctx context.Context, stream model.FileStreamer
if err != nil {
return err
}
reader = driver.NewLimitedUploadStream(ctx, reader)
_, err = d.request(url, http.MethodPut, func(req *resty.Request) {
req.SetHeaders(map[string]string{
"Content-Length": strconv.FormatInt(chunkSize, 10),

View File

@@ -124,7 +124,7 @@ func (d *GooglePhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fi
}
resp, err := d.request(postUrl, http.MethodPost, func(req *resty.Request) {
req.SetBody(stream).SetContext(ctx)
req.SetBody(driver.NewLimitedUploadStream(ctx, stream)).SetContext(ctx)
}, nil, postHeaders)
if err != nil {

View File

@@ -392,10 +392,11 @@ func (d *HalalCloud) put(ctx context.Context, dstDir model.Obj, fileStream model
if fileStream.GetSize() > s3manager.MaxUploadParts*s3manager.DefaultUploadPartSize {
uploader.PartSize = fileStream.GetSize() / (s3manager.MaxUploadParts - 1)
}
reader := driver.NewLimitedUploadStream(ctx, fileStream)
_, err = uploader.UploadWithContext(ctx, &s3manager.UploadInput{
Bucket: aws.String(result.Bucket),
Key: aws.String(result.Key),
Body: io.TeeReader(fileStream, driver.NewProgress(fileStream.GetSize(), up)),
Body: io.TeeReader(reader, driver.NewProgress(fileStream.GetSize(), up)),
})
return nil, err
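
Because the AWS uploader consumes the reader itself, progress cannot be reported from the writing side; io.TeeReader mirrors every byte the uploader reads into a second writer, and driver.NewProgress appears to adapt the UpdateProgress callback into exactly such an io.Writer. A stdlib-only sketch of the counting side (names hypothetical):

package main

import (
	"bytes"
	"fmt"
	"io"
)

// progressWriter counts bytes "written" to it, standing in for the
// io.Writer role that driver.NewProgress fills above. (Hypothetical.)
type progressWriter struct {
	done, total int64
}

func (p *progressWriter) Write(b []byte) (int, error) {
	p.done += int64(len(b))
	fmt.Printf("\r%.0f%%", float64(p.done)/float64(p.total)*100)
	return len(b), nil
}

func main() {
	payload := bytes.NewReader(make([]byte, 256<<10))
	pw := &progressWriter{total: 256 << 10}
	// Every Read from body also writes the same bytes to pw.
	body := io.TeeReader(payload, pw)
	io.Copy(io.Discard, body) // stand-in for uploader.UploadWithContext
	fmt.Println()
}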

Some files were not shown because too many files have changed in this diff.