Preface
Most projects nowadays use some kind of storage service, such as OSS, COS, or MinIO. For various reasons you may need to migrate data between different storage services, so today I'd like to introduce a fairly general-purpose data migration tool: Rclone.
I. What is Rclone?
Rclone is a command-line program for managing files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone, including S3 object storage, business and consumer file storage services, and standard transfer protocols. For the full details I recommend the official documentation: the English site or the Chinese site.
II. What can Rclone do?
- Back up (and encrypt) files to cloud storage.
- Restore (and decrypt) files from cloud storage.
- Mirror cloud data to another cloud service or to local storage.
- Migrate data to the cloud, or between cloud storage providers.
- Mount multiple encrypted, cached, or diverse cloud storage systems as a disk.
III. Installation and usage
1. Install Rclone
curl https://rclone.org/install.sh | sudo bash
If the installation fails with the following error:
[root@ ~]# curl https://rclone.org/install.sh | sudo bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4734 100 4734 0 0 3152 0 0:00:01 0:00:01 --:--:-- 3151
None of the supported tools for extracting zip archives (unzip 7z busybox) were found. Please install one of them and try again.
it means no archive-extraction tool is installed. Run the command below, then re-run the installer:
yum install unzip zip -y
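The installer's extractor check can be reproduced as a small shell sketch (the tool names are the ones listed in the error message above; the helper function name is my own):

```shell
#!/bin/sh
# pick_extractor echoes the first available tool from its arguments,
# or echoes nothing and returns 1 if none is installed.
pick_extractor() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool"
      return 0
    fi
  done
  return 1
}

# The rclone install script needs one of these to unpack its zip download:
if extractor=$(pick_extractor unzip 7z busybox); then
  echo "extractor available: $extractor"
else
  echo "no extractor found - run: yum install unzip zip -y"
fi
```

Running this before the `curl | sudo bash` one-liner tells you up front whether the install will get stuck on the missing-unzip error.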
2. Generate the configuration file
The choices you make here don't have to be final; you can always edit the generated file afterwards (how to do so is covered later in this article).
[root@172-233-85-227 ~]# rclone config
2024/05/21 10:47:06 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n # choose n to create a new remote
Enter name for new remote.
name> xjp # pick any name you like
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
1 / 1Fichier
\ (fichier)
2 / Akamai NetStorage
\ (netstorage)
3 / Alias for an existing remote
\ (alias)
4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
\ (s3)
5 / Backblaze B2
\ (b2)
····
55 / seafile
\ (seafile)
Storage> 4 # choose the storage type; mine is Linode Object Storage, so I chose 4
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Amazon Web Services (AWS) S3
\ (AWS)
2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
\ (Alibaba)
···
15 / Leviia Object Storage
\ (Leviia)
16 / Liara Object Storage
\ (Liara)
17 / Linode Object Storage
\ (Linode)
18 / Minio Object Storage
\ (Minio)
···
31 / Any other S3 compatible provider
\ (Other)
provider> 17 # choose the provider; mine is Linode, so I chose 17
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1 # choose 1 to enter the keys in the next step
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> KIYDIS9QSTASIMP2QL6E # enter the Access Key issued by your provider
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> wejlKS7wsYkzQOfQUMKEOQESS8liYlmyi0RXnW # enter the Secret Key issued by your provider
Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
2 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
···
8 / Singapore ap-south-1
\ (ap-south-1.linodeobjects.com)
9 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
10 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
endpoint> 8 # choose the region your bucket is in
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
5 | Bucket owner gets READ access.
| If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-read)
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-full-control)
acl> # just press Enter to accept the default
Edit advanced config?
y) Yes
n) No (default)
y/n> # press Enter for the default
Configuration complete.
Options:
- type: s3
- provider: Linode
- access_key_id: KIY8NV9QSTTHQMP2QL6E
- secret_access_key: wejlKX2wCskEkzQOfQUMKEOQESS8liYlmyi0RXnW
- endpoint: ap-south-1.linodeobjects.com
Keep this "xjp" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y # press y to save
Current remotes:
Name Type
==== ====
xjp s3
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q # q to quit the config tool
We won't use the rclone config wizard to add the remaining remotes; it is tedious, and editing the configuration file directly is much faster.
3. Inspect the generated configuration file
It is located at /root/.config/rclone/rclone.conf (rclone.conf being the file name assigned when the configuration was generated).
cd /root/.config/rclone
4. Edit the configuration file
vim /root/.config/rclone/rclone.conf
Edit it as follows, adjusting the parameters to match your own services:
[xjp] # remote name
type = s3 # storage type
provider = Linode # provider
access_key_id = KIY8NV9QSTTHGTS2QL6E
secret_access_key = wejlKX2wCskEkzQOfQJSYALQESS8liYlmyi0RXnW
endpoint = ap-south-1.linodeobjects.com # endpoint
[jp]
type = s3
provider = Linode
access_key_id = KIY8NV9QSTTHGTS2QL6E
secret_access_key = wejlKX2wCskEkzQOfQJSYALQESS8liYlmyi0RXnW
endpoint = jp-osa-1.linodeobjects.com
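The "edit the file directly" approach can also be scripted. Here is a minimal sketch that appends a remote definition non-interactively and then lists the remote names; it writes to a temporary file so it can run anywhere, and the key values are placeholders (in real use you would target /root/.config/rclone/rclone.conf and your own credentials):

```shell
#!/bin/sh
# Append a new S3-compatible remote to an rclone config file without the
# interactive wizard. A temp file keeps the sketch self-contained.
CONF="$(mktemp)"

cat >> "$CONF" <<'EOF'
[jp]
type = s3
provider = Linode
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = jp-osa-1.linodeobjects.com
EOF

# List the remote names defined in the file (similar in spirit to
# what `rclone listremotes` reports):
grep -o '^\[[^]]*\]' "$CONF" | tr -d '[]'
rm -f "$CONF"
```

If you keep the config somewhere non-standard, rclone can be pointed at it via the `RCLONE_CONFIG` environment variable.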
5. Synchronize the data
Sync xjp to jp:
rclone sync xjp:test jp:test
or sync jp to xjp:
rclone sync jp:test xjp:test
The general form is:
rclone sync <source-remote>:<source-bucket> <destination-remote>:<destination-bucket>
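That general form can be wrapped in a small script, for example as the basis of a scheduled mirror job. The remote and bucket names below are the ones configured above; the script only builds and prints the command, since actually running it requires rclone and a valid rclone.conf:

```shell
#!/bin/sh
# Build an rclone sync command from variables; print it rather than run
# it so the sketch works without rclone installed.
SRC_REMOTE="xjp"; SRC_BUCKET="test"
DST_REMOTE="jp";  DST_BUCKET="test"

CMD="rclone sync ${SRC_REMOTE}:${SRC_BUCKET} ${DST_REMOTE}:${DST_BUCKET}"
echo "would run: $CMD"

# sync can delete files on the destination, so preview first:
echo "preview with: $CMD --dry-run"
```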
6. Common commands
rclone config - add rclone remotes in an interactive session; the configuration is saved in the rclone.conf file.
rclone copy - copy files from source to destination, skipping files that have already been copied.
rclone sync - make the destination match the source, modifying the destination only.
rclone move - move files from source to destination.
rclone delete - delete the files under the given path.
rclone purge - remove all files and data under the given path.
rclone mkdir - create a new directory.
rclone rmdir - remove an empty directory.
rclone check - check whether the data on source and destination match.
rclone ls - list all files under the path, with their sizes and paths.
rclone lsd - list all directories/containers/buckets under the path.
rclone lsl - list all files under the path, with modification time, size, and path.
rclone md5sum - produce an md5sum file for all files under the path.
rclone sha1sum - produce a sha1sum file for all files under the path.
rclone size - report the total size of the files under the path.
rclone version - show the current version.
rclone cleanup - clean up the remote if possible (empty the trash, delete old file versions).
rclone dedupe - interactively find duplicate files and delete/rename them.
7. Common operations
**rclone lsd:** list all directories/containers/buckets under the given path.
# remote is a [name] section from the config file
rclone lsd remote:path
For example:
rclone lsd xjp:/
**rclone copy:** copy files from source to destination, skipping files that have already been copied.
# `rclone copy` copies the files under the given path
rclone copy source:sourcepath dest:destpath
For example:
rclone copy xjp:/mp4 jp:/mp4
**rclone sync:** what is synchronized is always the data under the path (empty directories are not synced), never the path directory itself. Syncing may delete data on the destination, so it is advisable to run with the --dry-run flag first to see what would be copied and deleted. If an error occurs during the sync, no destination data is deleted.
rclone sync source:path dest:path
For example:
rclone sync xjp:/mp4 jp:/mp4
**rclone move:** moving may delete data on the destination; it is advisable to run with the --dry-run flag first to see what would be copied and deleted.
rclone move source:path dest:path
For example:
rclone move xjp:/mp4 jp:/mp4
**rclone purge:** remove the path directory and all of its data.
rclone purge remote:path
For example:
rclone purge xjp:/mp4
**rclone mkdir:** create the path directory.
rclone mkdir remote:path
For example:
rclone mkdir xjp:/mp4
Summary
That's all for today. Note that synchronizing with rclone sync as described here is not real-time: if a new file is uploaded into a part of the tree that has already been synchronized while the sync is still running, that file will be missed by the run. Rclone does provide other mechanisms to address this; interested readers can learn more on the official English site or the Chinese site.
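One simple mitigation (my own suggestion, not something the setup above requires) is to re-run the sync on a schedule, so files uploaded during one pass are picked up by the next. A hypothetical crontab entry:

```shell
# Re-run the sync every 10 minutes; the path to rclone, the remote/bucket
# names, and the log path are all illustrative.
*/10 * * * * /usr/bin/rclone sync xjp:test jp:test --log-file=/var/log/rclone-sync.log
```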
References:
https://p3terx.com/archives/rclone-advanced-user-manual-common-command-parameters.html
https://softlns.github.io/2016/11/28/rclone-guide/