# Delete backup files
[root@clickhouse-01 backup]# clickhouse-backup delete local test20201019
[root@clickhouse-01 backup]#
[root@clickhouse-01 backup]# clickhouse-backup list
Local backups:
- 'ch_bk_20201020' (created at 20-10-2020 14:20:35)
- '2020-10-20T06-27-08' (created at 20-10-2020 14:27:08)
- 'ch_test_customer' (created at 20-10-2020 15:17:13)
- 'ch_bak_2tab' (created at 20-10-2020 15:33:41)
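Remote backups (see the remote target section below) can be removed the same way; a minimal sketch, assuming a backup with this name has already been uploaded:
clickhouse-backup delete remote test20201019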
# Clean temporary backup files under shadow
[root@clickhouse-01 shadow]# clickhouse-backup clean
2020/10/20 14:19:13 Clean /data/clickhouse/data/shadow
# Data restore
Syntax:
clickhouse-backup restore <backup_name>
[root@clickhouse-01 ~]# clickhouse-backup restore -help
NAME:
clickhouse-backup restore - Create schema and restore data from backup
USAGE:
clickhouse-backup restore [--schema] [--data] [-t, --tables=<db>.<table>] <backup_name>
OPTIONS:
--config FILE, -c FILE Config FILE name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
--table value, --tables value, -t value
--schema, -s Restore schema only
--data, -d Restore data only
Key options:
--table   Restore only specific tables; regular expressions are supported.
          For example, to target a specific database: --table=dbname.*
--schema  Restore the table schema only
--data    Restore the data only
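For example, combining these options with the backups listed above (the database name dbname is just a placeholder):
clickhouse-backup restore --schema ch_bak_2tab
clickhouse-backup restore --data --table=dbname.* ch_bak_2tab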
# Backup to a remote target
clickhouse-backup supports uploading and downloading backups to remote object storage, for example S3, GCS, or IBM COS.
For AWS S3, edit the configuration file /etc/clickhouse-backup/config.yml:
s3:
  access_key: <AWS ACCESS KEY>
  secret_key: <AWS SECRET KEY>
  bucket: <BUCKET NAME>
  region: us-east-1
  path: "/some/path/in/bucket"   # backup path inside the bucket
Then you can upload a backup:
$ clickhouse-backup upload 2020-07-06T20-13-02
2020/07/07 15:22:32 Upload backup '2020-07-06T20-13-02'
2020/07/07 15:22:49 Done.
Or download a backup:
$ sudo clickhouse-backup download 2020-07-06T20-13-02
2020/07/07 15:27:16 Done.
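To check what is already stored on the remote target, you can list the remote backups (the output depends on what has been uploaded to your bucket):
$ clickhouse-backup list remote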
# Backup retention policy
Two parameters under the general: section of the configuration control the backup retention policy:
backups_to_keep_local: 0    # number of local backups to keep
backups_to_keep_remote: 0   # number of remote backups to keep
The default is 0, which means no automatic backup cleanup.
For example, you could set:
backups_to_keep_local: 7
backups_to_keep_remote: 31
When uploading with clickhouse-backup upload, you can pass the --diff-from option
to compare files against a previous local backup and upload only new/changed files.
The previous backup must be kept so that the new backup can still be restored from.
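A minimal sketch, assuming a previous local backup named bk_monday already exists (both backup names here are illustrative):
clickhouse-backup create bk_tuesday
clickhouse-backup upload --diff-from=bk_monday bk_tuesday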
# Backup and restore test
The test database has 3 tables with identical row counts.
dba-docker :) show tables;
SHOW TABLES
┌─name─┐
│ ch1  │ # row count 8990020
│ ch2  │ # row count 8990020
│ ch3  │ # row count 8990020
└──────┘
Create a backup named bk_3_tab:
clickhouse-backup create bk_3_tab
Now corrupt the data:
truncate table ch1;
insert into ch2 select * from ch3;
drop table ch3;
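To verify the backup, one possible recovery sketch (it assumes the damaged tables are dropped first so that restore can recreate them from bk_3_tab):
drop table ch1;
drop table ch2;
clickhouse-backup restore bk_3_tab
After the restore, ch1, ch2 and ch3 should each contain 8990020 rows again.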