Tool for easy ClickHouse backup and restore using object storage for backup files.
License: Other
Published by github-actions[bot] about 2 years ago
IMPROVEMENTS
- `restore` command on multi-shard cluster, fix #474
- add `use_custom_storage_class` (`S3_USE_CUSTOM_STORAGE_CLASS`) option to the `s3` section, thanks @realwhite

BUG FIXES
- `{uuid}` macros during restore for `ReplicatedMergeTree` tables and ClickHouse server 22.5+, fix #466
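
A minimal sketch of how the new `s3` option above might appear in `config.yml`; the bucket name, `storage_class` key, and surrounding layout are assumptions for illustration, not taken from these notes:

```yaml
s3:
  bucket: my-backup-bucket        # hypothetical bucket name
  use_custom_storage_class: true  # S3_USE_CUSTOM_STORAGE_CLASS
  storage_class: STANDARD_IA      # assumed companion setting; check the shipped config reference
```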
Published by github-actions[bot] over 2 years ago
BUG FIXES
- fix error `can't acquire semaphore during Download: context canceled` and error `can't acquire semaphore during Upload: context canceled`; all 1.4.x users are recommended to upgrade to 1.4.6

Published by github-actions[bot] over 2 years ago
IMPROVEMENTS
- add `CLICKHOUSE_FREEZE_BY_PART_WHERE` option, which allows freezing by part with a WHERE condition, thanks @vahid-sohrabloo

Published by github-actions[bot] over 2 years ago
BUG FIXES
Published by github-actions[bot] over 2 years ago
IMPROVEMENTS
- add `S3_ALLOW_MULTIPART_DOWNLOAD` to config, to improve download speed, fix #431
- add `clickhouse_backup_number_backups_remote`, `clickhouse_backup_number_backups_local`, `clickhouse_backup_number_backups_remote_expected`, `clickhouse_backup_number_backups_local_expected` Prometheus metrics, fix #437
- add `system.macros` values to the `path` field in all types of `remote_storage`, fix #438
- `upload_by_part: true`, fix #400
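
Assuming these options follow the tool's usual environment-variable-to-YAML convention, the transfer tuning from this release might be sketched as follows; section and key placement are best guesses, not verified against the shipped config schema:

```yaml
s3:
  allow_multipart_download: true  # S3_ALLOW_MULTIPART_DOWNLOAD, fix #431
general:
  upload_by_part: true            # upload_by_part: true, fix #400
```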
BUG FIXES
Published by github-actions[bot] over 2 years ago
IMPROVEMENTS
- add `S3_MAX_PARTS_COUNT` and `AZBLOB_MAX_PARTS_COUNT` to properly calculate buffer sizes during upload and download

BUG FIXES
- fix `path` for S3 and GCS for the case when it begins with "/"
- fix `backups_keep_remote` option

Published by github-actions[bot] over 2 years ago
IMPROVEMENTS
- add `POST /backup/tables/all`, fix `POST /backup/tables` to respect `CLICKHOUSE_SKIP_TABLES`
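
For context, `CLICKHOUSE_SKIP_TABLES` is typically expressed in YAML roughly like this; the patterns shown are illustrative, not taken from these notes:

```yaml
clickhouse:
  skip_tables:               # CLICKHOUSE_SKIP_TABLES
    - system.*               # illustrative pattern
    - INFORMATION_SCHEMA.*   # illustrative pattern
```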
BUG FIXES
Published by github-actions[bot] over 2 years ago
IMPROVEMENTS
BUG FIXES
- `DOWNLOAD_BY_PART: true`
- `BACKUPS_TO_KEEP_REMOTE`
Published by github-actions[bot] over 2 years ago
IMPROVEMENTS
- add `API_ALLOW_PARALLEL` to support multiple parallel execution calls. WARNING: control command names, don't try to execute multiple identical commands, and be careful: it could allocate much memory
- add `--partitions` to the `create`, `upload`, `download`, and `restore` CLI commands and API endpoints, fix #378
- properly implement `--diff-from-remote` for the `upload` command and properly handle `required` on the `download` command, fix #289
- add `print-config` CLI command, fix #366
- add `UPLOAD_BY_PART` (default: `true`) option to improve upload/download concurrency, fix #324
- add `SFTP_DEBUG` option, fix #335
- `ON CLUSTER`, fix #145
- improve `last_backup_size_remote` metric calculation to make it async during REST API startup and after download/upload
- improve `list remote` speed via a local metadata cache in `$TEMP/.clickhouse-backup.$REMOTE_STORAGE`, fix #318
- add `CLICKHOUSE_IGNORE_NOT_EXISTS_ERROR_DURING_FREEZE` option, fix #319
- add `clean` CLI command and `POST /backup/clean` API endpoint, fix #379
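
Several of the options above are config-file settings as well as environment variables; a hedged sketch, with key placement assumed from the usual env-to-YAML mapping rather than taken from these notes:

```yaml
api:
  allow_parallel: false  # API_ALLOW_PARALLEL; leave off unless you accept the memory risk noted above
sftp:
  debug: true            # SFTP_DEBUG, fix #335
clickhouse:
  ignore_not_exists_error_during_freeze: true  # fix #319
```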
BUG FIXES
- `GCP_PATH`
- `REMOTE_STORAGE=none` error handling
- `shadow` if `create` fails during `moveShadow`
- `upload` for backups created with `--partitions`, fix bug after #356
- `restore --rm` behavior for 20.12+ for tables which have dependent objects (like dictionaries)
- `FTP` directory creation during upload, reduce connection pool usage
- `--schema` parameter, to show local backup size after download
EXPERIMENTAL
- try to improve support for `MaterializedMySQL` and `MaterializedPostgreSQL` tables; restoring MySQL tables is now possible without replacing `table_name.json` with `Engine=MergeTree`

Published by github-actions[bot] almost 3 years ago
IMPROVEMENTS
- add `POST /backup/tables/all`, fix `POST /backup/tables` to respect `CLICKHOUSE_SKIP_TABLES`
BUG FIXES
Published by github-actions[bot] almost 3 years ago
INCOMPATIBLE CHANGES
- `/backup/status` now returns only the latest executed command with its status and error message

IMPROVEMENTS
- add `/backup/list/local` and `/backup/list/remote` to allow listing backup types separately
- `/backup/create`: avoid listing remote backups for updating metric values
- `system.tables` when the `table` query string parameter or `--tables` CLI parameter is set
- add `last` and `filter` query string parameters to the REST API `/backup/actions`, to avoid passing long JSON documents to the client
- `FTP` remote storage parallel upload / download
- add `FTP_CONCURRENCY` setting, by default `MAX_CPU / 2`
- add `FTP_DEBUG` setting, to allow debugging FTP commands
- add `FTP` to CI/CD on any commit

BUG FIXES
- `LOG_LEVEL` now applies to `clickhouse-backup server` properly
- `/backup/create`, `/backup/upload`, `/backup/download`
- bring `S3_PART_SIZE` back, but calculate it smartly
- `last` and `filter` query string parameters
- `REMOTE_STORAGE=none`
- `FTP` server

Published by github-actions[bot] almost 3 years ago
BUG FIXES
- fix `system.backup_list` integration table after adding the `required` field in https://github.com/AlexAkulov/clickhouse-backup/pull/263
- fix `SFTP_PASSWORD` environment variable usage
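
As a sketch, the `SFTP_PASSWORD` variable fixed above corresponds to an `sftp` config section along these lines; the host and user shown are placeholders, not values from these notes:

```yaml
sftp:
  address: sftp.example.com  # placeholder host
  username: backup           # placeholder user
  password: "secret"         # normally supplied via the SFTP_PASSWORD environment variable
```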