
3.4.1. User-defined scripts that extend functionality

In this type of backup, the program executes an external script whose full command is passed via the `dump_cmd` option.

Upon completion of this command, it is expected that:

  • A complete backup file with the data has been created
  • The script has written JSON to stdout, for example:
{
  "full_path": "/abs/path/to/backup.file"
}

Requirements:

  • Make sure that all the utilities your script needs are installed on the system
  • Your script must write JSON to stdout containing only a single key, `full_path`, whose value is the path to the archive your script created (see the sketch after this list)
  • On success, the script must exit with code 0
  • There must be enough free space in the directory where the temporary backup is created
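
Below is a minimal sketch of such a script. The data directory, temporary directory, and archive naming are assumptions for illustration; replace the `tarfile` step with whatever actually produces your backup:

#!/usr/bin/env python3
# Minimal sketch of an external script for nxs-backup.
# DATA_DIR and TMP_DIR are hypothetical placeholders.
import json
import sys
import tarfile
import time

DATA_DIR = "/var/lib/myapp"   # hypothetical data to back up
TMP_DIR = "/var/backup/tmp"   # must have enough free space

def main() -> int:
    archive = f"{TMP_DIR}/myapp-{time.strftime('%Y%m%d_%H%M%S')}.tar.gz"
    try:
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(DATA_DIR)
    except (OSError, tarfile.TarError) as err:
        # Any non-zero exit code marks the job as failed.
        # Diagnostics go to stderr; stdout is reserved for the JSON.
        print(f"backup failed: {err}", file=sys.stderr)
        return 1
    # stdout must contain only the JSON with the single `full_path` key.
    print(json.dumps({"full_path": archive}))
    return 0

if __name__ == "__main__":
    sys.exit(main())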

Here is an example config for creating a ClickHouse backup using a custom Python script at /root/clickhouse-backup.py:

job_name: external
type: external
safety_backup: false
deferred_copying: false
dump_cmd: /root/clickhouse-backup.py
sources: []

storages_options:
  - storage_name: local
    backup_path: /var/backup/dump
    retention:
      days: 7
      weeks: 5
      months: 5

If you want to copy your backups to remote storage for long-term retention, use the corresponding storage name and its retention parameters:

SSH/SCP storage:

storages_options:
- storage_name: ssh
  backup_path: /databases/clickhouse
  retention:
    days: 30
    weeks: 0
    months: 12

S3 storage:

storages_options:
- storage_name: s3
  backup_path: /databases/clickhouse
  retention:
    days: 30
    weeks: 0
    months: 12
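
Since `storages_options` is a list, one job can deliver the backup to several storages at once. For example, a sketch combining the local copy with both remote storages above:

storages_options:
- storage_name: local
  backup_path: /var/backup/dump
  retention:
    days: 7
    weeks: 5
    months: 5
- storage_name: ssh
  backup_path: /databases/clickhouse
  retention:
    days: 30
    weeks: 0
    months: 12
- storage_name: s3
  backup_path: /databases/clickhouse
  retention:
    days: 30
    weeks: 0
    months: 12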

The storage name used must be declared and configured in the `storage_connects` block of the main config, nxs-backup.conf:

/etc/nxs-backup/nxs-backup.conf

storage_connects:
- name: s3
  s3_params:
    bucket_name: backups-bucket
    access_key_id: my_s3_ak_id
    secret_access_key: ENV:S3_SECRET_KEY
    endpoint: s3.amazonaws.com
    region: us-east-1

If you need a different storage type, see the Storage section.

Reference for external.conf:

# Job unique name
job_name: string # Required

# Type of the backup job. `external`
type: string # Required

# Path to the directory used to temporarily store the backup during the creation process
tmp_dir: string # Required

# Defines the rotation order: whether old backups are deleted before the new copy is created or after
safety_backup: bool # Default: false

# Allows you to postpone copying to storages until all backup parts of the job are complete
deferred_copying: bool # Default: false

# Skip backup rotation on storages 
skip_backup_rotate: bool # Default: false

# Full command to run an external script.
dump_cmd: string

# List of storages to which backups should be delivered and their rotation rules
storages_options: list 
    # The name of the storage added to the main config. The name `local` is used to store a copy on host.
  - storage_name: string # Required
    # Path to directory where backups will be stored
    backup_path: string # Required
    # Set of rules to store and rotate the backups.
    retention: map 
      # Use backup retention count instead of retention period.
      count_instead_of_period: bool # Default: false
      # Count of days/daily copies to store backups. Multiple copies can be created in one day.
      days: numeric # Default: 7
      # Count of weeks/weekly copies to store backups created on Sunday.
      weeks: numeric # Default: 5
      # Count of months/monthly copies to store backups created on the first day of the month.
      months: numeric # Default: 12
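
Putting the reference together, a complete external.conf might look like this. The paths are illustrative, and `count_instead_of_period: true` switches the retention numbers from periods to copy counts:

job_name: external
type: external
tmp_dir: /var/backup/tmp
safety_backup: false
deferred_copying: false
skip_backup_rotate: false
dump_cmd: /root/clickhouse-backup.py
storages_options:
- storage_name: local
  backup_path: /var/backup/dump
  retention:
    count_instead_of_period: true
    days: 7
    weeks: 5
    months: 12

Here `days: 7` keeps the last 7 daily copies rather than copies from the last 7 days, and likewise for `weeks` and `months`.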