Scheduled backups to Backblaze B2 of a Ghost blog running in a Docker container

This blog is powered by Ghost, installed on a Debian server through Coolify as two Docker containers: one runs the Ghost application itself, the other runs MySQL. The Ghost application also uses a Docker volume for its content (images, themes and other files). I wanted daily backups to Backblaze B2 Cloud Storage, since it's a cheap S3-compatible object store. In this blog post I will show how I configured this on my Debian server.

New user in MySQL for doing the backups

I decided to create a dedicated user for performing the MySQL database backups. To create a user in MySQL, first connect as root to the MySQL container:

docker exec -it <mysql-container-name> mysql -uroot -p

Create a user with the name backup:

CREATE USER 'backup'@'%' IDENTIFIED BY 'STRONG_RANDOM_PASSWORD';

GRANT SELECT, LOCK TABLES, EVENT, TRIGGER, PROCESS ON *.* TO 'backup'@'%';

FLUSH PRIVILEGES;

Rclone for uploading backups to Backblaze B2

First install Rclone, then configure it with the details for your Backblaze B2 bucket. Rclone supports a wide array of providers, such as Amazon S3, Dropbox and Google Drive, so you are by no means limited to Backblaze B2 if you want to store your backups somewhere else.

apt install rclone
rclone config
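For reference, after walking through rclone config with the B2 backend, the resulting ~/.config/rclone/rclone.conf looks roughly like this (the remote name b2-backup and the credential placeholders below are examples, not values from this setup):

```ini
[b2-backup]
# Native Backblaze B2 backend; rclone's "s3" backend pointed at B2's
# S3-compatible endpoint works as well.
type = b2
account = <application-key-id>
key = <application-key>
```

You can sanity-check the remote with rclone lsd <remote-name>: which should list the buckets your application key can see.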

Bash script that does the backup

Then create the script that performs the backup:

/usr/local/bin/ghost-backup.sh:

#!/bin/bash
set -euo pipefail
BACKUP_DIR=/var/backups/ghost
mkdir -p "$BACKUP_DIR"

# MySQL dump
docker exec -i <mysql-container-name> \
  mysqldump -ubackup -pSTRONG_RANDOM_PASSWORD \
    --single-transaction --routines --triggers --events ghost \
  | gzip > "$BACKUP_DIR/$(date +%F)-ghost.sql.gz"

# Static volume
tar -C /var/lib/docker/volumes/<ghost-content-data-volume>/_data -czf \
  "$BACKUP_DIR/$(date +\%F)-static.tar.gz" .

# Upload to Backblaze B2
rclone copy "$BACKUP_DIR" <remote-name>:<bucket-name>/

# Keep the latest 60 files (regardless of age)
find "$BACKUP_DIR" -maxdepth 1 -type f -printf '%T@ %p\0' | \
  sort -z -nr | \
  tail -z -n +61 | \
  cut -z -d' ' -f2- | \
  xargs -0 rm -f

Make the script executable:

chmod +x /usr/local/bin/ghost-backup.sh

This script connects to the MySQL Docker container, exports the ghost database with mysqldump and compresses it with gzip. Next, it archives the Docker volume's folder with static files into a .tar.gz archive. Then it uploads the files to Backblaze B2 using Rclone. Finally, it deletes old backup files locally; note that rclone copy never deletes anything on the remote, so if you also want to prune old files in the bucket, set up a Backblaze B2 lifecycle rule.
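The retention pipeline at the end of the script is easiest to understand on a small example. Here is a self-contained sketch in a temp directory: five dummy files and a keep-latest-3 cutoff instead of 60, so tail -z -n +4 selects records 4 onward (the oldest ones), which are then deleted:

```shell
# Same pipeline as in the backup script, scaled down: keep the 3 newest
# of 5 dummy files (the real script keeps 60).
tmp=$(mktemp -d)
for i in 1 2 3 4 5; do
  # Give each file a distinct, increasing modification time.
  touch -d "2024-01-0$i 12:00" "$tmp/backup-$i.txt"
done
# Emit "mtime path" records NUL-terminated, sort newest first,
# drop the first 3 records (the keepers), strip the timestamps,
# and delete what remains.
find "$tmp" -maxdepth 1 -type f -printf '%T@ %p\0' \
  | sort -z -nr \
  | tail -z -n +4 \
  | cut -z -d' ' -f2- \
  | xargs -0 rm -f
ls "$tmp"   # backup-3.txt backup-4.txt backup-5.txt remain
```

The NUL-terminated records (-printf '%T@ %p\0' together with the -z and -0 flags) keep the pipeline safe even if a filename contains spaces or newlines.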

Systemd file for scheduling the backup

Then, to run the backup on a schedule, I used Systemd. An alternative would have been Cron, but I went with Systemd timers. Here are the two Systemd unit files:

/etc/systemd/system/ghost-backup.service:

[Unit]
Description=Nightly backup of Ghost (DB + static files)

[Service]
Type=oneshot
User=root
ExecStart=/usr/local/bin/ghost-backup.sh

/etc/systemd/system/ghost-backup.timer:

[Unit]
Description=Daily timer for Ghost backup service

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
RandomizedDelaySec=5m

[Install]
WantedBy=timers.target

Next, enable and start the timer:

systemctl daemon-reload
systemctl enable --now ghost-backup.timer

Verify that the timer is working:

# Check timer schedule
systemctl list-timers ghost-backup.timer

# Check service logs after the next run
journalctl -u ghost-backup.service

To run the Systemd service now and check log output:

systemctl start ghost-backup.service

journalctl -u ghost-backup.service

If it works, it will print something like the following:

Aug 19 14:47:49 server systemd[1]: Starting ghost-backup.service - Nightly backup of Ghost (DB + static files)...
Aug 19 14:47:49 server ghost-backup.sh[3880342]: mysqldump: [Warning] Using a password on the command line interface can be insecure.
Aug 19 14:47:56 server systemd[1]: ghost-backup.service: Deactivated successfully.
Aug 19 14:47:56 server systemd[1]: Finished ghost-backup.service - Nightly backup of Ghost (DB + static files).
Aug 19 14:47:56 server systemd[1]: ghost-backup.service: Consumed 2.101s CPU time.
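One last tip: besides checking that the service exited cleanly, it's worth verifying that the archives themselves are readable. Both gzip and tar can test an archive without extracting it. A self-contained sketch, where dummy files in a temp directory stand in for the real ones under /var/backups/ghost:

```shell
# Create throwaway stand-ins for one day's backup pair.
tmp=$(mktemp -d)
printf 'CREATE TABLE demo (id INT);\n' | gzip > "$tmp/2024-01-01-ghost.sql.gz"
mkdir "$tmp/content"
echo "hello" > "$tmp/content/post.md"
tar -C "$tmp/content" -czf "$tmp/2024-01-01-static.tar.gz" .

# gzip -t checks the compressed stream; tar -tzf lists the archive,
# which fails if it is truncated or corrupt.
gzip -t "$tmp/2024-01-01-ghost.sql.gz" && echo "SQL dump OK"
tar -tzf "$tmp/2024-01-01-static.tar.gz" > /dev/null && echo "static archive OK"
```

Running the same two checks against the newest real files after a scheduled run gives some extra confidence that the backups are actually restorable.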