A Simple and Reliable Backup Strategy for Your Linux Docker Home Server
📑 Table of Contents
- 📘 Introduction
- 🤔 What Should You Back Up?
- ⚙️ Setting Up Restic With Backblaze B2
- 🧰 The Backup Script
- 🛠️ Running the Backup Script (Manually or Automatically)
- ✅ Final Thoughts
📘 Introduction
Before setting up a proper backup strategy, I didn’t back up my home server at all. Like many people running a small home server, I kept putting it off and underestimated how fragile things can be, especially when it comes to databases.
My Ghost blog runs on this server, and the MySQL database behind it contains all my posts and content I’ve spent hours writing. Losing that would mean losing the entire blog. The same is true for many self-hosted services: the database is often the most valuable part.
One thing I learned quickly is that you can’t back up a MySQL database by copying the raw files from a Docker volume. Unless the database is shut down cleanly, you’ll almost certainly end up with corrupted or incomplete data.
To fix this, I created a simple automated backup script. It generates a proper SQL dump, collects my docker-compose.yml files, configuration folders, and .env secrets, and stores everything in a clean timestamped folder. On top of that, Restic uploads the backup to Backblaze B2, encrypted, deduplicated, and automatically rotated.
In this post, I’ll show what to back up, how the script works, and how to integrate it with Restic so your data stays safe even if your server fails.
🤔 What Should You Back Up?
Before we get into the script, it’s good to understand what actually matters in a home-server backup. Your Linux system and Docker itself are easy to reinstall; the real value is in the files that define and power your services. The list below contains common examples, but every setup is different. What matters most is that you know exactly which pieces you would need to fully restore your environment.
Docker Compose files
Your docker-compose.yml (and any extra .yml files) describe how each service runs. Without them, you’d be guessing ports, volumes, image versions, and container settings.
Configuration folders
Many apps store important data in directories like config, configs, conf, or content. These often include themes, metadata, uploads, reverse-proxy settings, and other files your services depend on.
.env files
These contain secrets: passwords, tokens, API keys, database credentials, etc. If you lose them, your containers may not even start again. They’re one of the most critical files to back up.
Database dumps
If your services rely on MySQL, MariaDB, or PostgreSQL, the database is probably the most valuable part of your entire setup. For me, this is where all my blog posts live. Using proper dumps (like mysqldump) ensures the data is safe and consistent, unlike copying raw DB files.
These are just examples, but the idea is simple:
Back up anything you would absolutely need in order to rebuild your services from scratch.
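As a rough illustration, a project folder that contains all of these pieces might look like the layout below (the ghost name and folder structure are just an example, not a requirement):
~/docker/ghost/
├── docker-compose.yml
├── .env
├── config/
└── content/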
⚙️ Setting Up Restic With Backblaze B2
Now that we know what needs to be backed up, it’s time to set up the offsite part of the solution. The local backup script creates a clean snapshot of your data, but if that snapshot only lives on the same server, it won’t help much in case of hardware failure, disk corruption, or accidental deletion. That’s where Restic comes in.
Restic is an encrypted, fast, deduplicated backup tool that works perfectly for home servers. You point it at a folder, and it takes care of uploading it securely to your storage provider. In my case, I chose Backblaze B2 because it’s cheap, simple to use, and works great with Restic.
Below are the steps I used to set everything up.
1. Install Restic
On most Linux distros, this is as simple as:
sudo apt install restic
You can also download the latest binary from the official website if you prefer.
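To confirm the installation worked, check the installed version:
restic version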
2. Create a Backblaze B2 bucket
Log into your Backblaze account and create a new bucket:
- Name it something like home-server-backups
- Make sure it’s private
- Enable Default Encryption
- Region doesn’t matter much for personal use
This bucket will store your encrypted Restic snapshots.
3. Create an App Key for Restic
In Backblaze:
- Go to App Keys
- Create a new key
- Give it access to only the bucket you just created
You’ll get:
- Key ID
- Application Key
Keep these safe; they are your credentials.
4. Create a .restic-env file
Restic uses environment variables, which is perfect because we can load them automatically in the script.
Create a file at:
~/.restic-env
And add:
export RESTIC_REPOSITORY="b2:home-server-backups:/"
export B2_ACCOUNT_ID="YOUR_KEY_ID"
export B2_ACCOUNT_KEY="YOUR_APPLICATION_KEY"
export RESTIC_PASSWORD="your-strong-encryption-password"
Why use a file?
It keeps secrets out of your script, avoids hardcoding passwords, and makes automation safe and clean.
Make the file readable only by you:
chmod 600 ~/.restic-env
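The backup script below loads this file automatically, but when you run Restic by hand (as in the next two steps) you need to load it into your shell first:
source ~/.restic-env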
5. Initialize the Restic repository
Before your first backup, Restic must initialize the bucket:
restic init
This creates the encrypted structure inside your B2 bucket.
6. Test the connection
Run a quick check:
restic snapshots
If everything is set correctly, Restic will connect to B2 (and show an empty list for now).
🧰 The Backup Script
Below is the full backup script I use on my home server. It collects the most important parts of each Docker project (compose files, configuration folders, .env secrets, and proper database dumps) and then, if configured, sends everything to Backblaze B2 using Restic.
This script is intentionally simple and general. You can easily customize it for your own setup by adding or removing project folders, adjusting your database containers, or extending it with additional paths. For a small hobby server, a clean and predictable backup like this is usually more valuable than trying to perfectly curate every single file.
📄 Full Backup Script
#!/usr/bin/env bash
set -euo pipefail
#############################################
# 0. BASIC SETTINGS
#############################################
# Where all backups will be stored
BACKUP_BASE="/your/backups/path"
# Folders where your docker-compose projects live (edit these!)
PROJECT_DIRS=()
# MySQL / MariaDB containers to dump (optional)
MYSQL_CONTAINERS=()
POSTGRES_CONTAINERS=()
MYSQL_USER="your_root_user"
MYSQL_PASSWORD="your_root_password"
POSTGRES_USER="your_postgres_user"
POSTGRES_PASSWORD="your_postgres_password"
# How many days of backups to keep (set 0 to disable cleanup)
RETENTION_DAYS=30
#############################################
# 1. LOAD RESTIC ENV (Backblaze B2)
#############################################
RESTIC_ENABLED=false
RESTIC_ENV_PATH="/your/path/.restic-env"
if [ -f "${RESTIC_ENV_PATH}" ]; then
echo "Loading Restic environment from ${RESTIC_ENV_PATH}..."
RESTIC_ENABLED=true
set -a
. "${RESTIC_ENV_PATH}"
set +a
else
echo "WARN: ${RESTIC_ENV_PATH} not found, Restic offsite backup will be skipped."
fi
#############################################
# 2. PREPARE BACKUP FOLDERS
#############################################
TS="$(date +%Y%m%d-%H%M%S)"
BACKUP_DIR="${BACKUP_BASE}/${TS}"
PROJECTS_BASE="${BACKUP_DIR}/projects"
DB_DUMP_DIR="${BACKUP_DIR}/db_dumps"
echo "Creating backup at: ${BACKUP_DIR}"
mkdir -p "${PROJECTS_BASE}" "${DB_DUMP_DIR}"
#############################################
# 2a. BACKUP PER-PROJECT CONFIGS (+ nginx + content)
#############################################
echo "Backing up docker-compose files and project configs..."
for dir in "${PROJECT_DIRS[@]}"; do
if [ -d "$dir" ]; then
proj_name="$(basename "$dir")"
proj_dir="${PROJECTS_BASE}/${proj_name}"
proj_config_dir="${proj_dir}/configs"
proj_env_dir="${proj_dir}/env_files"
mkdir -p "$proj_config_dir" "$proj_env_dir"
echo "== Project: ${proj_name} =="
# Copy docker-compose.yml and any *.yml/yaml files
find "$dir" -maxdepth 2 -type f \
\( -name "docker-compose.yml" -o -name "*.yml" -o -name "*.yaml" \) \
-exec cp {} "$proj_config_dir" \;
# Copy known config-like subfolders if present
# Includes 'content' so Ghost's content/ is backed up too
for sub in config configs conf nginx content; do
if [ -d "$dir/$sub" ]; then
echo " - copying ${sub} from ${dir}"
cp -r "$dir/$sub" "$proj_config_dir/"
fi
done
# Back up .env files for this project (keep them grouped per project)
echo " - copying .env files from ${dir}"
find "$dir" -maxdepth 3 -type f -name ".env" -exec cp {} "$proj_env_dir" \;
else
echo "WARN: Project dir not found: $dir"
fi
done
# Optional: remove any "logs" directories from copied configs to keep backups lean.
# If you do want logs in your backups, comment this out.
echo "Removing logs directories from configs..."
find "${PROJECTS_BASE}" -type d -name logs -prune -exec rm -rf {} +
#############################################
# 3. BACKUP DATABASES
#############################################
# MySQL / MariaDB
if [ "${#MYSQL_CONTAINERS[@]}" -gt 0 ]; then
echo "Dumping MySQL/MariaDB databases..."
for c in "${MYSQL_CONTAINERS[@]}"; do
echo " - dumping from container: $c"
OUT_FILE="${DB_DUMP_DIR}/mysql-${c}-${TS}.sql"
if [ -n "$MYSQL_PASSWORD" ]; then
docker exec "$c" sh -c \
"mysqldump -u\"${MYSQL_USER}\" -p\"${MYSQL_PASSWORD}\" --all-databases" \
> "$OUT_FILE"
else
docker exec "$c" sh -c \
"mysqldump -u\"${MYSQL_USER}\" --all-databases" \
> "$OUT_FILE"
fi
done
fi
# PostgreSQL
if [ "${#POSTGRES_CONTAINERS[@]}" -gt 0 ]; then
echo "Dumping PostgreSQL databases..."
for c in "${POSTGRES_CONTAINERS[@]}"; do
echo " - dumping from container: $c"
OUT_FILE="${DB_DUMP_DIR}/postgres-${c}-${TS}.sql"
docker exec "$c" sh -c \
"PGPASSWORD=\"${POSTGRES_PASSWORD:-}\" pg_dumpall -U \"${POSTGRES_USER}\"" \
> "$OUT_FILE"
done
fi
#############################################
# 4. OFFSITE BACKUP WITH RESTIC (Backblaze B2)
#############################################
if [ "${RESTIC_ENABLED}" = true ]; then
echo "=== Starting Restic backup to Backblaze B2 ==="
echo "Backing up folder: ${BACKUP_DIR}"
# Create a Restic snapshot of this specific backup folder
restic backup "${BACKUP_DIR}" \
--host "home-server" \
--tag "docker-backup"
echo "Applying Restic retention policy (7 daily, 4 weekly, 6 monthly)..."
restic forget \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly 6 \
--prune
echo "=== Restic backup completed ==="
else
echo "Skipping Restic backup (RESTIC_ENABLED=false or ~/.restic-env missing)."
fi
#############################################
# 5. TAR + COMPRESS WHOLE BACKUP
#############################################
echo "Creating compressed archive..."
cd "${BACKUP_BASE}"
tar czf "${TS}.tar.gz" "${TS}"
# Delete the uncompressed directory now that it has been archived (comment this out to keep it):
rm -rf "${BACKUP_DIR}"
#############################################
# 6. CLEANUP OLD BACKUPS
#############################################
if [ "$RETENTION_DAYS" -gt 0 ]; then
echo "Cleaning up backups older than ${RETENTION_DAYS} days..."
find "${BACKUP_BASE}" -maxdepth 1 -type f -name "*.tar.gz" -mtime +"${RETENTION_DAYS}" -print -delete
fi
echo "✅ Backup completed: ${BACKUP_BASE}/${TS}.tar.gz"
What you should change for your own setup
This script is meant to be a simple starting point, so adapting it to your environment only requires a few small changes (see the example settings after this list):
- BACKUP_BASE: Set this to the folder where your backups should be stored. It’s best to use an external hard drive or a separate disk so that a system failure doesn’t wipe out both your server and your backups.
- PROJECT_DIRS: Replace these paths with the folders where your own Docker projects live. Each folder should contain a docker-compose.yml file and any configuration directories you want backed up.
- MYSQL_CONTAINERS / POSTGRES_CONTAINERS: Add the names of your database containers here if you want the script to create dumps for them. Example: "nextcloud_mysql", "wordpress_db".
- MYSQL_USER, MYSQL_PASSWORD, POSTGRES_USER, POSTGRES_PASSWORD: Update these credentials so the script can access your database containers and generate clean dumps.
- Retention policy: Adjust RETENTION_DAYS to control how long local .tar.gz archives are kept before being automatically removed.
- Restic environment file: Restic loads its credentials from the file set in RESTIC_ENV_PATH (for example ~/.restic-env). Make sure this file exists and contains your Backblaze details and Restic password if you want offsite backups enabled.
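As a concrete illustration, the settings block at the top of the script could look roughly like this for a hypothetical setup with a Ghost blog and a Nextcloud instance (every path, container name, and credential below is a placeholder):
BACKUP_BASE="/mnt/external/backups"
PROJECT_DIRS=(
  "/home/youruser/docker/ghost"
  "/home/youruser/docker/nextcloud"
)
MYSQL_CONTAINERS=("ghost_db" "nextcloud_mysql")
POSTGRES_CONTAINERS=()
MYSQL_USER="root"
MYSQL_PASSWORD="your_root_password"
POSTGRES_USER=""
POSTGRES_PASSWORD=""
RETENTION_DAYS=30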
Everything else can stay exactly as it is unless you want to extend the script with extra folders, services, or custom logic.
A quick note on backup strategy (3-2-1 rule)
A widely recommended approach to backups is the 3-2-1 rule:
- 3 copies of your important data
- 2 different storage types
- 1 offsite copy
With this setup, you naturally follow that pattern:
- Your running server environment counts as the first copy
- The compressed archive on your external hard drive serves as the second copy
- Restic’s encrypted backup to Backblaze B2 provides the third, offsite copy
The script also applies a Restic retention policy (via restic forget):
- 7 daily snapshots
- 4 weekly snapshots
- 6 monthly snapshots
This keeps your cloud backups tidy and manageable without any manual cleanup.
If you want to reduce cloud storage usage, you can adjust these values or even remove the daily snapshots completely.
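For example, dropping the daily snapshots and keeping only weekly and monthly ones would mean changing the restic forget call in the script to:
restic forget \
--keep-weekly 4 \
--keep-monthly 6 \
--prune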
Overall, this approach offers a solid, hobbyist-friendly backup system that stays simple while still following proven best practices.
🛠️ Running the Backup Script (Manually or Automatically)
Once you’ve saved the backup script and made it executable with:
chmod +x backup.sh
you can run it manually anytime by simply calling:
./backup.sh
This will create a full snapshot of your Docker projects, database dumps, and configuration files. If Restic is configured correctly, it will also upload the backup to Backblaze B2.
Automating it with cron
You can schedule the script using cron, which lets you run backups automatically at a chosen time.
Edit your cron jobs with:
crontab -e
To run the backup once a week (Monday at 03:00) and keep a simple log of only the latest run, add:
0 3 * * 1 /your/path/backup-script.sh > /your/path/backup.log 2>&1
This has two advantages:
- You always have a log of the most recent backup, which is helpful for debugging if something fails.
- The log never grows endlessly: each new run overwrites the previous backup.log, keeping things clean.
You can change the schedule however you like — weekly, daily, monthly, or custom times depending on how frequently your data changes.
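For reference, a daily run at 03:00 or a monthly run on the first of the month would look like this (the paths are placeholders, as above):
0 3 * * * /your/path/backup-script.sh > /your/path/backup.log 2>&1
0 3 1 * * /your/path/backup-script.sh > /your/path/backup.log 2>&1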
Checking your active cron jobs
To see all cron jobs for your user:
crontab -l
This helps you verify that your automation is correctly installed.
Verifying your backups
No backup is complete unless it can actually be restored. Even though I’m not covering full restore steps in this post, it’s a good idea to occasionally test your backups. For example, you can spin up a temporary MySQL or PostgreSQL container and import your dumps, or restore your compose files on a spare machine or VM. A quick verification once in a while gives you confidence that everything works when you need it most.
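As a rough sketch of such a test (the restore folder, container name, password, image tag, and dump filename are all placeholders), you could pull the latest snapshot and import a MySQL dump into a throwaway container:
# Restore the latest Restic snapshot from B2 into a local folder (load ~/.restic-env first)
restic restore latest --target ./restore-test
# Start a disposable MySQL container and import one of the dumps
docker run --rm -d --name restore-check -e MYSQL_ROOT_PASSWORD=testpass mysql:8
sleep 30   # give MySQL time to initialize before importing
docker exec -i restore-check mysql -uroot -ptestpass < ./restore-test/path/to/mysql-dump.sql
# Clean up the temporary container when you're done checking
docker stop restore-check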
✅ Final Thoughts
A good backup strategy doesn’t need to be complicated. For most home servers, the goal is simply to make sure you can recover your important data without stress — your blog posts, your configs, your secrets, your databases, and the setup that ties everything together.
By combining a lightweight backup script with Restic and Backblaze B2, you get a system that is:
- simple to understand
- easy to maintain
- secure and encrypted
- automated and predictable
- resilient against hardware failure or accidental deletion
This setup follows the core idea behind the 3-2-1 rule: your live system, a local backup copy, and an offsite encrypted copy. For a small hobby server, that’s more than enough to sleep comfortably at night.
This script doesn’t try to handle every possible folder or Docker volume, and that’s okay. You can keep things as simple as I did, or extend it with more paths and services if your setup grows. That flexibility is the beauty of running your own server.