Backup & Restore
Regular backups are not optional — they are essential. A server can fail at any time, an update can go wrong, or a misconfiguration can corrupt data. Without a backup, there is no second chance. This page shows how to reliably back up FediSuite — manually, automatically, and externally — and how to fully restore it in an emergency.
What needs to be backed up?
./postgres/ — PostgreSQL database
All user accounts, scheduled posts, connected Fediverse accounts, settings, and analytics data. By far the most important item.
Method: pg_dump (not a directory copy)
./uploads/ — User uploads
Media files uploaded by users — images and other attachments for posts.
Method: tar archive
.env — Configuration file
All passwords, secrets, and settings. Without the .env, recovery is impossible without reconfiguring everything manually.
Method: encrypted copy
./plugins/ — Installed plugins
Self-installed plugins. Can be re-downloaded if needed — still worth backing up.
Method: tar archive
docker-compose.yml — Compose configuration
The stack definition. Can be re-cloned from the repository at any time — still useful for backing up your own customizations.
Method: simple copy
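Run from the FediSuite directory, a quick way to see how much space these items currently take, and therefore roughly how large your backups will be:
# Show the size of every backup-relevant item
# (run as root if the PostgreSQL data directory is not readable by your user)
du -sh ./postgres/ ./uploads/ ./plugins/ .env docker-compose.yml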
The three principles
The 3-2-1 rule
At least 3 copies, on 2 different media/systems, with 1 at a different physical location. A backup on the same server doesn't protect against server loss.
Automation
A backup that must be created manually will sooner or later be forgotten. Set up a cron job that backs up automatically every day — and let it run without thinking about it.
Test your restores
An untested backup is not a backup. Regularly verify that you can actually restore from your backups — before an emergency strikes.
Back up the database (pg_dump)
The database must not be backed up by simply copying the ./postgres/ directory while PostgreSQL is running. The data files can be in an inconsistent state at that point and may be corrupt after a restore.
The correct approach is pg_dump — the official PostgreSQL tool for logical backups. It creates a consistent snapshot of the database as an SQL dump while the database is running, independent of the internal file structure.
Simple dump (uncompressed)
Creates a readable SQL file — good for manually inspecting the contents.
docker compose exec -T db \
pg_dump -U fedisuite fedisuite \
> backup_$(date +%Y-%m-%d).sql
Compressed dump (recommended)
Gzip-compressed — significantly smaller, directly suitable for external storage.
docker compose exec -T db \
pg_dump -U fedisuite fedisuite \
| gzip > backup_$(date +%Y-%m-%d_%H-%M-%S).sql.gz
Verify backup contents
Shows the first lines of the dump without fully extracting it.
gunzip -c backup_2026-05-01_12-00-00.sql.gz | head -30
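As a second quick check: recent PostgreSQL versions end a plain-format dump with a completion marker, so the last lines should contain "PostgreSQL database dump complete". If they don't, the dump was most likely interrupted.
gunzip -c backup_2026-05-01_12-00-00.sql.gz | tail -5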
Why not simply copy ./postgres/? PostgreSQL keeps its data files on disk in a state that is not guaranteed to be consistent to outside readers while the server is running. A snapshot of the directory may contain incomplete transactions, half-written pages, or orphaned files. pg_dump instead queries the database over the normal PostgreSQL protocol and therefore receives guaranteed-consistent data.
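As an alternative to the plain SQL dumps used on the rest of this page, pg_dump's custom format (-Fc) produces an already-compressed dump that supports selective restores; it is restored with pg_restore instead of psql. A minimal sketch:
# Dump in custom format (compressed by pg_dump itself)
docker compose exec -T db \
pg_dump -U fedisuite -Fc fedisuite \
> backup_$(date +%Y-%m-%d).dump
# Restore a custom-format dump with pg_restore
docker compose exec -T db \
pg_restore -U fedisuite -d fedisuite \
< backup_2026-05-01.dump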
Back up files
In addition to the database, uploads and configuration must be backed up. Uploads can take up a lot of storage depending on usage intensity — factor this in when choosing your backup destination.
Archive uploads
Creates a compressed archive of the entire uploads directory.
tar -czf uploads_$(date +%Y-%m-%d_%H-%M-%S).tar.gz ./uploads/
Back up .env
The .env contains passwords and secrets — back it up separately and encrypted.
# Simple copy (only if the backup destination itself is encrypted)
cp .env backups/env_$(date +%Y-%m-%d)
# Encrypted with GPG (recommended for external storage)
gpg --symmetric --cipher-algo AES256 \
--output backups/env_$(date +%Y-%m-%d).gpg .env
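To confirm the encrypted copy can actually be decrypted later, a quick round-trip check (gpg asks for the passphrase; the command prints nothing if the decrypted content matches the original):
gpg --decrypt backups/env_$(date +%Y-%m-%d).gpg | diff - .env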
Back up plugins & compose file
Optional, but recommended if you have custom modifications.
tar -czf plugins_$(date +%Y-%m-%d).tar.gz ./plugins/
cp docker-compose.yml backups/docker-compose_$(date +%Y-%m-%d).yml
Automated backup script
The following script backs up the database, uploads, and configuration in one step, timestamps each backup, and automatically cleans up old backups. Adjust FEDISUITE_DIR and BACKUP_DIR to your actual paths.
#!/bin/bash
set -euo pipefail
# ── Configuration ────────────────────────────────────────────────
FEDISUITE_DIR="/opt/fedisuite" # Path to the FediSuite directory
BACKUP_DIR="/var/backups/fedisuite" # Destination for backups
KEEP_DAYS=7 # Delete backups older than X days
DB_NAME="fedisuite"
DB_USER="fedisuite"
DATE=$(date +%Y-%m-%d_%H-%M-%S)
LOG_PREFIX="[fedisuite-backup][$DATE]"
# ── Preparation ──────────────────────────────────────────────────
mkdir -p "$BACKUP_DIR"
cd "$FEDISUITE_DIR"
echo "$LOG_PREFIX Start"
# ── 1. Database (pg_dump) ────────────────────────────────────────
echo "$LOG_PREFIX Backing up database..."
docker compose exec -T db \
pg_dump -U "$DB_USER" "$DB_NAME" \
| gzip > "$BACKUP_DIR/db_${DATE}.sql.gz"
echo "$LOG_PREFIX ✓ Database: db_${DATE}.sql.gz ($(du -sh "$BACKUP_DIR/db_${DATE}.sql.gz" | cut -f1))"
# ── 2. Uploads ───────────────────────────────────────────────────
echo "$LOG_PREFIX Backing up uploads..."
tar -czf "$BACKUP_DIR/uploads_${DATE}.tar.gz" \
-C "$FEDISUITE_DIR" uploads/
echo "$LOG_PREFIX ✓ Uploads: uploads_${DATE}.tar.gz ($(du -sh "$BACKUP_DIR/uploads_${DATE}.tar.gz" | cut -f1))"
# ── 3. Configuration ─────────────────────────────────────────────
echo "$LOG_PREFIX Backing up configuration..."
cp "$FEDISUITE_DIR/.env" "$BACKUP_DIR/env_${DATE}"
cp "$FEDISUITE_DIR/docker-compose.yml" "$BACKUP_DIR/compose_${DATE}.yml"
echo "$LOG_PREFIX ✓ .env and docker-compose.yml backed up"
# ── 4. Clean up old backups ──────────────────────────────────────
echo "$LOG_PREFIX Cleaning up backups older than ${KEEP_DAYS} days..."
find "$BACKUP_DIR" -maxdepth 1 -type f -mtime "+$KEEP_DAYS" -delete
echo "$LOG_PREFIX ✓ Cleanup complete"
echo "$LOG_PREFIX Backup completed successfully."
echo "$LOG_PREFIX Backup directory: $BACKUP_DIR"
ls -lh "$BACKUP_DIR" | tail -10
Install and make the script executable
# Create the script
nano /usr/local/bin/fedisuite-backup.sh
# Make it executable
chmod +x /usr/local/bin/fedisuite-backup.sh
# Test it once
/usr/local/bin/fedisuite-backup.sh
Set up a cron job
A cron job runs the backup script automatically and regularly. Open the root user's crontab and add an entry:
crontab -e
Add one of the following lines — depending on the desired frequency:
# Daily at 03:00 (recommended)
0 3 * * * /usr/local/bin/fedisuite-backup.sh >> /var/log/fedisuite-backup.log 2>&1
# Twice daily: 03:00 and 15:00
0 3,15 * * * /usr/local/bin/fedisuite-backup.sh >> /var/log/fedisuite-backup.log 2>&1
# Hourly (for very active instances)
0 * * * * /usr/local/bin/fedisuite-backup.sh >> /var/log/fedisuite-backup.log 2>&1
Check the backup log
Regularly verify that backups are actually completing successfully.
tail -50 /var/log/fedisuite-backup.log
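Since the script prints a "Backup completed successfully." line at the end of every successful run, you can also search for just that marker to confirm the most recent run went through:
# Show the most recent successful run (the log prefix contains the timestamp)
grep "Backup completed successfully" /var/log/fedisuite-backup.log | tail -1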
Backup rotation
Without rotation, the backup directory grows without limit. The script above already deletes backups older than KEEP_DAYS days. For a more granular strategy — daily for 7 days, weekly for 4 weeks, monthly for 3 months — you can extend the rotation in the script as follows:
# Daily backups: keep for 7 days (same as the cleanup step above)
find "$BACKUP_DIR" -maxdepth 1 -type f -mtime +7 -delete
# Create the subdirectories for the longer-lived copies
mkdir -p "$BACKUP_DIR/weekly" "$BACKUP_DIR/monthly"
# Weekly backups: keep for 4 weeks (copy every Sunday)
if [ "$(date +%u)" = "7" ]; then
  cp "$BACKUP_DIR/db_${DATE}.sql.gz" "$BACKUP_DIR/weekly/"
  find "$BACKUP_DIR/weekly" -maxdepth 1 -type f -mtime +28 -delete
fi
# Monthly backups: keep for 3 months (copy on the 1st of each month)
if [ "$(date +%d)" = "01" ]; then
  cp "$BACKUP_DIR/db_${DATE}.sql.gz" "$BACKUP_DIR/monthly/"
  find "$BACKUP_DIR/monthly" -maxdepth 1 -type f -mtime +90 -delete
fi
Store backups externally
A backup on the same server doesn't protect against data loss due to server failure, theft, or data center problems. Transfer backups regularly to an external destination. rclone is the recommended tool for this — it supports over 70 cloud providers with a unified interface.
Install rclone
curl https://rclone.org/install.sh | sudo bash
Configure a remote destination
Interactive wizard — choose your provider (e.g. S3, Backblaze B2, SFTP, Hetzner Storage Box).
rclone config
Upload backups
Syncs the local backup directory to the remote destination: only new or changed files are transferred, and files removed locally (for example by the rotation) are also deleted remotely.
# Example: Hetzner Storage Box via SFTP
rclone sync /var/backups/fedisuite hetzner:fedisuite-backups --progress
# Example: Backblaze B2
rclone sync /var/backups/fedisuite b2:my-bucket/fedisuite --progress
# Example: any SFTP server
rclone sync /var/backups/fedisuite sftp-backup:backups/fedisuite --progress
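After the first sync it is worth confirming that the files actually arrived on the remote. rclone can list the remote contents or compare them against the local directory (remote name as in the Hetzner example above):
# List what is stored on the remote
rclone ls hetzner:fedisuite-backups
# Compare local and remote contents (reports missing or differing files)
rclone check /var/backups/fedisuite hetzner:fedisuite-backups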
Append the rclone upload at the end of the backup script so that every backup is automatically stored externally:
# ── 5. Transfer externally ───────────────────────────────────────
echo "$LOG_PREFIX Transferring backups externally..."
rclone sync "$BACKUP_DIR" hetzner:fedisuite-backups \
--quiet --log-level ERROR
echo "$LOG_PREFIX ✓ External transfer complete"
.env contains passwords and the JWT secret — encrypt it before external transfer. Use GPG or make sure the external destination itself is fully encrypted.
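One way to do this within the backup script is to replace the plain cp of the .env in step 3 with a symmetric GPG encryption. This is a sketch that assumes the passphrase is stored in a root-only readable file such as /root/.fedisuite-backup-passphrase (a hypothetical path); depending on your GnuPG version, --pinentry-mode loopback may not be required:
# Step 3 variant: encrypt .env instead of copying it in plain text
gpg --batch --yes \
--pinentry-mode loopback \
--passphrase-file /root/.fedisuite-backup-passphrase \
--symmetric --cipher-algo AES256 \
--output "$BACKUP_DIR/env_${DATE}.gpg" \
"$FEDISUITE_DIR/.env"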
Restore: step by step
In an emergency — whether a failed update, a corrupted filesystem, or a server migration — restore FediSuite completely as follows. Execute the steps in exactly this order.
1. Stop the stack
docker compose down
2. Restore configuration (if needed)
# Copy .env back from backup
cp /var/backups/fedisuite/env_DATE .env
# Or decrypt if saved with GPG
gpg --decrypt /var/backups/fedisuite/env_DATE.gpg > .env
3. Start only the database container
PostgreSQL must be running before you can restore data. The app itself is started again in step 7.
docker compose up -d db
# Wait until the health check passes
docker compose ps
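If you are scripting the restore, you can wait in a small loop until PostgreSQL accepts connections instead of checking manually (pg_isready ships with the official PostgreSQL image):
# Wait until PostgreSQL accepts connections
until docker compose exec -T db pg_isready -U fedisuite; do
  sleep 2
done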
4. Drop and recreate the database
docker compose exec -T db \
psql -U fedisuite postgres \
-c "DROP DATABASE IF EXISTS fedisuite;"
docker compose exec -T db \
psql -U fedisuite postgres \
-c "CREATE DATABASE fedisuite;"
5. Restore the database
# Restore from compressed dump
gunzip -c /var/backups/fedisuite/db_DATE.sql.gz \
| docker compose exec -T db \
psql -U fedisuite fedisuite
# Restore from uncompressed dump
docker compose exec -T db \
psql -U fedisuite fedisuite \
< /var/backups/fedisuite/db_DATE.sql
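A quick sanity check right after the import: the table list should not be empty.
docker compose exec -T db \
psql -U fedisuite fedisuite -c "\dt"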
6. Restore uploads
# Move an existing uploads folder aside (if present)
[ -d "./uploads" ] && mv ./uploads ./uploads.old
# Restore from backup
tar -xzf /var/backups/fedisuite/uploads_DATE.tar.gz -C ./
7. Start the full stack
docker compose up -d
docker compose logs -f app
The app detects the restored database, runs init-db.js (idempotent — only creates missing structures, leaves existing data untouched) and then starts normally.
Test the restore
A backup that has never been tested might not work when you need it — and you'd only find out when it's too late. Test at least once a month whether your backups can actually be restored.
Quick integration test of the database backup
You can restore the dump into a temporary container without touching your production database:
# Start a temporary PostgreSQL container
docker run -d --name fedisuite-restore-test \
-e POSTGRES_USER=fedisuite \
-e POSTGRES_PASSWORD=test \
-e POSTGRES_DB=fedisuite \
postgres:15-alpine
# Restore the dump
gunzip -c /var/backups/fedisuite/db_DATE.sql.gz \
| docker exec -i fedisuite-restore-test \
psql -U fedisuite fedisuite
# Check tables — should list all FediSuite tables
docker exec -it fedisuite-restore-test \
psql -U fedisuite fedisuite -c "\dt"
# Clean up test container
docker rm -f fedisuite-restore-test
Monthly restore test checklist
- Backup script runs without errors (check the log)
- Database dump is not empty (gunzip -c backup.sql.gz | head -5)
- Dump can be restored into a test container (\dt shows tables)
- Uploads archive can be extracted (tar -tzf uploads.tar.gz | head)
- rclone transfer to external destination succeeded
- .env backup is present and readable (or decryptable)
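The non-interactive parts of this checklist can also be bundled into a small test script. This is a sketch under the assumptions used on this page (backup directory /var/backups/fedisuite, database user fedisuite, image postgres:15-alpine); it always picks the newest dump and archive:
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/var/backups/fedisuite"
# Pick the newest database dump and uploads archive
DB_DUMP=$(ls -t "$BACKUP_DIR"/db_*.sql.gz | head -1)
UPLOADS=$(ls -t "$BACKUP_DIR"/uploads_*.tar.gz | head -1)
# 1. Dump decompresses cleanly
gunzip -t "$DB_DUMP" && echo "✓ Dump is readable: $DB_DUMP"
# 2. Dump restores into a throwaway container and contains tables
docker run -d --name fedisuite-restore-test \
-e POSTGRES_USER=fedisuite -e POSTGRES_PASSWORD=test -e POSTGRES_DB=fedisuite \
postgres:15-alpine
# Wait for the final server (TCP only becomes available after initialization)
until docker exec fedisuite-restore-test pg_isready -h 127.0.0.1 -U fedisuite; do sleep 2; done
gunzip -c "$DB_DUMP" | docker exec -i fedisuite-restore-test psql -U fedisuite fedisuite
docker exec fedisuite-restore-test psql -U fedisuite fedisuite -c "\dt"
docker rm -f fedisuite-restore-test
# 3. Uploads archive is readable
tar -tzf "$UPLOADS" > /dev/null && echo "✓ Uploads archive is readable: $UPLOADS"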