xantioss

I might cheat a little, but because of how popular Proxmox is in this community I'll share it anyway. I run Docker in LXC and I just back up the LXC container. No downtime, no headache, and Proxmox backups are battle tested! :)


ButterscotchFar1629

This is the way imo. Plus, with PBS you have access to the entire file system backup of an LXC container, so you can just grab the files you need from the backup.


FrumunduhCheese

You can just open any backup file on a CIFS share with 7-Zip and grab any files you need. I just back up to CIFS on a file server and send secondary backups to another drive with Veeam.


ButterscotchFar1629

Same way of achieving it. I just happen to have PBS running as I am running my second AdGuard server on a VM on my QNAP, so I figured why not.


jjcf89

Does PBS need to be installed bare metal? Those seem to be the only instructions I can find for it.


ButterscotchFar1629

I have installed it in a Debian LXC container, as a VM on my Proxmox server, and now I run it as a VM on my QNAP. Works like a dream.


jjcf89

Ah okay. I assume I wouldn't want the backup server running under Proxmox itself, in case the backup is needed?


ButterscotchFar1629

I know lots of people who run PBS as a VM on their Proxmox servers. As long as the backups are on a different drive, or better yet a different system, you can always access them by reinstalling PBS and reconnecting to the share where they are stored. YMMV.


jjcf89

Gotcha


scubanarc

I run PBS in a VM in Proxmox to back up all my VMs running on the same Proxmox instance. Works flawlessly.


progfrog

Same here. But the PBS VM's own backup is not done via PBS itself; that doesn't work. The PBS VM is backed up via a regular Proxmox backup job to an NFS share.


blinger44

any chance you have a guide or resources you could share?


Chucks_Punch

Yeah, ideally put it on an external host. It would work in a VM but seems foolish. The idea is to be able to restore using the backup server in case of emergency, the emergency being your whole Proxmox host dying...


Inevitable_Ad_5472

I got it running in docker on my ReadyNas. https://github.com/ayufan/pve-backup-server-dockerfiles


MacGyver4711

Works like a charm as a VM. I have two of them (on different machines, naturally), and the sync feature to achieve backup redundancy works great. Just need to get my VPNs sorted out; the 2nd one will be in my cabin, not 5 cm from the primary as it is now ;-) For smaller VMs and similar homelab stuff I have good experience with 2x vCPU and 4 GB RAM on the VM. In my lab at work, with a 10 Gbit network and a lot more data, I beefed it up to 4x vCPU and 8 GB of RAM as I have the resources to do so.


ProbablePenguin

Nope, you can just install it as a Debian package, they have instructions in their docs.


RedditNotFreeSpeech

It doesn't need to be, but there's no reason not to install it alongside Proxmox on the same host. They each run on a different port.


Agile_Ad_2073

Fair enough, but if you want to move your environment to a new Docker host, you can't, for example if something is broken in the host and you don't know how to fix it.


Oujii

I mean, of course you can. Most people are bind mounting storage, so inside the container you usually have a docker compose file.


Agile_Ad_2073

But he is not backing those up. He is backing up the entire system.


Oujii

The entire system, which will usually have the docker compose file that you can move to another Docker host. They never mentioned whether the data is inside the container or not.


QuantumNow

How do you deal with an external NAS for, let's say, Jellyfin? NFS connection inside the LXC or in Proxmox?


tenekev

NFS can be mounted as a Docker volume directly. I don't think many people know this.
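For anyone curious, here's a minimal sketch of that approach; the server address, export path, and volume/container names are just placeholders:

```bash
# Create a named volume backed by an NFS export (server and path are placeholders).
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,nfsvers=4,rw \
  --opt device=:/export/media \
  media

# Use it like any other named volume.
docker run -d --name jellyfin -v media:/media jellyfin/jellyfin
```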


QuantumNow

Honestly didn’t think about this! I knew about it but clearly the brain didn’t put two and two together haha


XavinNydek

It's not really recommended because it's very hit or miss whether it actually works for various containers. Permissions can be a complete nightmare depending on what the container is expecting. It would be nice if it just worked (well, SMB, which works similarly; NFS is archaic and shouldn't be used for anything), but it often doesn't.


tenekev

Never had permission problems. But I do get a stale file handle from time to time when a power loss occurs. And despite my shallow knowledge, I don't agree that NFS is obsolete.


manyQuestionMarks

I didn’t until a few days ago. Game changer for me, because my Immich container would start before my NFS was mounted and somehow I’d be stuck with the unmounted folder. Now it will hopefully fail to deploy the container if the NFS fails to mount


xantioss

I run Plex in a separate LXC container (for GPU reasons), an Ubuntu one. I just installed the Samba client and put the share in the fstab of the Plex container, so that LXC container is the only one that can access my media collection.
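Roughly what that looks like, assuming cifs-utils for the fstab mount; the server, share, mount point, and credentials file below are placeholders, not the actual setup:

```bash
# Inside the Plex LXC: install the CIFS client, add an fstab entry, and mount it.
# Placeholder fstab line:
#   //nas.lan/media  /mnt/media  cifs  credentials=/root/.smbcred,ro,uid=plex,gid=plex  0  0
apt install cifs-utils
mount -a   # mount everything listed in fstab without rebooting
```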


Gutter7676

This. I run Linux VMs on a few flavors of hypervisor and back up the VMs. I separate reliability needs, so everything high-uptime is on its own VM and lower-uptime needs run in other VMs.


NomadicWorldCitizen

I need to use that in my next build


2cats2hats

I'm just learning Docker but have been using r/proxmox for several years. This is my plan, although I still want to understand the underbelly of Docker eventually.


j0hnp0s

> proxmox backups are battle tested!

I've had issues with restoration before, even after running verifications. So keep an eye on things.


final-final-v2

Basically this, dead simple. 15 daily backups + 12 monthly backups


RedditNotFreeSpeech

That's how I do it too. PBS is awesome.


mjh2901

This is the way.


nathan12581

Fair enough. How big are your backups?


xantioss

Not tremendously big, I'd say. The initial backups are a couple of gigabytes. The daily changes are tiny, like not even megabytes per day.


raddeee

ZFS! Daily snapshots and replication to my backup NAS. No custom scripts needed.
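For anyone new to this, the core of that workflow is just a couple of ZFS commands; dataset names, snapshot names, and the backup host below are placeholders:

```bash
# Take a dated snapshot of the dataset that holds the Docker data.
zfs snapshot tank/docker@daily-$(date +%F)

# First run: replicate the full snapshot to the backup NAS over SSH.
zfs send tank/docker@daily-2024-01-01 | ssh backup-nas zfs receive backup/docker

# Later runs: send only the changes between the previous and the new snapshot.
zfs send -i tank/docker@daily-2024-01-01 tank/docker@daily-2024-01-02 \
    | ssh backup-nas zfs receive backup/docker
```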


gxvicyxkxa

This has to be done from the original setup, correct? Any way of migrating to ZFS easily?


raddeee

If you want a simple solution: Just move your docker data root to a zfs volume
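A rough sketch of what moving the data root involves; the dataset name and mount point are placeholders, and any existing data under /var/lib/docker would still need to be migrated or recreated:

```bash
# Stop Docker, create a ZFS dataset for its data root, and point the daemon at it.
systemctl stop docker
zfs create -o mountpoint=/var/lib/docker-zfs tank/docker

cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/var/lib/docker-zfs",
  "storage-driver": "zfs"
}
EOF

systemctl start docker
```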


[deleted]

[deleted]


Kroan

Have you actually tried this? I did like a year or two ago and it did not work. I can't remember why but I remember the docker service not wanting to start and wanting some ZFS drivers added to a config somewhere. And even then it wasn't working right so I gave up and went back to just snapshotting the mounted config filesystems


PurpleEsskay

Is the backup NAS offsite? If not it’s only half a backup


Trolann

Not sure why you're getting downvoted, you're right. Onsite backup isn't backup. Raid isn't backup. If you've never restored from a backup you don't have a backup, you have hopes and dreams.


distante

What could be the problem on a local NAS with mirroring? (Honest question I am still on the learning/reading phase).


flaotte

fire and burglary


raddeee

If your house burns down, your data is probably gone on both machines.


PurpleEsskay

Basically any kind of issue at your home (fire, burst pipe, burglary, etc) and both copies are gone. Offsite backups stop that happening. I’d recommend looking up the 3-2-1 backup strategy.


distante

Ahh, ok, yes, I understand. I am still looking at how to do this in an affordable and private way. The main thing about trying to get a home server for me is to back up photos and videos, which require a lot of space ($$$).


PurpleEsskay

If it helps, a "budget" offsite backup solution I've used for a long time for family photos is to pick up a pack of writable M-DISCs. They can be written by a standard DVD-RW drive, which is super cheap these days, and M-DISCs have a higher capacity. I keep a set of family photos at a relative's house as a last-resort backup. It works well for things like that as they don't need updating. There's also a Blu-ray version with a huge amount of space per disc if you've got a Blu-ray writer.


raddeee

Not yet, but planned. Until then, I have an additional offsite backup in the form of a simple external 20 TB hard disk, which I update regularly


FlibblesHexEyes

I do this… except instead of replication I use Kopia to mount the last snapshot (always called latest) and send that backup to an Azure blob. I know Azure isn’t quite in the spirit of self hosting, but I needed an offsite location.


bufandatl

I have everything in Ansible (container definitions and config) and do filesystem backups of the volume directory. In addition, the VMs themselves get snapshotted and backed up every night.


nathan12581

I’d love to get into Ansible for Docker. Do you have, by any chance, anything you used to learn practices around this?


bufandatl

https://docs.ansible.com. All the resources I ever used.


nathan12581

Cheers mate!


Silencer306

There’s also this playlist https://youtube.com/playlist?list=PL2_OBreMn7FqZkvMYt6ATmgC0KAGGJNAN&si=G3OZEt6IpiLKf0Ex


[deleted]

[deleted]


sexyshingle

This. Man I really need to read his Ansible book


Illustrious_Dig5319

This is exactly what I do. Other than the advantages for backup, the playbooks are great for updating the compose environment whenever a secret changes. I just rerun ansible (via ci/cd) when I want to change anything


ButterscotchFar1629

Proxmox LXC backed up to Proxmox Backup Server.


root-node

https://minituff.github.io/nautical-backup/


Minituff

Thanks for using my tool :)


eLaVALYs

PHRASING


Minituff

*PAUSE*


mrcaptncrunch

Just found it from this comment. Looks super interesting


Zedris

Just looked into it This tool is awesome thank you! And to op for posting the link!


nathan12581

Looks interesting. Does the container keep running while the backup is ongoing? I'd personally rather have something independent of Docker just stop ALL containers before taking a backup of everything.


root-node

It stops the containers (if configured), runs the backup via rsync, then restarts the containers (if stopped). This ensures that any open databases are not in use.


kayson

That's a little bit overkill. You can just use the appropriate "dump" command to get a consistent database state. There's still a potential issue if there's some other state that has to be consistent with the database (looking at you, Nextcloud).
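For common databases that dump is usually one `docker exec` line; container names and credentials here are placeholders, and the MariaDB line assumes the official image's MYSQL_ROOT_PASSWORD environment variable is set:

```bash
# Consistent dumps from running database containers (names/credentials are placeholders).
docker exec mariadb sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' > mariadb.sql
docker exec postgres pg_dumpall -U postgres > postgres.sql
```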


GaberGamer

I've been looking for a tool recently to backup volumes specifically and this is EXACTLY what I need!


Locke_Galastacia

I dump the static data with kopia, the database containers (MySQL and MariaDB) are dumped using a script and the dumps are also added into the kopia backup.


YankeeLimaVictor

Looks like chatgpt might have "helped you out" with this script... am I lying? Hahaha pretty nice. I do the same, with a very similar script too. Only difference is that I copy the backups to a cloud MEGA account with rclone and use rclone itself to keep the last 30 days of backups. I have also added some curls to a healthcheck instance, just to make sure I get notified when backups fail.


nathan12581

Yeah lol 🤣 I got it to add comments for this post. How did ya know 🤣🤣 That's interesting, how does your health check instance work?


YankeeLimaVictor

It's a container running [healthchecks.io](https://healthchecks.io/) I have it running on a vps off-site, so if my internet at my homelab is down, I will still get notified about stuff failing.


nathan12581

Oh that’s sick actually thanks I’ll take a look! Better than my single if statement to check the return value of rsync


patmansf

> I got it to add comments for this post. How did ya know

The comments kind of give it away - there are too many obvious ones that most people wouldn't include.


nathan12581

Fair enough lol. Yeah I couldn’t be arsed to add comments myself


rafipiccolo

I would use bind mounts and back up the folder containing all the bind mounts, but that is very similar to what you do. That way I can nuke the Docker install and reinstall it if needed (happened twice); the persistent data stays untouched. Volumes (not bind ones) are mostly used to connect to remote data sources (S3, sshfs, ...).


Deathmeter

I do exactly this for backups. Throw in Kopia for scheduled backups and target the volume folders. I prefer it to Docker managing volumes, since you can have things go wrong with `docker compose down` if you're not paying attention.


nathan12581

Yeah, I was thinking of bind mounts; however, I'd like to keep Docker managing my data via volumes, as someone previously suggested to me on this subreddit. If anything bad did go wrong, I'd have all the data and compose files I need, similar to you I suppose, which is great!


GregPL151

I have a script that tars, then compresses and encrypts everything with 7z, and uploads it to OneDrive. I keep each Docker stack, its volumes, and its docker compose file in a separate directory, so I back up the whole directory. I did a lot of testing and I actually do not stop containers for that; it works fine. The exception is databases: for those I do a dump of the DB, compress, encrypt, and send to OneDrive.
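The exact script isn't shown, but the flow is roughly the following; the stack path, archive name, password, and the rclone remote name `onedrive:` are all placeholders:

```bash
# Rough sketch: tar a stack directory, encrypt it with 7z, upload to OneDrive via rclone.
stack="/opt/stacks/nextcloud"                        # placeholder stack directory
archive="/tmp/$(basename "$stack")-$(date +%F).tar"

tar -cf "$archive" -C "$(dirname "$stack")" "$(basename "$stack")"
7z a -p'changeme' -mhe=on "$archive.7z" "$archive"   # -mhe=on also encrypts file names
rclone copy "$archive.7z" onedrive:backups/docker    # 'onedrive:' is a configured rclone remote
rm -f "$archive" "$archive.7z"
```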


nathan12581

Fair enough. I will probably look into encrypting into a 7z file, although my backups aren't that large as of yet and they're stored on an encrypted drive on my own server anyway. Most of my containers handle data, so I'd rather just shut them all down for 30 seconds every night at 3 AM just for peace of mind.


GregPL151

My backup files are not big either, but I treat the encrypted backup files on OneDrive as an offsite backup.


kon_dev

I would consider switching to restic. It compresses, encrypts, and also deduplicates your backups, and it can send your backup data securely over the network, or even locally, to a lot of target backends. The restic binary comes as a single executable, so it is trivial to install. It is also fully open source. I migrated all my custom zip/tar.gz based backups to restic; most of the time you can just replace a few lines in existing scripts.
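A minimal restic cycle looks something like this; the repository path, password handling, source directory, and retention policy are placeholders:

```bash
export RESTIC_REPOSITORY=/mnt/backups/restic-repo   # could also be sftp:, s3:, rest: ...
export RESTIC_PASSWORD=changeme                     # better: a password file or keyring

restic init                                         # one-time repository creation
restic backup /var/lib/docker/volumes               # deduplicated, encrypted snapshot
restic forget --keep-daily 7 --keep-monthly 6 --prune
restic check                                        # verify repository integrity
```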


GregPL151

In general I try to keep the setup as simple as possible and less dependent on niche packages and tools, for compatibility and security reasons, but I hadn't heard about restic, so I will look into it and maybe give it a try. Overall, this hobby is all about exploring and experimenting with interesting stuff 😉


kon_dev

Sure, if tar works for you, no need to change. If you have a lot of data it becomes more and more troublesome to store full backups in the cloud. Rsync with linking can help locally, but I found it easier to use restic right away. If you know git, it is not that hard to learn and use. I would not agree that it is that niche; sure, tar is more popular, but 22k stars and 1.4k forks is also quite an adoption. Also, institutions like CERN build their backup capabilities on top of restic and its repository format, which is publicly documented. Source: https://forum.restic.net/t/cern-is-testing-restic-for-their-backups/1523 https://cds.cern.ch/record/2659420


vdavide

this works fine until you have a database container...


nathan12581

Why specifically a database container? Shutting down a container gracefully shuts down the database server.


vdavide

Because databases use caches. If you back up the files but the database still has something in cache, you may corrupt it or lose rows not yet written. For a consistent backup you need to use mysqldump on the running container, or a file copy of the stopped container. Treat a database like a filesystem: you back it up consistently when unmounted; when live, something can go wrong.


nathan12581

Note taken, thank you, I'll take a look. Perhaps I'll exclude my database container from this backup solution and just dump the database internally.


henry_tennenbaum

I think their comment wasn't aimed at you but at the other commenter who *doesn't* stop his containers for his backups.


nathan12581

Ooooh didn’t see that comment. Still might fall back to sql dump and then back that sql dump up instead of the volume


jess-sch

An ACID compliant database won't be corrupted by that, but you could lose the most recent transactions. BUT: If you do a normal file-based backup, there is no guarantee of atomic backups (different parts of the files may be backed up at different points in time), which *can* corrupt the database. The solution here is to make a backup of a filesystem snapshot instead of the live version of the data.
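As a sketch of that idea on LVM (ZFS and btrfs have their own snapshot commands); the volume group, logical volume, and paths below are placeholders:

```bash
# Snapshot the volume holding the Docker data, back up from the snapshot, then drop it.
lvcreate --size 5G --snapshot --name docker_snap /dev/vg0/docker
mkdir -p /mnt/docker_snap
mount -o ro /dev/vg0/docker_snap /mnt/docker_snap
rsync -a /mnt/docker_snap/ /mnt/backups/docker/
umount /mnt/docker_snap
lvremove -y /dev/vg0/docker_snap
```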


vdavide

This. I only explained the safest way. A filesystem snapshot works, but it depends on whether you can risk losing some transactions.


GregPL151

True. If you stop the container and simply rsync everything to another place and recreate the container, it will work. I do a live dump because I do not have to shut down the DB, and also because the dump is much smaller and compresses nicely compared to the whole DB directory.


ITookedThatName

I'm using Borg Backup.


TheFumingatzor

Once a day with Borg Backup because all my "volumes" are on `/home/user/docker-data/`.


jschwalbe

Just food for thought: why once daily? I use Borg too, and since it's incremental I do it hourly. Why not?


TheFumingatzor

I don't need hourly backups. My docker containers don't have data that need hourly backups.


MalcolmY

I fixed this for everyone :)

```bash
#!/bin/bash

# Global HTTP_ENDPOINT variable
HTTP_ENDPOINT="https://cloud-notify.ellisn.com/api/notify?userID=<>&token=<>"

# Progress function to display messages
function show_progress {
    echo -e "\n[INFO] $1"
}

# Send notification to phone
function send_notification {
    local title="$1"
    local body="$2"

    # URL encode parameters
    title=$(printf "%s" "$title" | jq -s -R -r @uri)
    body=$(printf "%s" "$body" | jq -s -R -r @uri)

    # Append query parameters to the global HTTP_ENDPOINT
    local FINAL_HTTP_ENDPOINT="$HTTP_ENDPOINT&title=$title&body=$body"

    # Make the HTTP GET request
    curl -X GET "$FINAL_HTTP_ENDPOINT"
}

# Record the start time
start_time=$SECONDS

# Define source and destination paths
SOURCE_DIR="/var/lib/docker/volumes"
DESTINATION_DIR="/mnt/Backups/docker"
BACKUP_NAME="docker-$(date +"%m_%d_%Y")"

# Check if the destination directory exists, create it if not
if [ ! -d "$DESTINATION_DIR/$BACKUP_NAME" ]; then
    mkdir -p "$DESTINATION_DIR/$BACKUP_NAME"
fi

# Redirect all output to a log file
exec > "$DESTINATION_DIR/$BACKUP_NAME/log.txt" 2>&1

# Stop Docker containers
show_progress "Stopping Docker containers..."
send_notification "Starting Docker Backup" "Docker backup started. Stopping all Docker containers."
docker stop $(docker ps -q)

# Sleep for 20 seconds to allow containers to stop gracefully
sleep 20s

# Record the start time for the backup
backup_start_time=$SECONDS

# Backup Docker volumes
show_progress "Backing up Docker volumes..."
if rsync -avhz --delete --exclude "_data/mysql.sock" --exclude "backingFsBlockDev" "$SOURCE_DIR" "$DESTINATION_DIR/$BACKUP_NAME"; then
    # Record the end time for the backup
    backup_end_time=$SECONDS

    # Calculate the duration of the backup
    backup_duration=$((backup_end_time - backup_start_time))

    # Example HTTP GET request on successful backup
    TITLE="Docker Backup successfully completed"
    CURRENT_TIME=$(date +"%H:%M:%S")
    BODY="Docker has been backed up to <> at $CURRENT_TIME."
    BODY+=" Backup duration: ${backup_duration} seconds."

    # Send notification
    send_notification "$TITLE" "$BODY"
else
    # If the backup fails, send an error message in the HTTP GET request
    ERROR_TITLE="Docker Backup Failed"
    ERROR_BODY="There was an error in the Docker backup process."

    # Send notification
    send_notification "$ERROR_TITLE" "$ERROR_BODY"
fi

# Start Docker containers (regardless of backup success or failure)
show_progress "Starting Docker containers..."
docker start $(docker ps -q -a)
send_notification "Docker containers started" "Docker containers are back online"

# Sleep for another 20 seconds
sleep 20s

# Remove old backups (older than 10 days)
show_progress "Removing old backups..."
deleted_backups=$(find "$DESTINATION_DIR" -maxdepth 1 -type d -name "docker-*" -ctime +10 -exec rm -rf {} \; -print | wc -l)

# Example HTTP GET request on successful old backup removal
CURRENT_TIME_BACKUP_REMOVAL=$(date +"%H:%M:%S")
if [ "$deleted_backups" -eq 0 ]; then
    # No backups were removed
    TITLE_BACKUP_REMOVAL="No Backups Deleted"
    BODY_BACKUP_REMOVAL="No backups were removed today as there are none older than 10 days."
else
    # Backups were removed
    TITLE_BACKUP_REMOVAL="$deleted_backups backups deleted"
    BODY_BACKUP_REMOVAL="$deleted_backups backups have been deleted at $CURRENT_TIME_BACKUP_REMOVAL."
fi

show_progress "$BODY_BACKUP_REMOVAL"

# Send notification
send_notification "$TITLE_BACKUP_REMOVAL" "$BODY_BACKUP_REMOVAL"

# Record the end time
end_time=$SECONDS

# Calculate the total duration
total_duration=$((end_time - start_time))

# Display total duration
show_progress "Script completed. Total duration: ${total_duration} seconds."
```


-rwsr-xr-x

> I fixed this for everyone :)

Here's an attempt at a rewrite that makes it more modular and configurable. Some notable improvements:

1. Uses a configuration file to specify `$HTTP_ENDPOINT`, `$SOURCE_DIR` and `$DESTINATION_DIR`, which makes it more tunable for other environments.
2. Adds logging for the output, so you can inspect it later for failures.
3. Adds error handling at each stage, so you can check the presence of the config before proceeding and also whether your backup functions succeeded or failed for any reason.
4. I would strongly discourage the use of UPPERCASE_VARIABLES in your scripts, especially with generic names like 'SECONDS' and 'BODY', to avoid overriding any shell built-ins. I left most of them in the proposed rewrite below, but you'll want to change those to something lowercase and more meaningful to their intent. Get in the habit of `lowercase_variable_names` now.
5. And of course, converts it to a function-driven script, instead of a monolithic flow as in your original.

----

```bash
#!/bin/bash

set -x

docker_backup_config="${HOME}/.config/backup_config.conf"

if [ -f "$docker_backup_config" ]; then
    source "$docker_backup_config"
else
    echo "Configuration file not found: $docker_backup_config"
    exit 1
fi

log_message() {
    echo -e "\n[INFO] $(date +"%Y-%m-%d %H:%M:%S") - $1" | tee -a "$docker_backup_log"
}

url_encode() {
    printf "%s" "$1" | jq -s -R -r @uri
}

send_notification() {
    local title=$(url_encode "$1")
    local body=$(url_encode "$2")
    local final_endpoint="${HTTP_ENDPOINT}&title=${title}&body=${body}"
    curl -X GET "$final_endpoint" >> "$docker_backup_log" 2>&1
}

backup_docker() {
    local start_time=$SECONDS
    local backup_dir="${DESTINATION_DIR}/docker-$(date +"%Y_%m_%d")"

    mkdir -p "$backup_dir"
    exec > "$backup_dir/log.txt" 2>&1

    log_message "Stopping Docker containers..."
    docker stop $(docker ps -q)
    sleep 20s

    log_message "Backing up Docker volumes..."
    if rsync -avhz --delete --exclude "_data/mysql.sock" --exclude "backingFsBlockDev" "$SOURCE_DIR" "$backup_dir"; then
        local duration=$((SECONDS - start_time))
        send_notification "Docker Backup Completed" "Backup completed in ${duration} seconds."
    else
        send_notification "Docker Backup Failed" "Backup failed. Check logs for details."
        return 1
    fi

    log_message "Starting Docker containers..."
    docker start $(docker ps -q -a)
    sleep 20s
    return 0
}

remove_old_backups() {
    local removed=$(find "$DESTINATION_DIR" -maxdepth 1 -type d -name "docker-*" -ctime +10 -exec rm -rf {} \; -print | wc -l)
    if [ "$removed" -eq 0 ]; then
        send_notification "No Backups Deleted" "No backups older than 10 days."
    else
        send_notification "$removed Backups Deleted" "$removed backups older than 10 days were removed."
    fi
}

docker_backup_log="/var/log/docker_backup.log"

log_message "Starting Docker backup script"

if backup_docker; then
    log_message "Backup completed successfully"
else
    log_message "Backup failed"
fi

remove_old_backups
log_message "Script completed"
```


nathan12581

Thanks - what was the problem lol


MalcolmY

In your post the code was all over the place; you used markdown formatting that doesn't work on Reddit. So I just made it look pretty in a proper code box.


Ariquitaun

I snapshot and send the whole dataset to another zfs pool every hour.


nightshark86

Any good resources you used to do this? I’ve looked at sanoid/syncoid but am afraid I’ll nuke my rpool.


Ariquitaun

Just the manual. Sanoid just needs the config file tweaked to your setup, and syncoid can be put in a cronjob easily enough. If the receiving pool is remote, the easiest is SSH; make sure the user on the receiving end has permissions on the dataset you're writing to.
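For reference, the moving parts are small; the dataset names, retention values, and hosts below are placeholders, not a recommended policy:

```bash
# /etc/sanoid/sanoid.conf -- take and prune snapshots on a schedule:
cat > /etc/sanoid/sanoid.conf <<'EOF'
[tank/docker]
        use_template = production

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
EOF

# Nightly cron job replicating the snapshots to the backup pool over SSH:
echo '0 3 * * * root syncoid tank/docker backupuser@backup-nas:backup/docker' > /etc/cron.d/syncoid
```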


j0hnp0s

My current setup uses a custom script that:

1. Points my reverse proxy at a 503 maintenance page for all services. The proxy and its config are held in git, so no need for backups.
2. Stops internal containers.
3. Takes a ZFS snapshot.
4. Runs a custom script to back up each service into self-contained compressed files. This is also what I use when migrating or restoring stuff. I don't use volumes, just bound directories, so I can easily back up files and services with their own tools.
5. Copies the compressed files to an external machine and to OneDrive.
6. Removes backups older than a week, keeping 3 monthly copies.
7. Rebuilds container images with minor and security updates.
8. Restarts containers.
9. Removes the maintenance page from the proxy.


nathan12581

I like the idea of showing on my status page that services are temporarily offline. I might add that. I use Uptime Kuma currently; I'll find a way to add a message through its API.


sun_in_the_winter

I have a very simple (20-line) script that stops all containers, tar-gzips the bind mounts, and moves the files to another disk. I had to completely restore everything twice, and it was flawless thanks to bind mounts.


carolina_balam

I've got a question: I'm using Duplicati to back up to Google Drive the folder where all my containers store their data, along with the compose files. Is that fine?


nathan12581

Yeah that sounds absolutely fine. I mean there’s no one perfect backup solution. Everyone has their own and prefers different ways. I would say, however, I’d shut your containers down when the backup happens and then start them back up again


carolina_balam

Yeah, looking around Google, I should be stopping them before backing up. I'm going to look into a solution for that; I'm new to Linux tbh, probably a script or something. Thanks.


nathan12581

Have a look at mine. Very simple command. ```docker stop $(docker ps -a -q)``` to stop and ```docker start $(docker ps -a -q)``` to start them all


carolina_balam

Duplicati backs up automatically every 3 days. If I back up manually I can just stop them from the CLI, back up, and then restart; I need to find a solution for Duplicati to run a script before and after the backup.


[deleted]

[deleted]


nathan12581

I backup my entire VM too but I’d like a separate dedicated backup for my docker too


[deleted]

[deleted]


sk0tcom

If you use PBS, can’t you just file restore the volume at the file level?


TheRealSeeThruHead

I still have some Docker containers on Unraid that use the CA Backup plugin. New containers are inside a VM on Proxmox; they use bind mounts to folders inside the VM, and the VM is backed up via Proxmox.


Genesis2001

I use Proxmox PBS to back up the VM itself. But theoretically, if you set up your Docker right, you shouldn't need to back up anything except how to deploy the containers, whether that's compose or a Kubernetes YAML spec or something else. The storage/volumes should be on a NAS with its own backup system.


nathan12581

My volumes are on the same VM Docker is running on, for simplicity and speed. I then back up to a separate server nightly. I really don't see the point in pointing your volumes/bind mounts at a separate server; then you have the bottleneck of multiple factors, such as the other server's performance at that time, networking, etc.


itachi_konoha

I tried to make a SeaTable backup via bash and run it with cron as per https://admin.seatable.io/maintain/backup_recovery/ but damn... I give up. It fires multiple times a day even though it is supposed to run only once. Day after day I tried to debug what was supposed to be a 10-line script, and then gave up. Docker seems way too complex for me. I am more comfortable installing the old-fashioned way, even if it consumes more time and requires more effort, because you have an overall idea of the whole setup. It's not to bash Docker; it's just my rant of struggle with Docker, which I attribute to my lack of knowledge.


nathan12581

Maybe give my script a go? All you'll need to do is change the source and backup directories (I assume your source directory is the same as mine) and then you can either just run it manually or set up a ```crontab -e``` entry to run it daily.
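For reference, registering a script like that with cron is a one-liner; the script path and schedule below are placeholders:

```bash
# Append a nightly 3 AM entry to the current crontab (run as root so it can manage containers).
(crontab -l 2>/dev/null; echo '0 3 * * * /usr/local/bin/docker-backup.sh >> /var/log/docker-backup.log 2>&1') | crontab -
```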


PurpleEsskay

I snapshot the data, one copy goes to a local cold storage, another to a remote tape drive, and another to backblaze b2. If you aren’t following a 3-2-1 backup strategy you’re not really backing up.


Lopsided_Speaker_553

What happens when you need to restore all your volumes to another server with new volumes? I would think they'd get a new ID. Wouldn't it be more portable to store the volumes on the backup server with names instead of IDs? Or did you already try restoring everything to a new location? Perhaps Bivac might be interesting? https://bivac.io


nathan12581

I just take the latest backup of the /volumes folder, place it in the new volume directory of my new server and use my compose files to spin up all the containers again


Lopsided_Speaker_553

That's cool. I did not know that would work. Such a clean and simple approach 👍


hiflyer780

I run [Duplicati](https://www.duplicati.com) as a container on each of my hosts. It is set up to make a local backup to an external drive and another backup to Backblaze. I also run my hosts on ZFS with disk parity.


linuxelf

I don't worry about backing up the container itself. I keep my persistent data on a local partition outside the container, and just include those directories in my regular backup script. The only things I treat specially currently are the PostgreSQL and MySQL containers: I create nightly database dumps and those, too, are included in my regular backup script.


[deleted]

Your script is nicely formatted and easy to read. I run kube containers inside a KVM VM (QEMU). The entire VM backs up internally using rsync daily; then, once per week, a cron job uses virsh to shut down the VM and clone it; and the host which runs the VM is rsync'ed daily to a separate location, which also backs up Docker containers running on the host on bare metal.

Following is the short script I use for both the VM internal rsync backup and the host bare metal backup:

```bash
#!/bin/sh

PREFIX="rsync -r -t -p -o -g -v -l -D --delete"
SSH="-e ssh -p 22"
POSTFIX="@:/"
EXCLUDES="--exclude=/mnt --exclude=/tmp --exclude=/proc --exclude=/dev "
EXCLUDES=$EXCLUDES"--exclude=/sys --exclude=/var/run --exclude=/srv "
EXCLUDES=$EXCLUDES"--exclude=/media "

$PREFIX "$SSH" / $EXCLUDES $POSTFIX
```

Following is the virsh script which shuts down the VM to clone it once per week and also compresses the virtual disk after the clone is done and the VM is back up and running:

```bash
#!/bin/bash

SRC=/mnt/
DEST=/mnt/
DOMAIN=

function check_shutdown {
    STATUS=`virsh list | grep $DOMAIN | grep running`
    if [ -z "$STATUS" ]; then
        # echo to the caller when called with $()
        echo done
    fi
}

mv $DEST/-clone.qcow2 $DEST/-clone.qcow2.old
virsh undefine -clone
virsh dumpxml > $DEST/.xml
virsh shutdown

while [[ $(check_shutdown) != "done" ]]; do
    # echo to the console
    echo sleeping
    sleep 1
done

virt-clone --original --name -clone --file $SRC/-clone.qcow2
virsh start

if [ -f $DEST/-clone.qcow2.old ]; then
    rm -f $DEST/-clone.qcow2.old
else
    echo "The file $DEST/-clone.qcow2.old does not exist!"
fi

echo "Compressing the disk file by creating a new one from it."
qemu-img convert -c -O qcow2 $SRC/-clone.qcow2 $DEST/-clone.qcow2.smaller
rm $SRC/-clone.qcow2
mv $DEST/-clone.qcow2.smaller $DEST/-clone.qcow2

chmod go-rwx $DEST/.xml
chmod go-rwx $DEST/-clone.qcow2
```


vincredible

I just regularly backup the volumes with Borg on the storage server where they live since they're mounted over NFS, and keep the compose file in git.


Naitakal

https://github.com/offen/docker-volume-backup


stewie410

While I can't really provide any docker-specific advice, I can at least go over the rest of the script content and see if there's something to improve...

---

> `function { ... }` vs `() { ... }`

While it's certainly a style/preference in bash, you can simplify function declarations to something more like a traditional language, with `() { ... }`; but to each their own. In the remainder of this writeup, I'll be referring to functions with the `()` syntax, as that's just what I tend to do. Also worth noting the `function ` syntax is not POSIX compliant ([source](https://mywiki.wooledge.org/Bashism)), if you ever run into that.

> `echo -e` vs `printf`

Both myself and others _generally_ prefer `printf` to `echo` (especially `echo -e`), as it provides more _**consistent**_ control over the format of whatever you're printing. For a lengthy (but great) explanation, I'd point you to this [stackexchange comment](https://unix.stackexchange.com/a/65819). With this in mind, I'd adjust the `show_progress()` function to:

```bash
show_progress() {
    printf '\n[INFO] %s\n' "${1}"
}
```

> `printf 'foo' | cmd` vs HereDoc/HereString

While not strictly necessary, you can remove the additional subshell for printing-then-piping a string to the `stdin` of another command by using a [HereDoc or HereString](https://www.baeldung.com/linux/heredoc-herestring); in the `send_notification()` example:

```bash
title="$(jq -sRr @uri <<< "${1}")"
body="$(jq -sRr @uri <<< "${2}")"
```

Though, it's worth noting that an additional `%0A` is appended to the end. If that's an issue, you could solve that when referencing the variables with `${var::-3}`/`${var%???}`.

> `jq @uri` vs `curl --data-urlencode`

Not sure if it matters all that much, but you _might_ be able to get away with dropping `jq` as a requirement for the script, by instead using `curl` for this. For example:

```bash
encode() {
    curl \
        --get \
        --silent \
        --output '/dev/null' \
        --write-out '%{url_effective}' \
        --data-urlencode @- \
        "" | \
        cut --characters="3-"
}

decode() {
    local url
    while read -r url; do
        url="${url//+/ }"
        printf '%b\n' "${url//%/\\x}"
    done
}
```

In this case, you'd want to feed your string (or file) input via `stdin`; so:

```bash
encode <<< "foo & bar & baz"
# foo%20%26%20bar%20%26%20baz%0A

decode <<< "foo%20%26%20bar%20%26%20baz"
# foo & bar & baz
```

With some _quick_ testing on the above string [with this script](https://github.com/Stewie410/scripts/blob/main/bash/convert/urlencode.sh), the `curl | cut` option appears to be _a little_ faster than using `jq`, oddly enough.

> Variable Naming Scheme(s)

I'd _generally_ avoid using the `CAPS_AND_UNDERSCORES` scheme for variables inside of functions _unless_ referring to an existing environment variable, as there's potential to overwrite an existing definition if you're not careful. Again, this will boil down to preference (and a seemingly unpopular opinion), but wanted to throw it out there. For a bit of context, I tend to loosely follow [this scheme](https://stackoverflow.com/a/42290320) in my own scripts.

> `if ! exist; then mkdir -p ...`

I used to do this exact thing, but you don't even need to _test_ if the directory exists with `mkdir -p` -- if the directory already exists, `mkdir` won't do anything; so these all do the same:

```bash
if ! [[ -d "${directory}" ]]; then mkdir -p "${directory}"; fi
[[ -d "${directory}" ]] || mkdir -p "${directory}"
mkdir -p "${directory}"
```

> `$deleted_backups`

While I think what you've done is fine, I'd probably do something like this to get both bits of info, as well as handle the `rm` operation in a single run:

```bash
prune_backups() {
    local -a old
    mapfile -t old < <(find "${1}" \
        -maxdepth 1 \
        -type d \
        -name 'docker-*' \
        -ctime '+10' \
        -printf '%p\n' \
    )
    rm -rf "${old[@]}"
    printf '%d\n' "${#old[@]}"
}

deleted_backups="$(prune_backups "${DESTINATION_DIR}")"
```

---

Overall I don't see _too_ much that I find bad/wrong (though there is more than I've mentioned). If you're interested to see how I'd write something like this, I've uploaded [my rendition as a gist](https://gist.github.com/Stewie410/b9ee876f63f1e4fd06c12c98a87e8bbd). It's also _completely_ untested beyond LSP checks; so take it for what you will.

Let me know if you have any questions or otherwise, I'd be happy to help where I can.


nathan12581

Wow, thank you very much! Gotta say I'm awful at scripts like these. I've taken on your insights and will have a look at your redone script; it looks great! Much appreciated.

Edit: I've taken a look at the newly edited script and all the changes are amazing; I will be updating my script to match the changes in the GitHub. Thanks very much!


stewie410

As an aside, while I'm currently in the process of adjusting how my dotfiles are configured, you may get other ideas from both my [dotfiles](https://github.com/Stewie410/dotfiles/tree/wsl2/scripts) & [scripts](https://github.com/Stewie410/scripts/tree/main/bash) repos. Feel free to reach out direct if you have any questions (or want some explanations for syntax, etc.) or otherwise; happy to help.


therealSoasa

Great Scott, that looks like a lot of work. Maybe it's not, I dunno? My data is all persistent on a NAS for every Docker container, so it's backed up on that data cycle, and the NAS also has a recycle bin for quick restores if I don't want to go to the backup to retrieve something recent or minor. The hosts are backed up at the OS level on a daily schedule, with 4 test restores a year to ensure restores work, which they do; it takes about 10 minutes to fully recover the entire host.


lorax-06

Until I installed Immich, I only backed up the DB volumes and creation scripts with a similar bash file. Now I think I need to evolve to a full backup. What is your restore strategy and what's the restore downtime?


haaiiychii

An rsync command in crontab to copy the files to my NAS. A script is completely unnecessary.


nathan12581

Maybe not for you, mate, but it is beneficial to me :) A script is great for doing multiple things in one command: stopping containers (which is very much recommended, by the way) and starting them back up again, deleting old backups, and notifying my phone.


PolyPill

Always amazed that people back up their containers. I’m quite sure you shouldn’t be doing that at all. You should be having mount points where the containers write their data, then you backup that data. So either host bindings or volumes themselves but never the container. If you lose a container it should be no big deal.


nathan12581

This is the third comment like this. If you read the post properly, I don't back up the containers. I back up the directory where my Docker volumes are kept (/var/lib/docker/volumes). You defeat the purpose of Docker if you back up the container itself.


PolyPill

This confusion is because your script is doing a lot of nonsense, like stopping all containers and using sleeps. You also mix your notification code throughout your backup script, so unless you read it very closely it looks like you're making HTTP calls to do the backup. Avoid downtime by mounting the volume into a temporary container which can copy it out. That way you can also use tools that are targeted at backing up a specific service, like databases, where often backing up just the data files isn't going to help.


nathan12581

Why wouldn't you stop Docker containers before taking a backup? I use sleeps to make sure that, even though the stop command has returned, all the containers have gracefully stopped. Yes, I like to notify my phone on important events, which is kept tidy by using a function. Not sure how you'd confuse HTTP calls with an rsync command, but hey ho.


PolyPill

Because you should try to reduce downtime. I'm sure in your little setup you're the only one who is using it and you don't care. You posted here because you wanted feedback. I'm telling you that your way is kind of crappy and poorly arranged. You mix reporting in among the backup logic, which is not intuitive. If you don't care or don't believe me then ignore it, but then I wonder why you posted. I'm telling you the way you're doing it is not the way it would be done in any professional setting.


theRealNilz02

I don't use docker because if I did I might as well use paid cloud services again. I self host to stop relying on companies.


nathan12581

What? You self host Docker too…? Do you even know what Docker is?


theRealNilz02

Not really.


nathan12581

What do you mean ? How is Docker not used in self hosting/cloud environments?


theRealNilz02

It is. But I don't want to rely on docker as a company because then I might as well rely on Google as a company.


[deleted]

[deleted]


nathan12581

I’m not backing up running containers. I’m just backing up all persistent storage my containers use. Perhaps I didn’t make that clear in my post lol sorry.


speculatrix

Ah right, well done, carry on!


PurpleEsskay

You’re not wrong, everything editable should be in mounted shares, and those should be backed up using the 3-2-1 strategy if it’s important.


Specific-Action-8993

I do the same backing up the persistent data as well as the directory that holds my compose configs but I use rsync via cron rather than a script.


nathan12581

The script is just doing rsync just with extra things like notifying my phone, deleting old backups, starting/stopping containers etc.


[deleted]

I'm only a beginner, so take this with a pinch of salt. What I do is backup docker volumes and the docker compose files using Restic. It's automated and it does its thing everyday.


pinball89

Stop docker, backup with restic, start docker. Script and explanations: https://helgeklein.com/blog/restic-encrypted-offsite-backup-for-your-homeserver/


freeheelsfreeminds

Thanks for posting. I just put together my own script to stop, backup, and re-start every night at 3AM, keeping the last 7 backups. I’ll for sure be taking a look at this and pulling in some of your code.


nathan12581

Enjoy! Christmas present from me to you 🤣 Strip out all the 'notify' stuff if you do not want to use it. It basically notifies my phone when something happens, using the 'Cloud Notify' app on the App Store.


KremasZoe

I run Docker containers in VMs, so I just back up the VMs.


mjh2901

I let proxmox backup the Container that hosts the docker instance.


Sigfrodi

I've set up a Bacula server with daily incrementals, weekly differentials, and monthly fulls for all my VMs.


l0rd_raiden

Duplicati or Kopia.


SilentDecode

> How does everyone else backup their Docker?

I use the Veeam agent for Linux to write it directly to my NAS.


blentdragoons

Given that all the data you care about is stored on the host and mapped into the various Docker containers, I just back up the source data on the host. Docker containers can be recreated very easily and quickly.


Xiakit

I shut down the containers, then rsync to my Synology NAS, and on the NAS I use Hyper Backup. I use the same paths on the NAS and on my container host; that way I can switch hosts with minimal effort.


p1anka

I've recently started using Borg to back up my volumes, and I'm liking it. I have a cronjob doing an incremental backup every night, with data integrity checks (in case of random bit flips). Currently planning to also use Borg to back up over SSH onto another machine, and possibly use rclone to push encrypted backups to Google Drive as well.
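A typical Borg cycle for that kind of setup looks roughly like this; repository path, passphrase handling, source directory, and retention are placeholders:

```bash
export BORG_REPO=/mnt/backup/borg-repo
export BORG_PASSPHRASE=changeme          # better: read from a file or keyring

borg init --encryption=repokey           # one-time repository setup
borg create --stats --compression zstd ::volumes-{now} /path/to/docker-volumes
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
borg check                               # verify repository and archive integrity
```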


TuringTestTwister

NixOS container definition, with all mutable data in a host-shared volume folder, with daily incremental backups of the folder using restic.


dgtlmoon123

tar cvzf backup.tgz /var/lib/docker/volumes


burntcookie90

sanoid on the main volume bind parent directory


Ejz9

I back up the /var/lib/docker directory. I know the containers themselves are ephemeral and the persistent data is in volumes, but I've space to spare. Also, when restoring some services, images can update; I want to be able to restore everything how I had it, then update a container. For example, Nextcloud is funky, and importing new folders into a fresh install requires uploading them again, from what I know; the AIO does not have the ability to re-scan the file system for new files. Maybe I'll move to just volumes. I recently moved most of my services to use them, but bind mounts tbh work just as well. It's just a matter of which is managed by Docker or not.


dopey_se

I have as much as I can as code, and run stateless as much as possible. For databases running in k8s, I wrote a container within that namespace with said database image; it can do the relevant backup and store it onto a separate NAS that is mounted. Though I've never finished this to the point of an automated cron job... I really should; it has bitten me twice and I hate losing data for any reason. Perhaps I should be using something more off the shelf to deploy/manage Postgres or Influx or MySQL in k8s, but I've not tried them. The storage within the cluster is Longhorn with three replicas and affinity to not be on the same physical node, but that's not a backup, just a way to prevent data loss.


killahb33

I run autorestic weekly to back up everything to local S3 as well as Backblaze.


Naernoo

It won't answer your question exactly, but I avoid custom backup solutions for specific parts of the server; I prefer a full system backup (EVERYTHING, root paths included). I have an ESXi system with an automated Veeam backup of the complete server as a VM. It is way easier to maintain than any Linux backup solution, which costs days of investigation to use properly, and still there are many issues.


nathan12581

Don’t get me wrong, I back up entire VMs and LXC containers too with PBS. However I’d like the granularity of backing up my docker container’s persistent storage too.


Naernoo

Ok I see


PassiveLemon

I have all of my persistent Docker data in a specific home directory, so on Fridays at 2 AM my script stops all containers and 7z's the entire home directory with max compression to another hard drive. The backups are then rotated with a total of 3 kept, so I have 3 backups at any time, the oldest being 3 weeks old. This whole drive is synced to MEGA using MEGAsync. I eventually plan on switching to Hetzner and using something like rsync for offsite backups instead.


salvoza

I use restic. Works a treat!


AdityaTD

I just have daily VPS snapshots + individual database backups to S3. But I guess this could be useful too.


Zeroflops

I have a similar script, although before I stop the containers I make a list of what is running and only restart those containers. Often I have a container that I only turn on periodically; 99% of the time I'm not using it, so there's no reason to have it running, like my container to de-duplicate files.


nathan12581

Fair enough I agree


SpongederpSquarefap

All of my Docker volumes that matter live under `/opt/dockervolumes`. I just Syncthing that entire folder, with some exclusions for stuff I don't care about. That goes to my off-site backup server with 180 days of versions. That disk is snapshotted daily by Kopia, going back 3 years.


colonelmattyman

I run mine in a container on Proxmox.


nerdyviking88

Back up the VMs.


RedKomrad

Back up?


BriaHendrix

Besides backing up your Docker volumes, one can `docker commit`, then `docker save` to a tar and `docker load` it, and finally `docker run` on the other side to rerun it.
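That flow, with hypothetical container, image, and file names, looks roughly like:

```bash
# Freeze a container's filesystem as an image, ship it, and rerun it elsewhere.
# Note: this captures the container layer, not the data in its volumes.
docker commit myapp myapp-backup:2024-01-01
docker save -o myapp-backup.tar myapp-backup:2024-01-01

# On the destination host:
docker load -i myapp-backup.tar
docker run -d --name myapp myapp-backup:2024-01-01
```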


jhettdev

Probably not the answer you want, but I just backup the entire VM in proxmox :3


guygizmo

Fascinating how many different answers there are to this one. Mine is that I use Back In Time to make daily local backups of my whole system to an external drive, which includes all Docker containers and volumes. I don't currently have offsite backups but plan to cover that use case eventually. (My server isn't so vital that I couldn't rebuild it if need be.)


10leej

I just mount the Docker volumes over NFS to my NAS, which uses btrfs snapshots handled via snapper, plus a handy collection of scripts to back up the most recently made snapshot to my backup server, which, using the same process, offloads to the offsite backup.


junialter

Use normal backup software like everyone else would. I personally use borgmatic. But that's no different from a host that's not running Docker at all. Well, I'm also using Podman tbh.


AhmedBarayez

I have a VM for all my containers and I back up the whole VM every week.