Rclone backup setup for my 'hit-by-a-truck' protocol
bikegremlin
Moderator | OG | Content Writer
in Technical
When my spare backup HDD at work died (and the warranty replacement is going to take about a month), I configured an encrypted Rclone sync from my PC HDD (the source of truth) to a Hetzner Storage Box.
Made a step-by-step tutorial for my best man - my "hit-by-a-truck" protocol for when (motor)cycling goes wrong.
Any corrections are more than welcome - and if it helps other folks, even better:
https://io.bikegremlin.com/38747/rclone-installation-and-configuration-step-by-step/
Rclone should work with many cloud storage providers - it is just that Hetzner Storage Box has been reliable for hosting server backups for me over the years, and I was already paying for it, so I used that.
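For the curious, the gist of it is an SFTP remote to the Storage Box wrapped in a crypt remote, plus one sync command. Something like this (the remote names, paths and sub-folders are just illustrative - let rclone config create the entries and obscure the crypt passwords for you):

```
# rclone.conf (illustrative - created via "rclone config")
[storagebox]
type = sftp
host = uXXXXXX.your-storagebox.de
user = uXXXXXX
key_file = ~/.ssh/storagebox_key

[storagebox-crypt]
type = crypt
remote = storagebox:backups
password = ***obscured by rclone config***
password2 = ***obscured by rclone config***
```

```
# The actual backup is then one command (test with --dry-run first):
rclone sync "D:\Data" storagebox-crypt:pc-hdd --progress
```

With the default crypt settings, both file contents and file names are encrypted before upload, so the Storage Box only ever sees ciphertext.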
Comments
Hmm.. interesting !
Windows made me.. reheat my coffee. But then scrolling down further... it tasted better
blog | exploring visually |
LOL. Good one.
Serbia is a Windows country (there was even some official Microsoft partnership at the start of the century).
Practically everyone I know uses Windows only.
🔧 BikeGremlin guides & resources
I hear you...
On a serious note, the encryption part (why do it and the steps needed) is very useful, yes, even in the age of AI agents
blog | exploring visually |
After losing data last week, I've been rethinking backups too.
Thinking of some mixture of zfs send, git and borgbackup.
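Something along these lines, maybe (pool and repo names purely illustrative):

```bash
# Stream an incremental zfs snapshot straight into a borg repository.
zfs snapshot tank/data@$(date +%F)
zfs send -i tank/data@prev tank/data@$(date +%F) \
    | borg create --stdin-name data.zfs ssh://backup@host/./backups::data-{now} -
```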
Are you saying you want me to hit you with a truck?
Free Hosting at YetiNode | MicroNode | Cryptid Security | URL Shortener | LaunchVPS | ExtraVM | Host-C | In the Node, or Out of the Loop?
I am using btrfs send to push encrypted snapshots to S3 (minio). I do monthly fulls and weekly/daily incrementals, with the 'parent' recorded as tags, so a restore script can fetch everything needed to restore to a certain date, and the built-in S3 lifecycle cleans them up.
I also push the monthly backups to borgbackup for longer-term retention. So maybe not dissimilar to your setup.
Terrific article!
Slightly OT - my (75 y/o) dad phoned me today asking about a user-friendly GUI backup tool for Windows. And I didn't have an answer.
I can only think of Duplicati, and the internet is full of reports of data corruption and sundry failures with it.
FreeFileSync might be a good option.
@tetech
Share how you do it, please.
https://microlxc.net/
A few different approaches I've used: one is Veeam to my own server; one is iDrive; one is to use Bvckup to sync to a Samba share and then back up from there using Linux tools.
I used to use CrashPlan in the old days, until the prices became prohibitive. I never used Backblaze, but I have family members who do; similarly for Cloudberry (now MSP360).
Macrium Reflect is a GUI tool for whole-system images. Subscription only now, unfortunately.
Acronis too - also subscription only.
FileZilla, if he only wants to copy files to remote storage (a Hetzner Storage Box is not expensive).
🔧 BikeGremlin guides & resources
Mmm, which part? As background info, I containerize everything using LXC on btrfs. However, the principle applies more generally: you don't need to use containers or btrfs; the below could easily be adapted to any other snapshot-capable filesystem.
I've got an outer loop something like this:
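Roughly this shape (simplified sketch; names are illustrative and the real script has more error handling):

```bash
#!/bin/bash
# Outer loop: back up every LXC container on the host, one at a time.
for ct in $(lxc-ls -1); do
    backup_container "$ct"    # hypothetical helper holding the per-container logic below
done
```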
For each container (i.e. within the loop), I maintain monthly and weekly snapshots that incremental backups are relative to. So on the first of the month I delete the monthly and weekly snapshots, and on Sundays I delete the weekly snapshot:
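Sketched out (assuming read-only reference snapshots kept next to the containers, e.g. /var/lib/lxc/$ct.monthly - paths are illustrative):

```bash
# Rotate the reference snapshots that incrementals are based on.
snapdir="/var/lib/lxc"
if [ "$(date +%d)" = "01" ]; then                      # first of the month
    btrfs subvolume delete "$snapdir/$ct.monthly" 2>/dev/null
    btrfs subvolume delete "$snapdir/$ct.weekly"  2>/dev/null
elif [ "$(date +%u)" = "7" ]; then                     # Sunday
    btrfs subvolume delete "$snapdir/$ct.weekly"  2>/dev/null
fi
```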
Then the backup logic for each container works by detecting what snapshots are missing. So something like:
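Continuing the sketch above (push_snapshot takes the container, a label, and the parent to send incrementally against; an empty parent means a full send):

```bash
# Recreate whichever reference snapshot is missing and push it.
src="/var/lib/lxc/$ct/rootfs"

if [ ! -e "$snapdir/$ct.monthly" ]; then
    btrfs subvolume snapshot -r "$src" "$snapdir/$ct.monthly"
    push_snapshot "$ct" "monthly" ""                   # full backup
fi
if [ ! -e "$snapdir/$ct.weekly" ]; then
    btrfs subvolume snapshot -r "$src" "$snapdir/$ct.weekly"
    push_snapshot "$ct" "weekly" "$ct.monthly"         # incremental vs monthly
fi

# Dailies are always incremental against the weekly reference snapshot.
daily="daily.$(date +%F)"
btrfs subvolume snapshot -r "$src" "$snapdir/$ct.$daily"
push_snapshot "$ct" "$daily" "$ct.weekly"
btrfs subvolume delete "$snapdir/$ct.$daily"           # only the reference snapshots stay local
```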
The missing piece of the puzzle is the push_snapshot function, which sends the snapshot to S3 (minio) using rclone. In this function, the headers are there so that a restore script can fetch all the necessary parts to reassemble a point-in-time snapshot. Set GPGCMD to the encryption process. The actual snapshot backup is basically one line (and it doesn't create a whole lot of temporary files, so it does OK if disk space is limited or nearly full).
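A rough sketch of what that could look like - the remote name, metadata header names and the GPG recipient are all illustrative here, with the metadata attached via rclone's --header-upload flag:

```bash
GPGCMD="gpg --encrypt --recipient backups@example.com"   # set to whatever encryption you use

push_snapshot() {
    local ct="$1" label="$2" parent="$3"
    local parent_args=()
    [ -n "$parent" ] && parent_args=(-p "$snapdir/$parent")

    # btrfs send | encrypt | stream straight into S3; the object metadata
    # records which parent (if any) a restore script has to fetch first.
    btrfs send "${parent_args[@]}" "$snapdir/$ct.$label" \
        | $GPGCMD \
        | rclone rcat "minio:backups/$ct/$label.btrfs.gpg" \
            --header-upload "X-Amz-Meta-Parent: $parent" \
            --header-upload "X-Amz-Meta-Container: $ct"
}
```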
I do a few extra things like cpu limiting, reading parameters from config files, and getting a "token" from my minio server to limit the number of concurrent uploads - I have several dozen hosts and if they all try to back up their containers simultaneously it kills the I/O of the minio server.
Is that what you wanted to know?
How it looks in minio. The backup list:
[screenshot: the list of backup objects in the minio bucket]
The metadata which is used to reassemble the snapshot:
[screenshot: the object metadata]
Everything is encrypted before reaching the server so the only thing "exposed" is the container name. So it should be OK for any S3 storage, but mine is on my own minio.