Best way to add storage to VPS
I started using my VPS as a node on my Syncthing network.
I'm thinking of signing up for Backblaze B2 and mounting it using rclone. I have no experience with either one, but it's fun to learn.
More than anything, I'm curious to hear how other people approach this problem. I've seen mentions of GDrive, but I'm trying to stay away from Google/Amazon and the like.
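For the curious, the setup I have in mind looks roughly like this (remote name, bucket, and mount point are placeholders; I haven't tested any of it yet):

```
# create a B2 remote (uses a key ID + application key from the B2 console)
rclone config create b2remote b2 account YOUR_KEY_ID key YOUR_APPLICATION_KEY

# mount a bucket; --vfs-cache-mode writes makes it behave more like a local FS
rclone mount b2remote:my-bucket /mnt/b2 --vfs-cache-mode writes --daemon
```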
Comments
Juicefs
It all depends on the performance I need
I regularly use Hetzner storage boxes and mount them via sshfs.
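For a storage box that's basically a one-liner (user and mount point are examples; note that Hetzner serves SSH on port 23 for these):

```
# mount the storage box over sshfs (Hetzner uses port 23)
sshfs -p 23 uXXXXXX@uXXXXXX.your-storagebox.de:/home /mnt/storagebox
```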
I also use object storage a lot for my backups, but using the S3 API. (Scaleway object storage)
I've never tried mounting as a volume, but it seems feasible.
https://github.com/s3fs-fuse/s3fs-fuse
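A minimal sketch for a Scaleway bucket, assuming fr-par (keys, bucket name, and endpoint are placeholders):

```
# s3fs reads credentials in the form ACCESS_KEY_ID:SECRET_ACCESS_KEY
echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# mount the bucket via the S3 API endpoint
s3fs my-bucket /mnt/s3 -o passwd_file=~/.passwd-s3fs \
    -o url=https://s3.fr-par.scw.cloud -o use_path_request_style
```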
This is a good comment.
Sometimes I use NFS, sometimes iSCSI+bcache.
I've had good experience with sshfs for mounting remote storage.
Recommended hosts:
Letbox, Data ideas, Hetzner
I did not know about this one. I'll look into it. Any particular reason you like it?
I'm not worried about performance at all. This is basically a fallback for when I'm not at home, or can't use my VPN for some reason (like at school, where they block Wireguard... why?!). The rest of the time I have direct access to my home server and can pull from that.
Ironically, I had thought about SSHFS, but in my experience (a long time ago) it was unreliable, which is something I do care about.
@tetech I use NFS at home, but for remote access I would need to add the server to my VPN or something of the sort, no? I'm trying to avoid that because it would mean opening my home server to an untrusted machine.
That's another vote. I'll definitely look into it again.
Thank you all for your replies.
Interesting aside: I never realized how quickly stuff gets indexed by search engines nowadays. I was searching on DuckDuckGo (which I believe uses Bing) for "add remote storage to vps", and this thread is already one of the results. 🤯
I've got two scenarios for this. For a home server, I set up a Wireguard connection from the home server to the VPS, so no ports need to be opened on the home server, and iptables locks everything down, i.e. the Wireguard port is only accessible from the home IP, and the NFS port is only open on the Wireguard interface. Everything else is dropped in both directions.
The other scenario is a "virtual cloud" (sharing among VPSes); in that case I use tinc.
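For the home-server scenario, the VPS-side firewall boils down to a few rules like these (addresses and ports are examples, and you'd keep a rule for your own SSH access, of course):

```
# Wireguard handshakes only from the home IP
iptables -A INPUT -p udp --dport 51820 -s 203.0.113.10 -j ACCEPT
# NFS only over the Wireguard interface
iptables -A INPUT -i wg0 -p tcp --dport 2049 -j ACCEPT
# default: drop everything else
iptables -P INPUT DROP
```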
NFS mount, rclone mount, MinIO; if no mounting is needed: Syncthing in send-only mode (host your own relay if the public one is too slow).
I did sshfs too, but a conventional storage VPS (even better with RAID10) worked best for me.
Ohhh... this might work as well. Use the VPS for the transfer, not the storage. Only downside is that I liked the idea of the VPS having an offsite copy of the files, but I really should focus on having proper backups instead. Thanks!
Yeah, my BF plan is to get a new storage VPS. Thank you.
Best method: buy a VPS with large SSD.
Second method: buy a hybrid storage VPS with SSD for software and HDD for storage, HostBrr and BuyVM have these.
Worst method: all kinds of NFS or Rclone mounts; you never know when a glitch causes data loss.
I've actually been eyeing HostBrr for a bit, just waiting for a good deal (1TB/$30 w/IPv4).
Thanks for replying.
It's not. You can get micro-cuts in the connection between your servers, and if you don't pass the right options when mounting the volume with sshfs, then indeed it can give the impression of being unreliable, because it will never re-mount the volume at the slightest network problem.
Take a look at the options:
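Something like this has worked for me (the interval values are just a starting point):

```
# re-mount after drops instead of hanging on a dead connection
sshfs user@host:/remote /mnt/remote \
    -o reconnect \
    -o ServerAliveInterval=15 \
    -o ServerAliveCountMax=3
```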
There is of course an overhead compared to NFS, as traffic is encrypted.
Fast, reliable, and, most importantly, POSIX-compliant.
Try it for yourself. Rclone is fine, but I like juicefs more.
Maybe that's what I ran into back then.
Ah, I recently had to deal with those to get an autossh to work reliably.
Makes sense. And I see that as a bonus. Thanks again for the info. I appreciate it.
Nice. Will do. Thank you!
Looks like an interesting solution; first time I've heard of it. Are you using it?
At the moment: Hetzner Storage Box via SSHFS in a CCX13. Thinking about switching to a php-friends box on BF; I would need some storage for Nextcloud. php-friends could give 500 GB NFS/SMB storage; an alternative would be e2 via s3fs or goofys. The latter is faster, but unfortunately uses mtime as ctime and atime.
Yes. Using it together with syncthing. Works like a charm.
Thanks for the hint on juicefs. I used gocryptfs on a storage box with sshfs before, which added a lot of overhead.
@flo82 Any experience with which backend is best for metadata?
Juicefs saves metadata for files in a separate DB, while the binaries are uploaded as chunks to a different backend. This is why juicefs is blazing fast. Furthermore, juicefs allows caching of chunks, which makes it as fast as your local drive. Encryption can be configured on the client side, or you can use server-side encryption if the backend supports it.
E.g. you can use a (distributed) redis server for backing metadata and S3 with server-side encryption for binary chunks. This is what I'm doing. Please be aware: if you lose the metadata DB, you'll lose your data. So back up the metadata DB frequently.
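To make that concrete, my setup looks roughly like this (redis URL, bucket, volume name, and cache size are examples):

```
# format a volume: metadata goes to redis, file chunks go to S3
juicefs format --storage s3 \
    --bucket https://my-bucket.s3.example.com \
    redis://:password@redis-host:6379/1 myvolume

# mount with a local chunk cache (size in MiB)
juicefs mount --cache-dir /var/cache/juicefs --cache-size 10240 \
    redis://:password@redis-host:6379/1 /mnt/jfs
```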
In my experience "rclone mount" works better than sshfs, even for ssh itself. With local caching enabled I can watch 1080p video files from the mount point, with an 80 ms ping. Or, if you have a low ping (<10 ms), CIFS or NFS will work best.
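Roughly the flags I mean (remote, mount point, and sizes are whatever fits your box):

```
rclone mount remote:media /mnt/media \
    --vfs-cache-mode full \
    --vfs-cache-max-size 20G \
    --buffer-size 64M
```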
For big files this works fine; you will run into problems if you have many small files. So it depends on the use case.
btw - here is a performance benchmark on juicefs: https://juicefs.com/docs/community/benchmark/
I'm not affiliated with the company behind juicefs - I just like it very much :-)
Yeah, I read it and thought I should add redis to my backups. PostgreSQL seems too slow on my box. Fortunately, since 1.1.0 the client also backs up the metadata to the repo every hour.
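For the manual side, a periodic dump of the metadata engine is enough (the redis URL is an example; ship the file off-box afterwards):

```
# dump all metadata to a JSON file that juicefs load can restore from
juicefs dump redis://:password@redis-host:6379/1 meta-backup.json
```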