Options for sharing accumulated VPS storage space as a single volume?

wankelwankel OG
edited September 2022 in Technical

Hi all,

One way or another I ended up with a very reasonable (quite sure it's still single digits) number of VPSes. Most of them have more storage space than they need, but a couple could use some more.

Is there a 'best' way to create some JBOD-like storage on one of my VPSes out of the leftover bits of the others?

I love SSHFS and use it to bolt all kinds of things together. It gives a separate directory for each mount point, so with 5 VPSes that each have 2 GB left, I end up with 5 separate directories on the 6th server. Is there a way to homogenize those into a single 10 GB directory or volume?
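
Roughly what that looks like now (hostnames and paths are just placeholders):

    sshfs vps1:/srv/spare /mnt/vps1 -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
    sshfs vps2:/srv/spare /mnt/vps2 -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
    # ...one mount per spare VPS, each ending up as its own directory under /mnt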

I hardly have any experience with NFS or iSCSI, and I suspect those won't be happy with the relatively high latency between hosts. Would an open-source S3 storage framework work? Would Ceph or Gluster give me what I'm looking for?

The storage is mostly needed for Nextcloud. It doesn't need to be very fast. All VPSes run Debian.

Comments

  • Distributed FS like Tahoe or MooseFS, or distributed DB / key-value store like Cockroach or Cassandra. If you want them to be safe, they're generally not going to be fast.

    Ceph, Gluster, MinIO, etc. are all designed for use within a single site (stretch clusters / active-active notwithstanding) and cannot tolerate high latency between peers. What happens when one node dies forever -- do you want to be able to reconstruct its data from the others (i.e., parity RAID)? What happens when one node is just temporarily offline? What happens when half the nodes are disconnected from the other half, but each half can still talk amongst itself (i.e., split-brain)?

    Thanked by (3): wankel, lentro, chimichurri
  • Thanks for your insights.

    I'm not looking for distributed, fail-safe storage, but all systems that provide storage over the network seem (obviously) to be built with these features in mind.

    In short: it would be nice for experimenting, but it adds quite a bit of overhead for those few gigabytes. Manually assigning an SSHFS mount to directories that are getting quite large and moving stuff around is not much fun, but it is far less complex.

  • If you're OK with the decreased availability and just need to manage multiple SSHFS mounts as one, consider mergerfs; a rough sketch is below.
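
    Untested sketch, assuming the SSHFS mounts from above already exist under /mnt/vps1 ... /mnt/vps5 (paths and the pool mount point are just examples):

      apt install mergerfs
      mkdir -p /mnt/pool
      mergerfs -o defaults,allow_other,minfreespace=512M,category.create=mfs \
          /mnt/vps1:/mnt/vps2:/mnt/vps3:/mnt/vps4:/mnt/vps5 /mnt/pool

    category.create=mfs writes each new file to the branch with the most free space; nothing is striped, so losing one VPS only loses the files that happen to live on it.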

    Thanked by (2): bdl, wankel
  • I have/had good experiences with AUFS and was leaning towards looking into overlay filesystems. MergerFS was not on my radar yet, but it seems the best fit. Thanks!

  • You could also use WebDAV and reverse proxy some of the directories.

    Thanked by (1): wankel
  • @mwt said:
    You could also use WebDAV and reverse proxy some of the directories.

    Thanks for your suggestion!

    I don't understand what you mean by 'reverse proxy' the directories; could you elaborate?

    In my experience, WebDAV is rather slower than SSHFS, all other variables being the same. Do you have side-by-side experience where WebDAV came out better?

  • @wankel said:

    @mwt said:
    You could also use WebDAV and reverse proxy some of the directories.

    Thanks for your suggestion!

    I don't understand what you mean by 'reverse proxy' the directories; could you elaborate?

    In my experience, WebDAV is rather slower than SSHFS, all other variables being the same. Do you have side-by-side experience where WebDAV came out better?

    You set up a WebDAV server on each node. Then select one machine to be the main node. Reverse proxy some path like /node2/ from the main node to the root directory of one of the secondary nodes.

    I like this because it doesn't involve actually mounting the drive on the main server. This might not suit you.

    I think WebDAV's speed depends on the circumstances. I'd imagine it's very good at sequential reads.
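
    Rough sketch of the nginx side of that; the hostnames, IPs, ports and paths below are made up, and you'd want auth and TLS on top of it:

      # /etc/nginx/conf.d/dav.conf on each secondary node
      # (nginx's core dav module only handles the basic verbs; full WebDAV
      # clients may also need nginx-dav-ext-module for PROPFIND/OPTIONS)
      server {
          listen 8080;
          location / {
              root /srv/spare;
              dav_methods PUT DELETE MKCOL COPY MOVE;
              create_full_put_path on;
              client_max_body_size 0;
          }
      }

      # /etc/nginx/conf.d/pool.conf on the main node
      server {
          listen 80;
          server_name storage.example.com;

          location /node2/ {
              # the trailing slash on proxy_pass strips the /node2/ prefix
              proxy_pass http://203.0.113.2:8080/;
              proxy_set_header Host $host;
              client_max_body_size 0;
          }
          # repeat a location block per node: /node3/, /node4/, ...
      }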

    Thanked by (1): wankel