We love big disks (the storage thread)
There is a lot of interesting information spread in multiple threads. Please continue the conversation here.
@AuroraZero said:
@host_c said:
If you go HW RAID (cache on the controller), you only have to worry about the VMs running plus the overhead they add on the node.
With ZFS you get flexibility (expand the pool on the go as you change/upgrade the drives), but it will eat RAM and CPU and induce latency.
With HW RAID you get the lowest latency and no CPU or RAM usage on the node (all that assuming a modern HW RAID controller, not something from two decades ago), but you cannot expand the pool on the go (you will need to empty the node, decommission the RAID array, put in the new drives, and wait a few days for them to initialize). It is a trade-off; we pick our poison. (I will go for low latency any day of the week.)
Until ZFS crashes then let the cluster begin!!!!
I use Proxmox and ZFS. Don't let my cluster explode!
Comments
Do you have 1 GB of RAM per 1 TB of data??? If you have a 4-bay setup, did you use RAID 10? Or RAID 5, since you wished for "as much space as I can have"? If the latter, you might run into a problem at some point.
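To put rough numbers on that rule of thumb, here is a minimal sketch. The 1 GB RAM per 1 TB guideline is the common ZFS folk rule quoted above; the 4 TB drive size is an assumed example value, not from the thread:

```python
# Sketch: usable capacity and the "1 GB RAM per 1 TB" ZFS rule of thumb
# for a hypothetical 4-bay box. Drive size is an assumed example value.

def usable_tb(n_drives: int, drive_tb: float, layout: str) -> float:
    """Raw usable capacity (before filesystem overhead) per layout."""
    if layout == "raid10":          # striped mirrors: half the raw space
        return (n_drives // 2) * drive_tb
    if layout == "raidz1":          # single parity: lose one drive's worth
        return (n_drives - 1) * drive_tb
    raise ValueError(layout)

def suggested_ram_gb(pool_tb: float) -> float:
    """Common ZFS guideline: roughly 1 GB of RAM per TB of pool data."""
    return pool_tb * 1.0

bays, size = 4, 4.0                 # assumed: 4 bays, 4 TB drives
for layout in ("raid10", "raidz1"):
    cap = usable_tb(bays, size, layout)
    print(f"{layout}: {cap:.0f} TB usable, ~{suggested_ram_gb(cap):.0f} GB RAM suggested")
```

RAIDZ1 buys you an extra drive's worth of space, which is exactly the temptation the warning here is about.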
Jokes aside, this will also make a good read; I did not wish to double post:
https://lowendspirit.com/discussion/9570/whats-the-difference-between-software-raid-vs-hardware-raid-now-in-2025#latest
PS: That is a node. A cluster implies a few nodes (2 at minimum) that are managed/linked together/clustered.
You are fine
Regardless of whether you use HW or SW RAID, please do not use RAID5/Z1.
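A back-of-the-envelope for why RAID5/Z1 gets this reputation: during a rebuild, every surviving drive must be read in full, and a single unrecoverable read error (URE) can sink the rebuild. A hedged sketch; the 10^14-bit URE spec and 12 TB drive size are assumed illustration values (real drives, especially enterprise ones, are often rated better):

```python
import math

def rebuild_failure_prob(tb_read: float, ure_rate_bits: float = 1e14) -> float:
    """Probability of hitting at least one URE while reading tb_read terabytes,
    assuming independent errors at 1 per ure_rate_bits bits (spec-sheet style)."""
    bits = tb_read * 1e12 * 8
    # P(no error) = (1 - 1/ure)^bits, computed stably via log1p
    return 1.0 - math.exp(bits * math.log1p(-1.0 / ure_rate_bits))

# Assumed example: a 4x12 TB RAIDZ1 rebuild reads the 3 surviving drives in full
p = rebuild_failure_prob(3 * 12.0)
print(f"chance of at least one URE during rebuild: {p:.0%}")
```

Numbers in this ballpark are why mirrors or RAIDZ2 are the usual recommendation once drives get big.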
PS: @imok
If I can find a picture, I have a tale that is precisely "seconds from disaster".
I will search for it and share what happened on a Sunday that almost led to 108 TB of usable data going out the door, on a pretty new server.
Give me till tomorrow to search for the pic.
Host-C - VPS & Storage VPS Services – Reliable, Scalable and Fast - AS211462
"If there is no struggle there is no progress"
Yes! Double D's are my most loved parts of the computer. Double Disks in RAID 1 for the champions.
Real men go commando raid 0 no backups!!!
Free Hosting at YetiNode | MicroNode| Cryptid Security | URL Shortener | LaunchVPS | ExtraVM | Host-C | In the Node, or Out of the Loop?
Yeah, I have a Proxmox cluster with 3 nodes, but no shared storage yet, nor HA. I wish I could afford it, but I don't think it's possible over a 1 Gbps link, right?
Also, I would have to set up switches and stuff like that, right? Seems a bit complicated, and if it fails... Oh man, I don't want to think about it. ZX Host comes to my mind.
I like big disks and I cannot lie
Head Janitor @ LES • About • Rules • Support
Any failures?
Sexy shit right there!!!!!
What if I have 1 TB of RAM and 1 GB of data? What happens to me then?
On a side note, can anyone mail me 8 sticks of 128 GB DDR4 ECC RAM?
Never make the same mistake twice. There are so many new ones to make.
It’s OK if you disagree with me. I can’t force you to be right.
what speed?
I bench YABS 24/7/365 unless it's a leap year.
Only 2400 MHz
Large and girthy di(s)ks indeed! Congrats.
All my life, I've been consoling myself that size doesn't matter...
🔧 BikeGremlin guides & resources
Eh... we are still talking about "disk", with a "S" right? Right?!?
Maybe with him ya never know anymore
What do you mean - you know I'm all about desks!
My home server
CPU: Ryzen 9 5900X
RAM: 64GB
Other places you can find me
You are about Pigeons!!!!
What are you using the 100 TB of data for? And what kind of backup do you use for the 100 TB of data?
How many pigeons can you fit in a TB?
3 bigguns and one petite one; piggies take up a lot of room
Linux ISOs, family data archiving, some data processing for work stuff (need a few TB fairly regularly for research), and local cloud storage.
Nothing on there is too important, so no real backups for it. Anything that is important is backed up with borg elsewhere 🙏
Same. Must admit I'm a little puzzled by storage VPS in general.
Mission-critical stuff goes to big providers (Hetzner and up). Linux ISOs & friends I'll keep two copies of on the local LAN - acceptable risk profile, and I'd rather not have Linux ISOs with my name on them floating about the internet.
Clearly there is demand for it, so someone has a use case; I just don't get it.
big boi
ZFS array:
Totals:
I have another server @ home with ~10 TB of SSD space (same raidz2 config), but can’t find my SSH key for that right now (on my phone, anyway).
Mainly use “big boi” for storing ISOs, local backups, etc. CPU is a 5600X, as my 5800X was about to turn the stock cooler into molten aluminum.
Edit: Ought to get more backups going. Only have my important stuff copied to two places ATM, with one being dirtbox-tier (questionable reliability) lol
This 2TB NVMe is dying?
Something is wrong with this forum. Every time I read this thread on front-page, I read it as "We love big dicks (the storage thread)".
Please stop the planet! I wish to get off!
It's fine if you are into that. I don't judge people.
#nohomo
Yes and no.
It heavily depends on how you use it right now and how it has been used in the past. Yes, it has blocks dying, but it also still has a lot of reserve.
While it is quite aged already, this can be normal, especially if it is filled up with stuff that rarely moves and the daily write load always hits the same few sectors.
Background: for an SSD/NVMe to last long, it needs to balance its writes across all available blocks as much as possible.
So if a lot of its capacity is occupied by stuff that is only read and never written/removed, then obviously these "blocked" sectors cannot take part in the balancing scheme, and only the free blocks have to handle the load.
If this is the case here, simply copying long-existing data into the now-free area and deleting the original data afterwards can help extend the NVMe's lifespan, because other sectors, ones that haven't been written to in a long time, will take the daily load in the future.
If your data across the whole disk changes a lot, then this does not apply, and the disk probably is indeed dying.
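A toy model of the effect described above (block counts and write volume are made-up illustration values, not real firmware behavior): if static data pins most blocks, the few remaining free blocks absorb every erase cycle.

```python
# Toy wear-leveling model: compare peak erase counts when static data
# pins most blocks vs. when writes can rotate over the whole drive.
# All numbers are illustrative, not real firmware behavior.

def peak_erases(total_blocks: int, static_blocks: int, writes: int) -> int:
    """Spread `writes` erase cycles round-robin over the non-static blocks
    and return the worst-case (max) erase count per block."""
    free = total_blocks - static_blocks
    return -(-writes // free)   # ceiling division

TOTAL, WRITES = 100, 10_000
pinned = peak_erases(TOTAL, static_blocks=90, writes=WRITES)   # cold data never moves
rotated = peak_erases(TOTAL, static_blocks=0, writes=WRITES)   # data gets relocated
print(f"max erases with 90% static data: {pinned}")   # 1000
print(f"max erases with relocation:      {rotated}")  # 100
```

Same write volume, ten times the wear on the busiest blocks, which is exactly why shuffling cold data around can buy the drive time.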
Wait, woot, 2 TB SSD dying? Or just the NVMe?
I had a VM that kept running, but I couldn’t take backups. I’m not sure if the issue was related to writing the backup or reading the data.
It might have been unable to write data, because after freeing up some space, the backup succeeded.
If you have VMs running off disk image files, try running fstrim regularly within the VMs, or re-sparse the images regularly. This should help free space and thereby increase the available blocks for the NVMe to spread writes across.