I used to use hardware RAID exclusively, but nowadays I stick with Linux software RAID.
I've had instances of RAID controllers failing, which caused disruption, but the main reason for me was the inability to view the full S.M.A.R.T. metrics of the disks. I can only speak from my own experience, which has been with LSI 92xx controllers.
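That matches the usual smartmontools behaviour: with md RAID the member disks are ordinary block devices, while behind a hardware controller you need a passthrough device type. A sketch — the slot number below is a placeholder, not from a real system:

```shell
# Software RAID: disks are plain block devices, smartctl reads them directly.
smartctl -a /dev/sda

# Behind an LSI/MegaRAID controller the OS only sees the virtual drive, so
# physical disks must be addressed through the controller with -d megaraid,N
# (slot 0 is a placeholder -- enumerate slots with your controller's tools),
# and even then not every metric necessarily comes through.
smartctl -a -d megaraid,0 /dev/sda
```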
For some of the posts above talking about Debian EFI RAID, when I was researching this, I came across https://wiki.debian.org/UEFI#RAID_for_the_EFI_System_Partition
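The approach on that wiki page boils down to keeping a second, otherwise-idle ESP as a copy of the live one. A minimal sketch — the mount points and the idea of calling it from a GRUB/kernel hook are assumptions to adapt to your own layout:

```shell
#!/bin/sh
# Hypothetical helper: mirror the contents of the live EFI System Partition
# onto a second ESP so the machine can still boot if the first disk dies.
# Paths are made up -- on Debian the live ESP is usually /boot/efi.
sync_esp() {
    src="$1"; dst="$2"
    rm -rf "${dst:?}"/*          # drop stale files on the mirror ESP
    cp -a "$src"/. "$dst"/       # copy everything, preserving attributes
}

# e.g. run it after every bootloader update, from a postinst hook:
# sync_esp /boot/efi /boot/efi2
```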
🌐 Ethernet Servers Ltd – 10+ Years Online
Shared, VPS, Dedicated Servers & Domains – www.ethernetservers.com
Thanks! This is how I do it today; it's just annoying, and I need to build preseed files and scripts so it's more repeatable as I grow the fleet.
Rock Solid Web Hosting, VPS & VDS with a Refreshing Approach - Xeon Scalable Gold, DDoS protection and Enterprise Hardware! HostBilby Inc.
So the data is OK, but EFI will fail? How does the GRUB hook work? How do I set it up?
Never make the same mistake twice. There are so many new ones to make.
It’s OK if you disagree with me. I can’t force you to be right.
It's Friday. Decided to chuck 3 old laptop hard drives into my spare server and set up a RAID 5 array...
The Ubuntu installer REFUSED to recognize the disks for RAID. It says "no active drives", even though I can still partition the drives fine on the same installer screen...
So I installed Ubuntu on an NVMe SSD, booted from it, and am now trying to do an mdadm RAID 5 setup. It's going to take 2 hours to fill all 450GB of disk space... The drives may be slow, but I didn't expect them to be this slow.
I was even browsing SSDs on Amazon to buy, but once I realized I'd need 3 or 4 of them, I decided to stop... Can't find any good eBay deals on them either.
Anyway, another hour left on the RAID setup. Let's see what happens after this.
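For anyone following along, the mdadm RAID 5 creation above typically looks something like this — the /dev/sd[bcd] device names are placeholders, so check `lsblk` on your own box first:

```shell
# Placeholders: three spare drives showing up as sdb, sdc and sdd.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# The multi-hour wait is the initial sync; progress shows up here:
cat /proc/mdstat

# The array is usable even while it syncs (just slower):
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
```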
Based on my experience, RAID 5 is one of the worst RAID options. Although it may seem attractive, I lost data with RAID 5, and the speed isn't that great either. In fact, the more disks you put in a RAID 5 array, the less reliable it becomes: there is only one disk's worth of parity, so any second failure during a rebuild kills the array, and every extra disk adds another way for that to happen.
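The "more disks, less reliable" point can be illustrated with a toy calculation — the 2% per-disk failure chance during a rebuild is an arbitrary illustrative number, not a real drive statistic:

```shell
# Toy model: after one disk dies, a RAID 5 rebuild survives only if none of
# the remaining n-1 disks fails before it finishes. With per-disk failure
# probability p during the rebuild window, the rebuild fails with
# probability 1-(1-p)^(n-1), which grows with every disk added.
for n in 3 6 12; do
    awk -v n="$n" -v p=0.02 \
        'BEGIN { printf "%d disks: %.1f%% chance the rebuild fails\n", n, 100 * (1 - (1 - p) ^ (n - 1)) }'
done
```

With these made-up numbers, going from 3 to 12 disks takes the chance of a failed rebuild from about 4% to about 20%.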
It's OK. I just wanted to try it out. I would have preferred RAID 10, but I've only got 3 disks.
On my real servers, I do automated data backups, not RAID. RAID may be great for redundancy and for keeping servers up through disk failures, but my servers are not mission-critical, so going down for a while is not a big deal. As long as my data is properly backed up, I can restore it given time, and maybe even improve the setup along the way.
buy one more disk
I bench YABS 24/7/365 unless it's a leap year.
If I wanted to do a proper RAID setup, maybe. But I am just running it off old 500GB laptop hard drives. Moreover, all 3 drives in the RAID are different brands (a big no-no in RAID). So if I wanted to buy drives, I would need 4 identical ones, probably SSDs, between 1 and 4 TB in size. And if I go with RAID 10 as planned, I would be paying for 4 drives to get the space of 2. That's a bit rich for my lowend blood...
What I do now is run my drives standalone (without RAID) and run a disk clone to an 8 TB HDD sitting in the server. So if a drive fails, I can still restore the VM from the disk image (FYI, the non-OS site data gets backed up to a different server).
Anyway, from the little playing around I did, mdadm was the easiest RAID setup I've done. I am more used to setting it up from the motherboard's RAID controller or a hardware RAID controller, but the mdadm setup was basically one command and a long wait for it to finish.
As for Ubuntu, the RAID options in the Ubuntu Server GUI installer are very lacking. I'll reset the server and try the Debian or CentOS installer (not sure yet). Hopefully the RAID setup works better on those.
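The clone-to-big-disk safety net described above can be sketched with dd — every device name and path here is made up, and dd will happily destroy data if `if=` and `of=` are swapped, so double-check before running anything like it:

```shell
# Hypothetical names: /dev/sdb is a standalone VM disk, /mnt/backup8tb is
# the 8 TB HDD. Clone the whole device into a dated image file.
dd if=/dev/sdb of=/mnt/backup8tb/sdb-$(date +%F).img bs=4M conv=fsync status=progress
```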