Does this mean that the hardware failed, but not the disks?
I guess the 400 HDDs ran on (maybe) RAID 0, some disks failed, he can only recover 50 of them, and he doesn't have new HDDs to put the data on.
EDIT: We don't know for now whether it's RAID 0 or not, but I can't think of another reason for a node to go down with only 1/4 of the disks recoverable.
It was confirmed as RAID 5 a long time ago, I believe. Either way, I want to know what sort of chassis takes 400 disks.
The Storinator from 45Drives has an XL version that takes up to 60 drives in 4U of height; with an average rack of 40U that makes 600 drives per rack (not taking cooling into account here). IIRC their designs are open source, so anyone could build a clone of them for cheaper.
Imagine a fire and/or leaky roof near one of those racks.
Then again, given that space is probably cheap there, my bet is that it's something a little cheaper and a little less dense. Maybe the HDDs were not in any enclosure at all?
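For anyone who wants to play with that density maths, here is a quick back-of-the-envelope sketch (plain Python; the 60-bay 4U chassis, 40U of usable rack space and 12TB drives are just the assumptions from this thread, not confirmed specs):

```python
# Back-of-the-envelope rack density using the figures assumed above.
RACK_U = 40            # usable rack units (assumption, cooling/switching ignored)
CHASSIS_U = 4          # Storinator XL-style chassis height
BAYS_PER_CHASSIS = 60  # 3.5" bays per chassis
DRIVE_TB = 12          # drive size cociu mentioned

chassis_per_rack = RACK_U // CHASSIS_U                 # 10
drives_per_rack = chassis_per_rack * BAYS_PER_CHASSIS  # 600
raw_pb_per_rack = drives_per_rack * DRIVE_TB / 1000    # ~7.2 PB raw

print(f"{chassis_per_rack} chassis -> {drives_per_rack} drives, ~{raw_pb_per_rack:.1f} PB raw per rack")
```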
He did mention the word "rack", so the entire rack went nuts.
It's unlikely that a sane person would put 400 drives into RAID 5 in a single machine.
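To put rough numbers behind that last point: a single RAID 5 array survives exactly one failed disk, and rebuilding a wide array means reading every surviving disk end to end. A purely illustrative sketch; the 400 x 12TB figures come from cociu's own post, while the 1e-15 unrecoverable-read-error rate is just a typical datasheet value, not anything known about his drives:

```python
# Illustrative only: why one 400-disk RAID 5 would be asking for trouble.
DISKS = 400
DISK_TB = 12
URE_PER_BIT = 1e-15  # assumed unrecoverable read error rate (datasheet-style figure)

# RAID 5 reserves one disk's worth of parity, so it tolerates a single failure.
usable_tb = (DISKS - 1) * DISK_TB
tolerated_failures = 1

# Rebuilding the failed disk means reading all remaining disks in full.
bits_read = (DISKS - 1) * DISK_TB * 1e12 * 8
expected_read_errors = bits_read * URE_PER_BIT

print(f"usable: {usable_tb} TB, disk failures tolerated: {tolerated_failures}")
print(f"expected unrecoverable read errors during one rebuild: {expected_read_errors:.0f}")
```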
I remember when he was building it, he was looking for specific HP drive caddies, so it's unlikely to be a Storinator; probably an HP server with one giant HP DAS unit, or multiple of them.
https://inceptionhosting.com
Please do not use the PM system here for Inception Hosting support issues.
Agreed, no sane person would.
https://inceptionhosting.com
Please do not use the PM system here for Inception Hosting support issues.
What is OGF?
Other Green Forum - lowendtalk.com
Old Green Fungus
Michael from DragonWebHost & OnePoundEmail
An accurate description IMHO
THAN
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
Don’t make me come over and hit you over the head with my flip-flop.
“Technology is best when it brings people together.” – Matt Mullenweg
Hen
Nexus Bytes Ryzen Powered NVMe VPS | NYC|Miami|LA|London|Netherlands| Singapore|Tokyo
Storage VPS | LiteSpeed Powered Web Hosting + SSH access | Switcher Special |
Swedish or English?
“Technology is best when it brings people together.” – Matt Mullenweg
@seriesn the Mathematician...
As seen on OGF.
Did he just poke holes into the Cociu story?
———-
blog | exploring visually |
Ikea and WalMart.
Nexus Bytes Ryzen Powered NVMe VPS | NYC|Miami|LA|London|Netherlands| Singapore|Tokyo
Storage VPS | LiteSpeed Powered Web Hosting + SSH access | Switcher Special |
I don't know. I just wanted to make sense out of the whole thing.
Nexus Bytes Ryzen Powered NVMe VPS | NYC|Miami|LA|London|Netherlands| Singapore|Tokyo
Storage VPS | LiteSpeed Powered Web Hosting + SSH access | Switcher Special |
Cociu's message is more confused than usual. I didn't even try to make sense of it, let alone try to work out the numbers.
To be honest, it sounds unbelievable.
He's probably using these: https://www.ebay.com/itm/HP-Proliant-DL380e-Gen8-Storage-Server-Dual-8-Core-E5-2450L-P420-1GB-14x-LFF-Bay-/274431553423 (Correct me if I'm wrong)
These have 12 x 3.5" slots and 2 x 2.5" slots.
I would assume he uses the 3.5" ones for HDDs and 2.5" ones as OS disks.
These are 2U.
Let's say he uses 46U racks (rare but let's assume)
And let's say he only puts servers in these and doesn't use 1U for a ToR switch.
Then he can load 23 x 2U servers in a rack.
23 * 12 = 276 3.5" HDDs.
So for 400 HDDs, he needs another rack.
So he had 1.4 racks' worth of servers (33.3 in total) fail with a power outage.
Again, 33.3 servers had a RAID failure from a power outage at the same time.
He should be suing the power company and RAID card manufacturer along with the HDD manufacturer for damages.
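The same arithmetic as a quick script, in case anyone wants to tweak the guesses (rack height, bays per server and so on are assumptions from this thread, not confirmed details of cociu's setup):

```python
# Rough check of the DL380e scenario above; every input is a guess.
TOTAL_HDDS = 400
LFF_BAYS_PER_SERVER = 12   # 3.5" bays per server
SERVER_U = 2
RACK_U = 46                # generous rack, no U spent on a ToR switch

servers_per_rack = RACK_U // SERVER_U                    # 23
hdds_per_rack = servers_per_rack * LFF_BAYS_PER_SERVER   # 276
servers_for_400 = TOTAL_HDDS / LFF_BAYS_PER_SERVER       # ~33.3
racks_for_400 = TOTAL_HDDS / hdds_per_rack               # ~1.4

print(f"{hdds_per_rack} HDDs/rack -> {servers_for_400:.1f} servers and {racks_for_400:.1f} racks for 400 HDDs")
```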
I've got to wonder what kind of power conditioning he might have been using. Is the claim actually that all those HDDs were fried?
The last power outage I did notice was 23 days ago, but that is a bit far away.
Free NAT KVM | Free NAT LXC | Bobr
ITS WEDNESDAY MY DUDES
His claim:
On 12.05.2021 we found one big storage setup down, with 3000+ clients involved, so we tried to recover the data, but without success.
Unfortunately, out of 400 HDDs of 12 TB each we have recovered about 50 until now. The big problem in this period is buying new HDDs: because of the shitcoin Chia, the price of new hardware is almost 3x more expensive than normal, so I have dedicated all my time to finding a way to get new HDDs as quickly as I can, so that we have the possibility to migrate the data off the affected HDDs.
What I intend and hope to do is re-establish the old service until the clients can migrate their data, and offer the same service for 1 year for free to all clients involved in this. (To note: of all our services the issue is only in some storage plans; the rest are up and running, so the compensation will only be for the clients from this rack which was fucked.)
I have not connected here to ask for mercy, because I don't need it; I have connected to give a small update on the problem, as I have driven the Oradea to Bucharest route 3 times in 1 week.
We will come with a final decision on Monday, as I need 1 day for myself, because I am almost depressed because of this.
To me cociu comes across as someone who genuinely wants best for his clients, yet he's too nice in a sense that he under prices his stuff, which in turn causes him to get into financial trouble. He then tries to 'solve' this with some workarounds like double credit promos or shifting stuff (on paper) towards his Perfume company, but in the end that's just buying time and not tackling the actual problem: his prices are not sustainable.
I hope he recovers from this and, more importantly, learns from it. I wish him all the best, even though I have no services with him... but please stop lying about that magic 'faster than light' tunnel.
LinuxFreek.com
https://hostloc.com/thread-847254-1-1.html
Free NAT KVM | Free NAT LXC | Bobr
ITS WEDNESDAY MY DUDES
Anything interesting there to be translated @yoursunny ?
Hostsolutions.ro
Has become
Hostsolutions Row (controversy/strife)
Or
Hostsolutions रो (Ro spelt in Devnagri, means “to cry” in most parts of India)
———-
blog | exploring visually |
I forgot to mention that there was nothing interesting there.
Besides, the Chinese users did rage about the shitty English-to-Chinese translation.
Free NAT KVM | Free NAT LXC | Bobr
ITS WEDNESDAY MY DUDES
If it's local storage... with SAN storage it would be possible, but very expensive.
https://h20195.www2.hpe.com/v2/getpdf.aspx/c04607918.pdf
That's HP's smallest/cheapest alternative (an older 3PAR); I'm not expecting him to already have a Nimble storage array (without warranty).
“Technology is best when it brings people together.” – Matt Mullenweg
I have a different guess. Let's assume everything he says is actually true. Let's do this quick and dirty...
Any type of 2U dual E5 server (as cheap as possible) - 2U
(The main criteria for the 2U server are having 2x 10G SFP+ or better networking and the option for a SAS HBA.)
10x CSE-847 847JBOD-14 ... those are 45 disks in 4U -> 10 units in the rack => 40U
That gives you a total disk capacity of 45x10 -> 450 3.5" disks, plus up to 12x 3.5" disks from the server, in one 42U rack.
Then he probably runs 2x 10G LAGG or something to one of his other racks where the VPS nodes sit, and the rest is network storage mounts.
Here our single points of failure are:
the network
the RAID controller
the 2U server
Edit: he also stated he uses 12TB disks, so those are potentially SMR ... a lot of fun when attempting to rebuild a RAID array or ZFS pool.
★ MyRoot.PW ★ Dedicated Servers ★ LIR-Services ★ | ★ SiteTide Web-Hosting ★
★ MrVM ★ Virtual Servers ★ | ★ Blesta.Store ★ Blesta licenses and Add-ons at amazing Prices ★
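For completeness, the JBOD guess above in the same quick-and-dirty form, plus a feel for why the SMR remark matters for rebuilds. The throughput numbers are illustrative assumptions only; nobody here knows what drives or sustained write speeds are actually involved:

```python
# Sketch of the 2U head + 10x 45-bay JBOD guess, figures taken from the post above.
JBODS = 10
BAYS_PER_JBOD = 45
HEAD_SERVER_BAYS = 12
DISK_TB = 12

total_bays = JBODS * BAYS_PER_JBOD + HEAD_SERVER_BAYS  # 462 bays in one 42U rack
print(f'total 3.5" bays in the rack: {total_bays}')

def rebuild_hours(disk_tb: float, mb_per_s: float) -> float:
    """Naive time to rewrite one whole disk at a constant sustained rate."""
    return disk_tb * 1e12 / (mb_per_s * 1e6) / 3600

# Assumed sustained write rates: ~180 MB/s for a CMR disk,
# ~40 MB/s for an SMR disk once its CMR cache zone is exhausted.
print(f"CMR-ish rebuild: ~{rebuild_hours(DISK_TB, 180):.0f} h per 12 TB disk")
print(f"SMR worst case:  ~{rebuild_hours(DISK_TB, 40):.0f} h per 12 TB disk")
```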
This is getting messy.
I bench YABS 24/7/365 unless it's a leap year.
The DL380 LFF has up to 14 slots for 3.5" drives.
As he is talking about a rack, there could be around 20 of these servers (even more).
In reply to @serverian: there is also a 14 x 3.5" slot version of the DL380, which we use for our storage solutions, but that is also not enough for 400 drives in one rack (20 x 14 = 280).
But I also can't believe what he told us has happened. (I am nearly sure he installs the OS on the same array, or uses a USB drive / SD card for it.)
We have had 3 power outages since I started in 2013, but there were never more than a handful of dead drives afterwards.
Are you saying that he's using 2 CPUs (low-power and low-core-count ones, too) to host thousands of VMs?
It's a very optimistic guess to think he'd have that kind of cluster setup running fine.
They must be bare metal servers.
I’m only adding SAN storage as an option IF it actually was 400 disks that went tits up.
Honestly, I'm having trouble believing everything that @cociu has ever said, and now says, about his infrastructure.
“Technology is best when it brings people together.” – Matt Mullenweg