Comments
Well, I remember paying over $1,000 for a 1 TB NVMe not that long ago. Market pressure is set to push prices way higher than back when 1 TB was a big NVMe, so... interesting times ahead.
TierHive - Hourly VPS - NAT Native - /24 per customer - Lab in the cloud - Free to try. | I am Anthony Smith
FREE tokens when you sign up, try before you buy. | Join us on Reddit
@havoc
I have played with 2x 24 SSDs (128 to 256 GB) in a Dell MD1220 + Optane or NVMe as cache, 256 GB RAM, a high-frequency CPU, and every ZFS tweak you can imagine. Used Debian, Ubuntu, FreeNAS, and TrueNAS Core/Scale.
Same for 12 or 24 HDDs in a Dell MD1200.
Sincerely, a total waste of time.
The second you hit it with random high IO from VMs over 2x 10 Gbps or 2x 25 Gbps, it will utterly suck.
Storage exported via iSCSI was much faster than NFS, but I did not fall off my chair performance-wise.
Other than the ability to extend a pool on the fly by replacing drives with larger ones, and the strong data integrity ZFS gives you, I personally see no real value in it for our use case.
/2c after burning far too many hours in tests.
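For anyone wanting to retrace this kind of test, here is a minimal sketch of the sort of layout described above: striped mirrors of the shelf SSDs with an Optane/NVMe device split between SLOG and L2ARC. All device names, sizes, and the pool name are hypothetical placeholders, not the poster's actual config.

```shell
# Hypothetical sketch only - device names (sdb..sde, nvme0n1p*) are placeholders.
# Striped mirrors out of the MD1220 SSDs (repeat mirror vdevs for all drives).
zpool create -o ashift=12 tank \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde

# Put the Optane/NVMe in as separate log (SLOG) and cache (L2ARC) devices.
zpool add tank log /dev/nvme0n1p1
zpool add tank cache /dev/nvme0n1p2

# Typical VM-workload tweaks: a zvol with a smaller volblocksize for VM
# disks, and atime off on the pool.
zfs create -o volblocksize=16k -V 500G tank/vmstore
zfs set atime=off tank
```

Even with a layout like this, the post's point stands: sync-heavy random VM IO over the network is exactly the pattern where tuning tends to disappoint.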
Host-C | Storage by Design | AS211462
“If it can’t guarantee behavior under load, it doesn’t belong in production.”
I spent about 4 weeks tuning and exporting storage over 40G from TrueNAS using iSCSI, and the performance in Proxmox was always abysmal. I switched to an NFS export and got 7 Gbps throughput (up from 1.3) and a 5x increase in available IOPS.
Any idea what I did wrong?
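Hard to say without numbers, but one thing worth ruling out is that the two backends were measured under different IO patterns. A hedged fio sketch for comparing them from inside a guest - the test file path, sizes, and runtimes are placeholders, not a claim about the original setup:

```shell
# Hypothetical benchmark sketch - run inside a VM on each storage backend.
# /tmp/fio.test and the sizes/runtimes below are placeholders.

# Random 4k sync writes at queue depth 32 - approximates busy VM IO,
# and is the case where iSCSI-on-zvol sync handling often hurts most.
fio --name=randwrite --filename=/tmp/fio.test --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --sync=1 --runtime=60 --time_based --group_reporting

# Sequential 1M reads for comparison with the 7 Gbps throughput figure above.
fio --name=seqread --filename=/tmp/fio.test --size=4G \
    --rw=read --bs=1M --iodepth=8 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting
```

If the random-write numbers diverge sharply between iSCSI and NFS while sequential reads don't, the gap is likely in sync-write handling rather than the transport itself.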