LowHosting RKVMPROTECTED16 - 1 month free gift from LowHosting.
I wish I could keep it forever.
AlphaVPS
Won a giveaway for 1 year
1GB Dedicated RAM
512GB RAID60 HDD
1.5TB Bandwidth
1 Shared CPU core
1Gbps Port
1 IPv4 IP
/64 of IPv6
BG (Bulgaria)
Dell 710 II with 2x L5630 and SAS-2 HDDs
Old homelab server that screams for faster disks.
hostingbot.com - Jacksonville - Ryzen Threadripper - 10Gbps - 512MB - $4.95/month
The plan title says Threadripper, but I think it's actually an AMD EPYC 7351, based on the AuthenticAMD Family 23 Model 1 Stepping 2 identifier Geekbench reports as the CPU type.
The AMD EPYC 7351 only has a turbo speed of 2.9 GHz. Looking at the reported clock speed, it's possibly a Ryzen with the CPU model set to a generic "AMD EPYC" in whichever hypervisor they're using.
That's true. Forgot to look at that.
From Geekbench:
Processor Information
Name: AMD EPYC Processor (with IBPB)
Topology: 2 Processors, 2 Cores
Identifier: AuthenticAMD Family 23 Model 1 Stepping 2
Base Frequency: 3.49 GHz
L1 Instruction Cache: 64.0 KB
L1 Data Cache: 32.0 KB
L2 Cache: 512 KB
L3 Cache: 8.00 MB
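As an aside: if you'd rather pull those identifiers from inside the VPS than from Geekbench, a quick Python sketch like this does it. Family 23 is Zen across the board, so EPYC vs. Ryzen can't be settled from the identifier alone, which is why the clock speed is the better tell here.

```python
# Minimal sketch: read the first CPU block from /proc/cpuinfo on the VPS
# itself. Family 23 (Zen) is reported by both first-gen EPYC and Ryzen,
# so the identifier alone can't settle which silicon is underneath.
fields = {}
with open("/proc/cpuinfo") as f:
    for line in f:
        if not line.strip():                 # first blank line ends CPU 0
            break
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()

print(fields["vendor_id"], "Family", fields["cpu family"],
      "Model", fields["model"], "Stepping", fields["stepping"])
print("Model name:", fields["model name"])
```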
For a Threadripper vCPU, the GB score is pretty weak, similar to Intel Gold. At the least, an EPYC would give you ~850 single-core, and a Threadripper ~1200 single-core.
yes i agree
I bench YABS 24/7/365 unless it's a leap year.
Inception Hosting
2 CPU Cores
2 GB RAM (DDR4)
30 GB Pure NVMe SSD Disk space (RAID 1)
6000 GB Bandwidth @ 1 Gbit (shared)
1 x IPv4 address
1 x /64 IPv6
Free DDoS protection
Free DirectAdmin license on request
€25/year
Phoenix, AZ
Possibly my last post on this topic (my custom multi-disk rig).
fio before RAID
fio after RAID1
Note:
a. Some penalty in GB5 score: Zorin OS versus Ubuntu (kernel version, etc.)
b. I tried something today that I had not anticipated doing: setting up RAID1. Both disks are Kingston SATA, not quite rockets but good workhorses. Steps in a nutshell:
Reinstalled Linux (Ubuntu)
Set up RAID1 (2 x 240 GB SATA disks) --> read up on relevant posts; "RAID is not backup" is now etched in my head.
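For anyone retracing this, here's a minimal Python sketch of the mdadm side of those steps. The device names (/dev/sdb, /dev/sdc) and md0 are placeholders for your own disks, not the exact ones used above.

```python
# Minimal sketch of a two-disk RAID1 build with mdadm, driven from Python.
# /dev/sdb, /dev/sdc and md0 are placeholders -- point these at your own
# (blank) disks; mdadm will prompt if it finds existing data on them.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create the two-disk mirror.
run(["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2", "/dev/sdb", "/dev/sdc"])

# 2. Persist the array layout so it assembles on boot (Ubuntu paths).
scan = subprocess.run(["mdadm", "--detail", "--scan"],
                      capture_output=True, text=True, check=True)
with open("/etc/mdadm/mdadm.conf", "a") as f:
    f.write(scan.stdout)
run(["update-initramfs", "-u"])

# 3. Resync progress shows up in /proc/mdstat.
print(open("/proc/mdstat").read())
```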
blog | exploring visually |
YABS again on Gravelands KS-LE, but on ZFS RAID 1, 1TB
KS-LE in ROUBAAAIX with ZFS RAID 1
Wow, that's good I/O for HDD.
How much RAM needs to be set aside for this?
Default ZFS config; right now the system is using about 900MB including the bloatmox UI.
I don't really see much memory usage, but I am also not doing a lot of I/O operations.
I would give it a try if you've got 32GB systems or higher, but I won't do it on 16GB systems,
since on 16GB you're already giving 1GB away for the system and bloatmox, plus potentially another 1GB per TB of storage for ZFS.
I don't have exact figures, I just started using ZFS and it's amazing.
That means ZFS is going to eat up to 50% of the available memory for ARC by default. It does not necessarily release that easily, so even if memory consumption is low for now, it will rise to 16GB over time no matter what. If you can afford it, that's a good thing; if you might need that memory otherwise, you have to be careful not to lose the speed benefit afterwards because the system starts to swap a lot.
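If you want to watch this happen, OpenZFS on Linux publishes the ARC counters under /proc/spl/kstat/zfs/arcstats; here's a minimal sketch (assuming the usual kstat layout) to print the current size, the ceiling, and the hit rate:

```python
# Minimal sketch: read the ARC's current size and ceiling from the kstats
# OpenZFS exposes on Linux (assumes the ZFS module is loaded).
def arc_stats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:      # first two lines are headers
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

s = arc_stats()
gib = 1024 ** 3
print(f"ARC size : {s['size'] / gib:.2f} GiB")
print(f"ARC c_max: {s['c_max'] / gib:.2f} GiB")  # the 50%-of-RAM default ceiling
print(f"hit rate : {s['hits'] / (s['hits'] + s['misses']):.1%}")
```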
@Falzo I didn't search, but can you please tell me why you like ZFS for Proxmox?
I do not. Or at least I make that decision carefully, depending on how many VMs with how much RAM should go on there. As just said, ZFS potentially consumes lots of RAM for its ARC unless you restrict it. Also, the system cannot claim ARC back like it would with regular filecache, because it is controlled by ZFS and the kernel only sees blocked memory.
So it can be dangerous if you think you can commit 32GB to your VMs but ZFS is going to eat half of that.
With that said, one can obviously still benefit from ZFS if it's calculated/balanced properly, as the caching works pretty well. As an example, I have a SYS dedi with 4x2TB where I do not need the full 32GB for the VMs because it's rather a backup box.
Apart from the backup VM, I only have a Windows RDP with 10GB RAM on it and another 10GB for different game servers... limit ZFS to 8GB and there's still breathing room for the main system.
On the other hand, if you plan to have more VMs on a box and eventually even overcommit slightly on memory to take advantage of KSM and such, I probably would not use ZFS, and most likely no HDDs anymore anyway.
I was actually expecting that ZFS would release memory if it's getting thin.
But thanks for the reminder to set a cap at xGB and override the default one.
told you so ;-)
Theoretically, would there be a performance loss from capping RAM?
I'd say that heavily depends on your real-world workload.
The more RAM you can use for ARC, the better, simply because you can hold more data in there. However, if you move a 4GB file once and do not touch it afterwards, it might simply clog your cache for nothing. You want your cache to be hot and re-used frequently; most likely you can achieve that with a smaller one already 🤷♂️
For most regular use cases, I think you do not need 16GB on a 32GB machine, and if ARC is restricted to 8 or even 4GB you won't see or feel much of a difference performance-wise after all.
The 50% is a LIE, it used 90% of the whole system memory.
Applied this now on the machines and rebooted; we'll see if that's enough.
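For reference, here's a sketch of what applying such a cap can look like; the 4 GiB value is just an example, and the paths assume a stock OpenZFS-on-Linux setup like Proxmox:

```python
# Hedged sketch of capping ARC at 4 GiB (pick your own number). The sysfs
# write takes effect immediately; the modprobe.d line makes it persistent
# (on Proxmox/Ubuntu also run `update-initramfs -u` afterwards). Assumes
# root, and overwrites any existing /etc/modprobe.d/zfs.conf.
CAP = 4 * 1024 ** 3  # bytes

# Runtime: shrink the ARC ceiling on the fly.
with open("/sys/module/zfs/parameters/zfs_arc_max", "w") as f:
    f.write(str(CAP))

# Boot-time: module option read when the zfs module loads.
with open("/etc/modprobe.d/zfs.conf", "w") as f:
    f.write(f"options zfs zfs_arc_max={CAP}\n")
```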
It shouldn't, though. Did you check the cache stats before changing the max limit? Of course regular filecache will come on top, so it might have looked different... but yeah, as said before, it needs to be handled with a bit of care ;-)
The system was hitting 90%+, and KSM had already engaged.
So without setting a hard limit for ZFS, I guess any I/O-intensive application may get you banged anytime.
3-4GB should be fine though, I hope.
RawSRV.com, $9.50/month dedicated core VPS in Miami
Main selling points are the 10 Gbps (shared) unmetered bandwidth and Path.net DDoS protection.
Main use is probably for GRE tunnels and such, but I'm using it to send data to Dropbox from multiple servers.
I am a representative of Advin Servers
HostHatch HK revisit
Biennial deal
Very slight steal observed, but going strong after a few months. Lovely I/O.
Stratagem Tritium - limited to 200 Mbps. I believe plans higher than this are on 1 Gbps.
RackNerd. Today I observed high steal, so I decided to run YABS:
This is the lowest score I have ever seen from a Ryzen 3900X.
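If you want a number on that steal rather than eyeballing top, a small sketch sampling /proc/stat works:

```python
# Minimal sketch: sample /proc/stat twice and report steal% over the
# window (the 8th field on the "cpu" line is steal time, in jiffies).
import time

def cpu_times():
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

before = cpu_times()
time.sleep(5)
after = cpu_times()
delta = [b - a for a, b in zip(before, after)]
print(f"steal: {delta[7] / sum(delta):.1%}")   # index 7 == steal
```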