Is anyone aware of anything that could be done to improve the server's performance? I'd like it to be as good as possible. Thanks!
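Not much to go on without more detail, but one cheap thing I'd check first (just a sketch; the cpupower tool may need to be installed separately on your distro) is whether the CPU frequency governor is on performance rather than powersave:

# Show the current governor on core 0
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Switch all cores to the performance governor
cpupower frequency-set -g performance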
@corbpie said:
Has 1500 been passed on GB5 single?
Out of curiosity I ran another YABS (GB5 only). Close to breaking the 4 minute mile record but not quite Roger Bannister.
Multi seems to have dropped too.
Geekbench 5 Benchmark Test:
---------------------------------
Test            | Value
                |
Single Core     | 1483
Multi Core      | 3519
Full Test       | https://browser.geekbench.com/v5/cpu/6615273
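If anyone wants to repeat just the Geekbench part without waiting on fio and iperf, yabs has flags for that; if I'm reading the script right (worth double-checking against yabs.sh itself), it would be:

# -f skips the fio disk tests, -i skips the iperf network tests,
# leaving only the Geekbench 5 run
curl -sL yabs.sh | bash -s -- -f -i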
Guys! I still don't understand why there should be such a big difference in single core performance: just under 1500 inside the container versus just over 1700 outside of it. A gap of roughly 200 points is more than 10%. Why would cgroups and namespaces alone cause a greater than 10% difference on this test? Thanks!
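One way to start narrowing that down (a sketch only, assuming a cgroup v1 setup; on cgroup v2 the files are cpu.max and cpuset.cpus.effective instead) is to compare what the container is actually allowed to use against the host:

# On the host: governor and current clocks while the benchmark runs
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
grep MHz /proc/cpuinfo | sort | uniq -c

# Inside the container: is there a CPU quota or a restricted cpuset?
# (a quota of -1 means unlimited)
cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us /sys/fs/cgroup/cpu/cpu.cfs_period_us
cat /sys/fs/cgroup/cpuset/cpuset.cpus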
To match @vyas, I ran another yabs on the node to test the consistency of the single core score on the bare metal AX101.
The GB single core last time was 1719. This time 1710.
The other thing that was consistent on this particular test was the busy signals on the IPv4 iperf tests. Looks like Sunday evening in Mexico / early Monday morning in Europe might not be the best time for yabs iperf testing of IPv4 performance. IPv6 was wide open, though!
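When the public endpoints are busy, one workaround (just a sketch; the host and port below are placeholders, not the servers yabs actually uses) is to retry a manual iperf3 run a few times instead of re-running the whole script:

# Retry an IPv4 iperf3 run up to 5 times, 30 seconds apart
for i in 1 2 3 4 5; do
    iperf3 -4 -c iperf.example.net -p 5201 -P 8 -t 10 && break
    echo "attempt $i was busy, retrying in 30s..."
    sleep 30
done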
Wow! In at least one test LXC actually beats bare metal! I wasn't expecting that!
Here is Figure 9 from Reventh Thiruvallur Vangeepuram's thesis, Performance Comparison of Cassandra in LXC and Bare metal:
The figure shows CPU utilization for the mixed-load operation at 66% load. The 66% load is determined by the op rate of the mixed-load run, which stresses 11 GB of data in the Cassandra cluster with 150 threads given in the cassandra-stress command. The graph shows the average CPU utilization over 10 iterations. It shows that Cassandra uses more CPU in Linux Containers than on bare metal in both the 100% and 66% load cases: the highest CPU usage in the Linux Container is 88.93%, versus 71.77% on bare metal.
That's a 17% difference in favor of LXC over bare metal on this test! So a lot depends on the test being used.
But I would still like to know more about why Geekbench 5 disfavors LXC versus bare metal.
Apologies for possibly distracting the thread away from @Mason's lovely yabs. ⭐
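For reference, the kind of run the thesis describes would look roughly like this with cassandra-stress (a sketch only; the node address, ratio, and duration are placeholders, not values from the thesis):

# Mixed read/write load with 150 client threads against a test cluster
cassandra-stress mixed ratio\(write=1,read=3\) duration=10m \
    -rate threads=150 \
    -node 10.0.0.10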
@Not_Oles said: That's a 17% difference in favor of LXC over bare metal on this test!
Please correct me if I'm wrong, but doesn't the above indicate that the 17% CPU utilization difference is in favor of bare metal over LXC (the lower the CPU utilization, the better)? Basically the task uses 71.77% of the CPU on bare metal, but the same task uses 88.93% of the CPU when run in a container -- meaning the 17% difference is the CPU working harder (and having more overhead) within the container. At least that's how I'm interpreting the results.
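Putting my own back-of-the-envelope numbers on that reading (mine, not from the thesis itself):

echo "88.93 - 71.77" | bc              # 17.16 percentage points more CPU used in the container
echo "scale=4; 88.93 / 71.77" | bc     # ~1.24, i.e. roughly 24% more CPU burned for the same work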
Thank you! I was imagining it meant that LXC got more work done because it utilized the CPU for more of the time, i.e., that higher CPU utilization and lower idle time were better.
^^^ the iperfs are looking impressively stable
That point being smart was pretty much self-proclaimed. Perhaps RaveX could shed some light since he has the same thing running. The GB4 single core is pretty good.
Also, again: if there's a chance to compare kernel 5.11 vs Proxmox's 5.4 kernel...
drServer - Intel® Atom™ C2750 (4-core, 2.4 GHz), 8 GB DDR3, 1x 200 GB SSD, $13/month
root@sv:~# curl -sL yabs.sh | bash -s -- -9
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
# Yet-Another-Bench-Script #
# v2020-12-29 #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
Abnormal disk speeds...
@Not_Oles looks excellent already! You can try setting the disk cache to writeback too.
Bad idea, unless your cache device is redundant and hopefully backed by some kind of battery, sir.
Yes... so should we set it to writethrough?
In Proxmox there is a 'writeback' and a 'writeback (unsafe)' option. I wonder how they differ; do you have any idea, @Not_Oles?
I also read elsewhere that Virtualizor uses writeback as its default cache option, with some 'optimizations'... not sure if they have a way to use writeback safely.
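As far as I can tell, the difference is that Proxmox's 'writeback' maps to QEMU's cache=writeback, which still honours flush requests from the guest, while 'writeback (unsafe)' maps to cache=unsafe, which ignores flushes entirely; that's what makes the second one risky on power loss. If you want to try plain writeback on a test VM, the cache option ends up on the disk line of the VM config (VMID 100, the storage name, and the disk name below are only examples):

# Example only: adjust VMID, storage, and disk names to your setup
grep scsi0 /etc/pve/qemu-server/100.conf
#   scsi0: local-lvm:vm-100-disk-0,size=32G
# Add cache=writeback to that line (or set it in the GUI when editing the hard disk):
#   scsi0: local-lvm:vm-100-disk-0,cache=writeback,size=32G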
Sorry to the OP for being off-topic here. Do you use Virtualizor? Can it work with kernel 5.11 in production?
I use Proxmox + Virtualizor, and the kernel is 5.4.
hostvds
$1.59 per month
Don't be fooled by the "data centers" list: only Russia is actually available.
Performance-wise, a really bad, heavily throttled CPU (imagine VirMach being virmached).
Note that the score is GB4, not GB5!
It took hours just to finish installing a simple web server panel.
Cape Town ZappieHost
Auckland ZappieHost
Nice CPU and network, but the SAS disk speeds are very low ... Netcup RS 2000 G8 SAS
The fio 64k disk speed test (IOPS) has doubled.
Netcup RS1000G9
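For anyone wanting to sanity-check just the 64k number outside of yabs, something along these lines should be in the same ballpark (a sketch, not yabs's exact parameters; the test file name and size are arbitrary):

# 50/50 random read/write at 64k block size, direct I/O, 30 second run
fio --name=rand64k --filename=./fio-test.bin --size=2G \
    --ioengine=libaio --direct=1 --rw=randrw --rwmixread=50 \
    --bs=64k --iodepth=64 --numjobs=2 --runtime=30 --time_based \
    --group_reporting
rm ./fio-test.bin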
Nexusbytes from new NL location.
It's too premium to keep for idling.
ApeWeb VM2
Prem
What else can I do as the first command on my new VM? Oh yes, we will run a YABS of course! (It's also the last command it will run on that install!)
naranja.tech KVM-NVME-16GB - NEW YEAR OFFER
Yabs inside a Proxmox KVM. Bare metal is running 5.4.98-1-pve. The kernel in the KVM was updated to the latest Proxmox-edge-kernel, 5.11.2-1-pve.
With 5.11.2-1-pve the KVM single core GB score was 1736.
Previously, on the bare metal, the GB single core score was 1719. That 1719 was with whatever kernel preceded the current stable Proxmox kernel, 5.4.98-1-pve.
It seems the 5.11 kernel might perform a bit better than 5.4. Unless I'm confused, which happens often. Luckily I have you guys to catch my mistakes!
Maybe I am gonna try 5.11 on the bare metal node.
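For the bare metal node, this is roughly how I'd expect the edge kernel install to go (a sketch; the release filename below is only an example and should be taken from the pve-edge-kernel releases page, and ZFS-root systems would use proxmox-boot-tool instead of update-grub):

# Grab a 5.11 build from https://github.com/fabianishere/pve-edge-kernel/releases
wget https://github.com/fabianishere/pve-edge-kernel/releases/download/v5.11.2/pve-kernel-5.11.2-1-pve_5.11.2-1_amd64.deb

# Install it, update the bootloader, and reboot into the new kernel
apt install ./pve-kernel-5.11.2-1-pve_5.11.2-1_amd64.deb
update-grub
reboot

# After the reboot, confirm which kernel is running
uname -r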