Yet Another Benchmark Script (YABS) - Linux Benchmarking Script using dd, iperf, & Geekbench
Some of you guys probably know about this little script already, but I just wanted to make a thread here to make others aware of it and gather input/suggestions to make it better!
Yet Another Benchmark Script (YABS) was created a few months back initially as an automated iperf script that I would use myself, then it evolved to include geekbench and dd tests as well. I then decided to tidy it up a bit and add it to GitHub. Here is the GitHub repo where you can review the script and get some additional information: https://github.com/masonr/yet-another-bench-script
Purpose: The purpose of this script is to quickly gauge the performance of a Linux-based server by benchmarking network performance via iperf3, CPU and overall system performance via Geekbench 4, and simple sequential disk performance via dd. The script is designed to not require any dependencies - either compiled or installed - nor admin privileges to run.
Benchmarking Tests
The script performs three main tests:
- dd - to estimate sequential disk performance (disclaimer: read speeds may be heavily inflated by the host's cache; a representative invocation is sketched after this list)
- iperf - to estimate network performance using parallel threads, testing speeds in both directions (download and upload). Both IPv4 and IPv6 iperf tests are conducted (if available).
- Geekbench 4 - to estimate total system performance by running a vast array of different CPU/memory intensive benchmarks. Both single and multi-core scores are given along with a link to view the complete system results. The URL to claim the test and add it to your Geekbench profile is written to disk.
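For the curious, the dd portion boils down to a couple of invocations along these lines (a sketch of the typical approach; the exact block size and count used by the script may differ):

# sequential write -- conv=fdatasync flushes data to disk so the result isn't purely cache
dd if=/dev/zero of=test_file bs=64k count=16k conv=fdatasync
# sequential read -- can be heavily inflated by the host's page cache, hence the disclaimer above
dd if=test_file of=/dev/null bs=64k
rm -f test_file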
Running the Script
The script is very easy to run and does not require any external dependencies to be installed nor elevated privileges to run:
curl -sL yabs.sh | bash
The script has been tested on CentOS 7, Debian 9, Debian 10, Fedora 30, Ubuntu 16.04, and Ubuntu 18.04.
Example Output
Here is example output from the script:
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2020-01-08                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Wed 08 Jan 2020 07:33:21 PM UTC

Basic System Information:
---------------------------------
Processor  : Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz
CPU cores  : 8 @ 4098.759 MHz
AES-NI     : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM        : 31Gi
Swap       : 0B
Disk       : 221G

dd Sequential Disk Speed Tests:
---------------------------------
      | Test 1    | Test 2    | Test 3    | Avg
      |           |           |           |
Write | 291 MB/s  | 286 MB/s  | 281 MB/s  | 286.00 MB/s
Read  | 179 MB/s  | 188 MB/s  | 179 MB/s  | 182.00 MB/s

iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider                 | Location (Link)           | Send Speed     | Recv Speed
                         |                           |                |
Bouygues Telecom         | Paris, FR (10G)           | 2.93 Gbits/sec | 7.80 Gbits/sec
Online.net               | Paris, FR (10G)           | 7.79 Gbits/sec | 5.20 Gbits/sec
Severius                 | The Netherlands (10G)     | 8.98 Gbits/sec | 2.53 Gbits/sec
Worldstream              | The Netherlands (10G)     | 8.65 Gbits/sec | 8.57 Gbits/sec
wilhelm.tel              | Hamburg, DE (10G)         | 7.80 Gbits/sec | 9.03 Gbits/sec
Biznet                   | Bogor, Indonesia (1G)     | 752 Mbits/sec  | busy
Hostkey                  | Moscow, RU (1G)           | 905 Mbits/sec  | 449 Mbits/sec
Vultr                    | Piscataway, NJ, US (1G)   | 448 Mbits/sec  | 51.6 Mbits/sec
Velocity Online          | Tallahassee, FL, US (10G) | 1.74 Gbits/sec | 1.61 Gbits/sec
Airstream Communications | Eau Claire, WI, US (10G)  | 1.61 Gbits/sec | 106 Mbits/sec
Hurricane Electric       | Fremont, CA, US (10G)     | 28.2 Mbits/sec | 476 Mbits/sec

iperf3 Network Speed Tests (IPv6):
---------------------------------
Provider                 | Location (Link)           | Send Speed     | Recv Speed
                         |                           |                |
Bouygues Telecom         | Paris, FR (10G)           | 7.78 Gbits/sec | 6.00 Gbits/sec
Online.net               | Paris, FR (10G)           | 2.86 Gbits/sec | 5.74 Gbits/sec
Severius                 | The Netherlands (10G)     | 6.96 Gbits/sec | 2.38 Gbits/sec
Worldstream              | The Netherlands (10G)     | 7.29 Gbits/sec | 6.02 Gbits/sec
wilhelm.tel              | Hamburg, DE (10G)         | 4.64 Gbits/sec | 8.93 Gbits/sec
Vultr                    | Piscataway, NJ, US (1G)   | 97.5 Mbits/sec | 37.3 Mbits/sec
Airstream Communications | Eau Claire, WI, US (10G)  | busy           | busy
Hurricane Electric       | Fremont, CA, US (10G)     | 348 Mbits/sec  | 505 Mbits/sec

Geekbench 4 Benchmark Test:
---------------------------------
Test        | Value
            |
Single Core | 5714
Multi Core  | 19758
Full Test   | https://browser.geekbench.com/v4/cpu/15115430
Feedback
I welcome any feedback you may have or any bug reports. Feel free to test it out and post your results.
-Mason
Comments
In a short time at the other place I have seen users ask for YABS more often than other BMs.
blog | exploring visually |
Nice dude! Way to stay on top of it!
BanditHost - NVMe VPS Hosting
Some hashing + ioping would be nice as well
I bench YABS 24/7/365 unless it's a leap year.
I think fio is a much better test, but I already know about the compiling issues, so if you could somehow make that happen, that would be the best! I second @cybertech's ioping suggestion.
Finally, is there a reason why you don't use Geekbench 5? From what I understand, Geekbench 4 may overweight the RAM portion of the score, and that has been changed in Geekbench 5.
Deals and Reviews: LowEndBoxes Review | Avoid dodgy providers with The LEBRE Whitelist | Free hosting (with conditions): Evolution-Host, NanoKVM, FreeMach, ServedEZ | Get expert copyediting and copywriting help at The Write Flow
If possible, add iperf3 Network Speed Tests to South America (Brazil).
Any specific hash tests? Which ones are most useful?
Working on fio! If I can get that working, that'd be ideal even if I have to maintain a 32-bit and 64-bit version.
ioping was removed because it added unneeded complexity to the script. My previous iteration of YABS used ioping for read tests (while still using dd for write). I put ioping in there because I thought it'd be less influenced by the host's cache than dd, but I saw the same anomalies that I experienced with dd in the end. So to make things more simple, I took it out. I may revisit that decision if fio doesn't pan out.
Mainly because Geekbench 4 is what I've generally been using for all my server testing and comparisons. It does make sense to update it to use Geekbench 5, though, so I'll probably add that in soon as the default and bake in a switch that'll still allow you to run Geekbench 4 tests, if desired.
Head Janitor @ LES • About • Rules • Support
Going to attempt to compile a static build of fio using Holy Build Box and/or the Meson Build system this weekend. Maybe one of these two tools will help since my initial efforts have proven fruitless.
Head Janitor @ LES • About • Rules • Support
@Mason Compiling statically isn't that big of a deal. Just do a static build on Deb 7 and one on CentOS, and either of those two should work pretty much anywhere given that everything but libc is compiled into it. If you need help, I might be able to assist.
My pronouns are like/subscribe.
Cheers, might take you up on that. Can't for the life of me figure it out, but maybe that's because I am using a recent Debian version. Compiled on a 4.15 kernel and it won't work on a 2.6 kernel machine. Compiled on a 2.6 kernel and it won't work on the 4.15 machine. I'll give Debian 7 and CentOS a chance and call in the cavalry if I can't figure it out.
Issues seem to be coming from the libaio and libc6 libraries.
Head Janitor @ LES • About • Rules • Support
Those have different major revisions on each system. Debian 7 is the lowest-end baseline for the required libc with the standard libraries, whereas a CentOS build covers the RHEL/RPM side.
My pronouns are like/subscribe.
Are you encountering issues with the libraries even when statically linking them? Do you get an error message?
Daniel15 | https://d.sb/. List of all my VPSes: https://d.sb/servers
dnstools.ws - DNS lookups, pings, and traceroutes from 30 locations worldwide.
Deb 7 did the trick. Compiled it statically in a local vm and it's working on every vps+dedi I've tested so far (even an ancient cociu VM). Debian, Ubuntu, CentOS, and Fedora all tested. I assume I'll probably have to do a 32-bit version as well to cover my bases. Thanks for leading me in the right direction.
Yes, they were statically linked, but I was hitting 'illegal instruction' and other ominous error messages when running the compiled binary on other machines. Compiling on Deb 7 seems to have done the trick, however.
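For anyone who wants to try the same, a static fio build on an old Debian 7 box looks roughly like this (a sketch; package names and flags may vary slightly):

# rough sketch of a static fio build on an old Debian box
apt-get install -y build-essential git libaio-dev zlib1g-dev
git clone https://github.com/axboe/fio.git && cd fio
./configure --build-static    # fio's configure exposes a static-build option
make -j"$(nproc)"
file fio                      # should report "statically linked"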
With this hurdle overcometh, I expect to have fio in YABS in place of dd/ioping by the end of the week, after I do a whole bunch of testing. For the random r/w test, I think I'll be rolling with the recommended settings from BinaryLane's article --
But I'll be reducing the file size to 1G (instead of 4G) and also will be adding in a timeout of 60 seconds to cut the command off if it's taking longer than that. I've already tested the timeout and it still spits out valid info if terminated, so it's fine if it stops early (edit: looks like the runtime flag has the same effect, but exits more cleanly). Add in the minimal flag to streamline things and we're golden.
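Putting that together, the command should end up looking roughly like this (the base flags are paraphrased from the BinaryLane example, so treat it as a sketch rather than the final version):

# planned random r/w test -- 1G file, capped at 60 seconds, minimal output for easier parsing
fio --name=test --filename=fiotest --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --randrepeat=1 --rw=randrw --rwmixread=75 --bs=4k --iodepth=64 \
    --size=1G --runtime=60 --minimal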
Does anyone else have any other recommendations for fio settings that would make sense holistically, since it's gonna be a "one size fits all" test?
Head Janitor @ LES • About • Rules • Support
Support for *BSD would be good
fio implementation is going smoothly so far. Need to do a bunch of compatibility/QA testing before I commit the changes to the repo, but here's a preview of the results --
Currently implementing four fio tests (sequential read, sequential write, random read, and random write). Here are the fio options I'm passing -- I welcome any feedback or suggested changes to these commands as I'm a fio noob.
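Roughly speaking, the four runs follow this pattern (the block sizes, file size, and iodepth below are placeholders rather than final values):

# sketch: sequential r/w with a large block size, random r/w with 4k
for MODE in read write; do
    fio --name=seq_$MODE --filename=fiotest --rw=$MODE --bs=1M --iodepth=64 \
        --size=1G --runtime=60 --ioengine=libaio --direct=1 --gtod_reduce=1 --minimal
    fio --name=rand_$MODE --filename=fiotest --rw=rand$MODE --bs=4k --iodepth=64 \
        --size=1G --runtime=60 --ioengine=libaio --direct=1 --gtod_reduce=1 --minimal
done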
Head Janitor @ LES • About • Rules • Support
That looks suitable for a VPS (shared disk) so as not to punish neighbors, since it's only about 4 minutes that the node's performance will tank for other users.
If you want a more real-world test, consider putting in a 3-second test that runs every 5 minutes for 24 hours and then graphs the results over the day; that's far more indicative of real-world use and performance.
https://inceptionhosting.com
Please do not use the PM system here for Inception Hosting support issues.
Yeah, tried to make it as short as possible while still getting a valid result. For most systems, it hopefully won't take the full 60 seconds to read or write the 1G file, so it should complete much sooner. The runtime is more of a precaution to cut off any tests that are taking an extreme amount of time.
That'd be interesting. Maybe in YABS^2, I'll make a version of this that characterizes performance over a 24-hour period. Would be pretty neat.
I've parsed and added in IOPS as well for each of the four tests since fio spits those out and they might be useful for some --
Example:
Head Janitor @ LES • About • Rules • Support
Looking good!
https://inceptionhosting.com
Please do not use the PM system here for Inception Hosting support issues.
Nice work, @Mason. I think a 1G file is acceptable, although I probably will still go with a 4GB test file for my own benching mainly because a larger test file will more likely even out spikes, if any.
I am not sure how to do a 3-second test that can capture everything lol. My own homebrew script takes several minutes to complete all my tests. Well, I guess YABS is really not meant as an extended test because it needs to go with the lowest common denominator. Right now, with both sequential and random speeds plus IOPS, it is good enough.
My recommendation for the fio command is that you can use the entire BinaryLane sample, but change --rwmixread=75 to --rwmixread=50 so that the test does 50% reads and 50% writes. I think it is better to read and write the same amount of data.
Deals and Reviews: LowEndBoxes Review | Avoid dodgy providers with The LEBRE Whitelist | Free hosting (with conditions): Evolution-Host, NanoKVM, FreeMach, ServedEZ | Get expert copyediting and copywriting help at The Write Flow
Personally, I am not sure how a 1GB or 4GB or 60-second test captures anything you are ever likely to use a VPS for in the real world.
https://inceptionhosting.com
Please do not use the PM system here for Inception Hosting support issues.
The same can be said for all kinds of research. That's why we design methodologies that increase the probability of being correct.
Deals and Reviews: LowEndBoxes Review | Avoid dodgy providers with The LEBRE Whitelist | Free hosting (with conditions): Evolution-Host, NanoKVM, FreeMach, ServedEZ | Get expert copyediting and copywriting help at The Write Flow
Thanks for the feedback @poisson and @AnthonySmith, mucho appreciated! After having a lengthy discussion with Falzo and thinking about some of the feedback here, I've implemented some changes:
Here's an example of the current output using an ExtraVM NVMe VPS --
Another example with a Dedi w/ SSD (was also doing other tasks at the time...) --
I'm debating whether or not to add another set of three tests for 1M block size, but I think I'll keep it as is for now so the test doesn't take all damn day to run. I still have a ton of testing to do to make sure all is well when testing this on other OSes before it hits the repo.
Head Janitor @ LES • About • Rules • Support
@Falzo is indeed the man to ask. This looks good enough to me, although for a general benchmark I would say mixed read-write would suffice. It is good to know pure read and write IOPS and bandwidth, but you really want to know the mixed performance because that is the environment most workloads will run in. No harm leaving them, but if you want to shave seconds off the overall testing time, those would be on the chopping block for me.
Actually, you can work arguments into your script, right? You can offer 4k, 64k, 512k and 1M block sizes and allow them to be tested if the correct arguments are passed. So the default will be 4k and 64k, but the full set can be activated if necessary.
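Something along these lines would do it (just a sketch, and the -f flag name is made up):

# sketch: default to 4k + 64k, add -f to run the full set of block sizes
BLOCK_SIZES="4k 64k"
while getopts "f" opt; do
    case "$opt" in
        f) BLOCK_SIZES="4k 64k 512k 1m" ;;
    esac
done
for BS in $BLOCK_SIZES; do
    echo "run the fio test with --bs=$BS here"
done

And since most people run the script via curl piped to bash, arguments can still be passed with something like curl -sL yabs.sh | bash -s -- -f.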
I wonder if we can determine numbers to give an indication of excellent, good, fair and poor IOPS for different block sizes. This will require some testing to determine appropriate cutoffs that are reasonable, but it will help people interpret the numbers more meaningfully.
Deals and Reviews: LowEndBoxes Review | Avoid dodgy providers with The LEBRE Whitelist | Free hosting (with conditions): Evolution-Host, NanoKVM, FreeMach, ServedEZ | Get expert copyediting and copywriting help at The Write Flow
ExtraVM 100K Write IOPS
I bench YABS 24/7/365 unless it's a leap year.
I got a similar sentiment from Falzo as well, so I think I may be overestimating the importance of including read/write-only tests. If I cut those out, I can instead put in 4 tests using mixed RW (50/50) with the block sizes you pointed out. Let me throw something together really quick and see how it looks
I personally don't like the idea of adding in (bad/good/great) indicators of performance from these tests, for several reasons. The cutoffs between them would be arbitrary and essentially our opinion of what "good" means. In addition, what's considered good today might be considered poor 2 months from now as hardware and expectations change. If it's something a lot of people want, then I'll reconsider.
Head Janitor @ LES • About • Rules • Support
Please don't. Morons who pay $4/yr for a special which is capped to 2 GHz tend to bitch a lot about these bogus statements.
My pronouns are like/subscribe.
All 4 tests are random rw tests with varying block sizes. Here's what it looks like
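Under the hood it's basically one mixed run per block size, roughly like this (a sketch; the exact size/iodepth/runtime values may differ from what ends up in the script):

# one 50/50 random r/w run per block size
for BS in 4k 64k 512k 1m; do
    fio --name=rand_rw_$BS --filename=fiotest --rw=randrw --rwmixread=50 \
        --bs=$BS --iodepth=64 --size=1G --runtime=60 \
        --ioengine=libaio --direct=1 --gtod_reduce=1 --minimal
done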
Head Janitor @ LES • About • Rules • Support
It's looking great @Mason. I'm not sure if you are describing the test anywhere in the script output, but may I suggest adding "(mixed r/w 50/50)" to the table header above? So it reads:
fio Disk Speed Tests (mixed r/w 50/50)
Thanks @beagle! I changed it to "fio Random R+W Disk Tests" shortly after posting what the current output looks like, but I like your suggestion better so I'm going to roll with that! Thanks again
A family emergency came up this past weekend, so I haven't worked on it too much. Prior to the weekend, I did get all the 32-bit juices flowing, so it's 32/64-bit compatible now. I also did some testing with IPv6-only VMs and made some changes to improve compatibility on that front. Unfortunately, both the GitHub (raw.githubusercontent.com via Fastly) and Geekbench (cdn.geekbench.com) CDNs are not IPv6-ready for whatever reason. I did manage to get a workaround going for GitHub, but not Geekbench. So for now, unless I add Geekbench's files to my repo (which I don't think I'm allowed to do), Geekbench tests will only be performed on IPv4-enabled machines (either native or NAT).
Head Janitor @ LES • About • Rules • Support
You could always haproxy it externally, because hosts.cx is IPv4-only as well.
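Something like this on a dual-stack box would do the trick (just a sketch; hostnames, ports, and timeouts are illustrative):

# minimal TCP pass-through so IPv6-only clients can reach the IPv4-only CDN
cat > /etc/haproxy/haproxy.cfg <<'EOF'
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend geekbench_v6
    bind :::443 v6only
    default_backend geekbench_v4

backend geekbench_v4
    server cdn cdn.geekbench.com:443
EOF
systemctl restart haproxy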
My pronouns are like/subscribe.
You're very unlikely to hit that in real life though - usually you have far more reads than writes, except in some particular cases (e.g. a database server could be write-heavy if it's primarily handling transactions instead of analytics).
Daniel15 | https://d.sb/. List of all my VPSes: https://d.sb/servers
dnstools.ws - DNS lookups, pings, and traceroutes from 30 locations worldwide.