Why does yabs use Holy Build Box?

Not_Oles (Hosting Provider, Content Writer)

Hello!

I'm trying to understand why yabs uses Holy Build Box. So I am asking the LES yabs support desk. :)

The Security Notice in the README.md for yabs says: "The network (iperf3) and disk (fio) tests use binaries that are compiled by myself utilizing a Holy Build Box compiliation [sic] environment to ensure binary portability."

When I look at this Holy Build Box page, it says, "the Holy Build Box approach is to statically link to all dependencies, except for glibc and other system libraries that are found on pretty much every Linux distribution, such as libpthread and libm."

However, when I sneak in while yabs is in action :) and run ldd on the fio and iperf3 binaries installed by yabs, ldd tells me (please see output below) that both of these binaries are dynamically linked. ldd then gives me a list of libraries, all of which seem to be installed on the box running yabs.

I guess the fio and iperf3 binaries are downloaded by yabs since neither of these is installed on the host.

I hear that linux-vdso is part of the kernel and so presumably wouldn't be statically linked. libpthread and libm are each specifically mentioned above as "system libraries that are found on pretty much every Linux distribution", so neither would be expected to be statically linked either. I am guessing ld-linux and libdl also would be on every system. It seems that librt has to do with realtime and is POSIX, so maybe librt would be everywhere too.

Could all this mean that the reason to use Holy Build Box is only to ensure the use of an old version of the C library in libc.so.6, and that no dependencies are statically linked for either the fio or the iperf3 provided by yabs? Apparently not, because the glibc version shown below seems to be 2.35, which is the current version.

So, why use the Holy Build Box if nothing is statically linked and we also are using the current version of glibc instead of an old version? What am I missing? :)
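
One thing I suppose I could also try is asking the binary itself what it declares it needs, rather than asking ldd how those needs happen to resolve on my box. An untested sketch (the ./fio path is just wherever yabs unpacked the binary):

# List the shared libraries the binary itself asks for (DT_NEEDED),
# without resolving them against the local system the way ldd does
readelf -d ./fio | grep NEEDED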

Thanks in advance for any help! Best wishes and kindest regards from Sonora! 🗽🇺🇸🇲🇽🏜️

root@darkstar:~/test/2022-04-22T05_46_41+00_00/disk# ldd fio
        linux-vdso.so.1 (0x00007ffd22196000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f334efe2000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f334efdd000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f334eef9000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f334eef4000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f334ecda000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f334f01c000)
root@darkstar:~/test/2022-04-22T05_46_41+00_00/disk# file fio
fio: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=da93aaa4b5346d3ee4d4f9ed1d6d04aeb4aa279b, with debug_info, not stripped
root@darkstar:~/test/2022-04-22T05_46_41+00_00/disk# 
root@darkstar:~/test/2022-04-22T05_46_41+00_00/iperf# ldd iperf3 
        linux-vdso.so.1 (0x00007fff4daaa000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f2292531000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f229252c000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f2292448000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f229222e000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f229256b000)
root@darkstar:~/test/2022-04-22T05_46_41+00_00/iperf# file iperf3 
iperf3: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=7354764742f5217d04624d477d5e940fa5850ec2, with debug_info, not stripped
root@darkstar:~/test/2022-04-22T05_46_41+00_00/iperf#
root@darkstar:~# which fio
which: no fio in (/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/lib64/libexec/kf5:/usr/lib64/qt5/bin)
root@darkstar:~# which iperf3
which: no iperf3 in (/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/lib64/libexec/kf5:/usr/lib64/qt5/bin)
root@darkstar:~# 
root@darkstar:~# /lib64/libc.so.6
GNU C Library (GNU libc) stable release version 2.35.
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Compiled by GNU CC version 11.2.0.
libc ABIs: UNIQUE IFUNC ABSOLUTE
For bug reporting instructions, please see:
<https://www.gnu.org/software/libc/bugs.html>.
root@darkstar:~# 

I hope everyone gets the servers they want!

Thanked by (2): pikachu, Mason

Comments

  • edited April 2022

    YABS is just a script that uses third-party tools to collect relevant data. The YABS developer does not develop those third-party tools and therefore can't be responsible for what is included in them. Address your concerns to the right crowd/direction:

    https://fio.readthedocs.io/en/latest/fio_doc.html
    https://en.wikipedia.org/wiki/Ldd_(Unix)
    https://iperf.fr/contact.php

  • @Not_Oles

    By the way, there's also this page:

    https://github.com/masonr/yet-another-bench-script/blob/master/bin/README.md

    My guess is that @Mason is using Holy Build Box in order to have a less idiosyncratic (more portable) build environment.

    As far as I can tell, there's no particular claim about just how static the resulting binaries are (will be).

    Given what you posted above, the libraries linked to seem to be very generally available on any reasonably recent Linux. (And given this, YABS would probably fail on a much older version of Slackware.)

    Thanked by (1): Mason

    "A single swap file or partition may be up to 128 MB in size. [...] [I]f you need 256 MB of swap, you can create two 128-MB swap partitions." (M. Welsh & L. Kaufman, Running Linux, 2e, 1996, p. 49)

  • havoc (OG, Content Writer)

    Are you concerned about security or performance impact from old libraries?

    If you're gonna pipe mystery scripts into bash you'd need to nuke the install afterwards anyway imo. And on the performance side I'd venture it's more about consistency & comparability.

  • @havoc said:
    If you're gonna pipe mystery scripts into bash you'd need to nuke the install afterwards anyway imo.

    Just as a remark, YABS is pretty transparent, and it cleans up after itself.

    Whenever I run YABS (which isn't often), I use local binaries for fio and iperf3, so I haven't wondered about just how static the downloaded binaries would have been.

    "A single swap file or partition may be up to 128 MB in size. [...] [I]f you need 256 MB of swap, you can create two 128-MB swap partitions." (M. Welsh & L. Kaufman, Running Linux, 2e, 1996, p. 49)

  • I suspect it's just to try and learn what the terms mean and understand things?

    My guess though is maybe things changed a lot since that text was written, or maybe it was something they intended to do and never did.

    I'm sure ~1/4 of my code comments become obsolete within a year and probably still kick around in readmes.

    XR Developer | https://redironlabs.com

  • You are checking the one produced by Holy Build Box, and therefore you shouldn't see any dynamically linked libraries beyond those standard libraries.

    For comparison, you can check the fio and iperf3 from your system (after you install them), and you'll clearly see that they need other dynamically linked libraries:

    $ ldd /usr/bin/fio
        linux-vdso.so.1 (0x00007ffff61d4000)
        libnuma.so.1 => /lib64/libnuma.so.1 (0x00007fac4a00d000)
        librt.so.1 => /lib64/librt.so.1 (0x00007fac4a008000)
        libz.so.1 => /lib64/libz.so.1 (0x00007fac49fee000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fac49fe9000)
        libm.so.6 => /lib64/libm.so.6 (0x00007fac49f0d000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007fac49f08000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fac49cfc000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fac4a263000)
    $ ldd /usr/bin/iperf3 
        linux-vdso.so.1 (0x00007ffccf499000)
        libiperf.so.0 => /lib64/libiperf.so.0 (0x00007f43fe193000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f43fdf89000)
        libssl.so.1.1 => /lib64/libssl.so.1.1 (0x00007f43fdeec000)
        libcrypto.so.1.1 => /lib64/libcrypto.so.1.1 (0x00007f43fdbfe000)
        libsctp.so.1 => /lib64/libsctp.so.1 (0x00007f43fdbf8000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f43fdb1c000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f43fe1ee000)
        libz.so.1 => /lib64/libz.so.1 (0x00007f43fdb00000)
    
    Thanked by (2): Mason, Not_Oles
  • Mason (Administrator, OG)
    edited April 2022

    tl;dr - shit didn't work reliably before HBB, but shit worked flawlessly after switching to HBB

    I'm not really an expert on this stuff and I won't pretend I know what's going on under the hood. In the beginning of YABS, there was a TON of trial and error getting things to run. I had three threads going where I was soliciting user feedback and incorporating changes to the script on a near-daily basis:

    LES - https://lowendspirit.com/discussion/466/yet-another-benchmark-script-yabs-linux-benchmarking-script-using-dd-iperf-geekbench/p1
    OGF - https://lowendtalk.com/discussion/160627/yet-another-benchmark-script-yabs-linux-benchmarking-script-using-dd-iperf-geekbench/p1
    H(B/T) - https://hostedtalk.net/t/yet-another-benchmark-script-yabs-linux-benchmarking-script-using-fio-iperf-geekbench/3099

    Now... to make YABS widely accessible and available for almost all Linux-based systems, the binaries for the performance tests (fio & iperf3) needed to be provided by the script somehow. Basically, I didn't want the user to have to install anything beforehand or need root permissions to run the script, but I also wanted to use the latest and greatest performance tools and not have to rely on something pre-installed (e.g. dd instead of fio) or something more simplistic (e.g. wget/speedtest.net instead of iperf3).

    An early version of yabs did have static binaries that were compiled traditionally -- https://github.com/masonr/yet-another-bench-script/tree/a6ca7214aa3a28fcf3dbafbdee0e985eab9f78cb/bin. This first attempt used the traditional method of building static binaries (compiling with the static options, etc.), but ended up having a ton of compatibility problems on various systems (the three threads mentioned above have lots of examples of this -- illegal instructions, failures to locate files, libc conflicts, etc.). So, I searched for alternative means, landed on Holy Build Box, and decided to give it a shot.

    Basically, you can think of HBB as a way to build a "hybrid" binary. Not fully dynamic, not fully static. It helped resolve the issues I was having and bridged the gap by baking in all the libraries the program needs, without statically including the common libraries shared by all Linux flavors and versions. As @zxrlha points out above, these HBB-built binaries have far fewer dynamic links than the package-manager versions. ldd isn't going to show you the libraries that HBB statically linked into the binary.
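
    If you're curious what that looks like in practice, the recipe is basically the stock pattern from the HBB docs -- something like the following sketch (image name and paths quoted from memory, so treat the bin/README.md linked above as the authoritative steps):

    # Compile inside the Holy Build Box container; the activate-exec
    # wrapper sets CFLAGS/LDFLAGS so that everything except glibc and
    # friends (libm, libpthread, etc.) gets linked statically
    docker run -t -i --rm \
      -v "$(pwd)":/io \
      phusion/holy-build-box-64:latest \
      /hbb_exe/activate-exec \
      bash -x -c 'cd /io && ./configure && make'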

    Here's a quick comparison test on an Ubuntu box:

    YABS-provided, HBB-compiled fio binary:

    ~$ ldd fio_x64
            linux-vdso.so.1 (0x00007fff98971000)
            librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007efe376a1000)
            libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007efe3767e000)
            libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007efe3752f000)
            libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007efe37529000)
            libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007efe37337000)
            /lib64/ld-linux-x86-64.so.2 (0x00007efe376ba000)
    

    apt-get installed (Ubuntu 20.04) binary:

    ~$ ldd $(which fio)
            linux-vdso.so.1 (0x00007ffeb47a2000)
            libgfapi.so.0 => /usr/lib/x86_64-linux-gnu/libgfapi.so.0 (0x00007feb5e77d000)
            librbd.so.1 => /usr/lib/x86_64-linux-gnu/librbd.so.1 (0x00007feb5e209000)
            librados.so.2 => /usr/lib/x86_64-linux-gnu/librados.so.2 (0x00007feb5e0c5000)
            libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 (0x00007feb5e0b8000)
            librdmacm.so.1 => /usr/lib/x86_64-linux-gnu/librdmacm.so.1 (0x00007feb5e099000)
            libibverbs.so.1 => /usr/lib/x86_64-linux-gnu/libibverbs.so.1 (0x00007feb5e07a000)
            librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007feb5e06e000)
            libaio.so.1 => /usr/lib/x86_64-linux-gnu/libaio.so.1 (0x00007feb5e069000)
            libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007feb5e04d000)
            libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007feb5defe000)
            libmvec.so.1 => /lib/x86_64-linux-gnu/libmvec.so.1 (0x00007feb5ded2000)
            libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007feb5deaf000)
            libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007feb5dea7000)
            libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007feb5dcb5000)
            libtirpc.so.3 => /lib/x86_64-linux-gnu/libtirpc.so.3 (0x00007feb5dc86000)
            libacl.so.1 => /usr/lib/x86_64-linux-gnu/libacl.so.1 (0x00007feb5dc7b000)
            libglusterfs.so.0 => /usr/lib/x86_64-linux-gnu/libglusterfs.so.0 (0x00007feb5db5e000)
            libgfrpc.so.0 => /usr/lib/x86_64-linux-gnu/libgfrpc.so.0 (0x00007feb5db1d000)
            libgfxdr.so.0 => /usr/lib/x86_64-linux-gnu/libgfxdr.so.0 (0x00007feb5dafe000)
            libceph-common.so.2 => /usr/lib/x86_64-linux-gnu/ceph/libceph-common.so.2 (0x00007feb54e73000)
            libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007feb54c91000)
            libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007feb54c76000)
            /lib64/ld-linux-x86-64.so.2 (0x00007feb5e9a4000)
            libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 (0x00007feb54c53000)
            libnl-route-3.so.200 => /usr/lib/x86_64-linux-gnu/libnl-route-3.so.200 (0x00007feb54bd9000)
            libgssapi_krb5.so.2 => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007feb54b8c000)
            libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007feb54b83000)
            libcrypto.so.1.1 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 (0x00007feb548ad000)
            libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007feb54891000)
            libboost_thread.so.1.71.0 => /usr/lib/x86_64-linux-gnu/libboost_thread.so.1.71.0 (0x00007feb54863000)
            libboost_iostreams.so.1.71.0 => /usr/lib/x86_64-linux-gnu/libboost_iostreams.so.1.71.0 (0x00007feb5483c000)
            libblkid.so.1 => /lib/x86_64-linux-gnu/libblkid.so.1 (0x00007feb547e5000)
            libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1 (0x00007feb547b8000)
            libkrb5.so.3 => /usr/lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007feb546db000)
            libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007feb546a8000)
            libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007feb546a1000)
            libkrb5support.so.0 => /usr/lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007feb54692000)
            libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007feb5467f000)
            liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007feb54656000)
            libzstd.so.1 => /usr/lib/x86_64-linux-gnu/libzstd.so.1 (0x00007feb545ad000)
            libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007feb545a4000)
    

    I've been thinking about creating a LES Talk series on YABS (history, use cases, how to interpret results, etc.). Might be time to get that started :)


    edit:

    @Not_Oles said: So, why use the Holy Build Box if nothing is statically linked and we also are using the current version of glibc instead of an old version? What am I missing? :)

    Just want to re-emphasize -- HBB statically links a lot of the fluff that isn't preinstalled on most systems, while leaving out the core Linux libraries found everywhere. The reason you are seeing glibc 2.35 linked here is because... well... that's what's installed on your system. If someone with an older distro had an earlier version and used this same binary provided by YABS, then it would dynamically link to their older version of glibc. This is the "magic" sauce of HBB in action -- it gets over the libc incompatibilities but still statically compiles in most of the necessary libraries for maximum portability.
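
    If you want to verify that on your end, the key is that glibc uses versioned symbols: a binary built against an older glibc only demands the older symbol versions, which every newer glibc still provides. A rough (hypothetical) check for the newest glibc symbol version a binary actually requires:

    # If this prints e.g. GLIBC_2.17, any distro with glibc >= 2.17 can
    # run the binary, no matter which libc.so.6 ldd resolves to locally
    objdump -T ./fio | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 1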

    Head Janitor @ LES • AboutRulesSupport

  • havoc (OG, Content Writer)
    edited April 2022

    @angstrom said: Just as a remark, YABS is pretty transparent, and it cleans up after itself.

    Indeed - it wasn't aimed at whether @Mason's code in particular can be trusted, but rather at general security practice. Piping scripts off the open internet into bash is generally not awesome.

    I am a happy yabs user though :)...just nuke it after as said

    Thanked by (1): Mason
  • Mason (Administrator, OG)
    edited April 2022

    @havoc said:

    @angstrom said: Just as a remark, YABS is pretty transparent, and it cleans up after itself.

    Indeed - it wasn't aimed at whether @Mason's code in particular can be trusted, but rather at general security practice. Piping scripts off the open internet into bash is generally not awesome.

    I am a happy yabs user though :)...just nuke it after as said

    Totally understandable! No offense taken.

    I tried my best to make things as clear and open as possible, but naturally that's impossible when black-box binaries are involved. So to mitigate as much as possible, I defer to locally installed binaries (if available) and make the compilation steps/instructions completely available for anyone to replicate.

    I did play around with the idea of generating a hash/signature of the compiled binary so that the extra cautious could (in theory) replicate my compilation environment and verify that the binaries they create match what's in the repo with no funny business going on. BUT even with using Docker and such, the hashes weren't matching across systems when I tested it last IIRC. Not really sure how to explain that unless the underlying host environment has an impact on what exactly goes into the binary.
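
    For reference, the verification itself would be nothing fancier than something like this (repo path hypothetical):

    # Hash the freshly compiled binary and the copy committed to the repo;
    # these should match if the build were fully reproducible
    sha256sum ./fio_x64
    sha256sum /path/to/yet-another-bench-script/bin/fio_x64
    # Caveat: the GNU build ID (visible in the `file` output earlier in
    # this thread) and any embedded timestamps/paths can differ between
    # build hosts, which is enough to change the hash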

    Thanked by (2): adly, havoc

    Head Janitor @ LES • AboutRulesSupport

  • havoc (OG, Content Writer)
    edited April 2022

    @Mason - speaking of yabs - could the defaults make a bit more aggressive assumptions on iperf? It takes forever because it's hitting a bunch of dead/overloaded servers, last I used yabs. Tricky problem, but it's bad enough that I usually just bypass iperf entirely.

    Thanked by (1): Mason
  • Mason (Administrator, OG)

    @havoc said:
    @Mason - speaking of yabs - could the defaults make a bit more aggressive assumptions on iperf? It takes forever because it's hitting a bunch of dead/overloaded servers, last I used yabs. Tricky problem, but it's bad enough that I usually just bypass iperf entirely.

    The only thing that could be tweaked is the number of retries. Right now I think it's set to 5 (it was originally 10), but maybe a revisit is necessary to bring it down even more (3?).

    For now, I'd recommend just using the -r flag, which reduces the tested iperf locations to just three instead of using the entire list. Those three are generally the most reliable.
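
    In other words, something like this (flags after -- get passed to the script):

    # Run yabs with the reduced iperf location list
    curl -sL yabs.sh | bash -s -- -r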

    Head Janitor @ LES • AboutRulesSupport

  • havoc (OG, Content Writer)

    @Mason said:

    The only thing that could be tweaked is the number of retries. Right now I think it's set to 5 (it was originally 10), but maybe a revisit is necessary to bring it down even more (3?).

    For now, I'd recommend just using the -r flag, which reduces the tested iperf locations to just three instead of using the entire list. Those three are generally the most reliable.

    Honestly I'd consider straight up 1 retry and eliminate the most common failure ones.

    This specific point just sinks the overall experience of using yabs so much. idk...it is clearly a useful metric though so I can see the enthusiasm for keeping it despite flaws

    Thanked by (1): Mason
  • Mason (Administrator, OG)

    @havoc said:

    Honestly I'd consider straight up 1 retry and eliminate the most common failure ones.

    This specific point just sinks the overall experience of using yabs so much. idk...it is clearly a useful metric though so I can see the enthusiasm for keeping it despite flaws

    Fair enough, I'll consider it.

    Eventually, I'd like to code up a bot that monitors the available iperf servers, and if one goes down for whatever reason (and is inaccessible for X minutes), have it remove the location (or swap it with a working one) and commit the changes to the repo without any involvement from me.

    The number of available public iperf locations started out really slim, but has somewhat increased over time, so there may be better iperf options to consider at nearby (or the same) locations as the ones currently in the script.

    Thanked by (1): adly

    Head Janitor @ LES • AboutRulesSupport

  • @Mason said:

    Fair enough, I'll consider it.

    Eventually, I'd like to code up a bot that monitors the available iperf servers, and if one goes down for whatever reason (and is inaccessible for X minutes), have it remove the location (or swap it with a working one) and commit the changes to the repo without any involvement from me.

    The number of available public iperf locations started out really slim, but has somewhat increased over time, so there may be better iperf options to consider at nearby (or the same) locations as the ones currently in the script.

    I think a better way would be to have an external bot test a list of iperf locations and have the script call the list (via curl or something) so the git history won't be a mess of changes just because a host goes up or down.
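
    Even a dumb shell loop on a schedule would do it -- a rough sketch with made-up file names:

    #!/bin/bash
    # Probe each candidate iperf3 server with a short test and keep only
    # the reachable ones in a flat list that yabs could fetch with curl
    while read -r host port; do
        if timeout 15 iperf3 -c "$host" -p "$port" -t 1 >/dev/null 2>&1; then
            echo "$host $port"
        fi
    done < iperf-candidates.txt > iperf-live.txt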

    Just my $0.02. :)

    Thanked by (4): saibal, Mason, yoursunny, adly

    Cheap dedis are my drug, and I'm too far gone to turn back.

  • Mason (Administrator, OG)

    @CamoYoshi said:

    I think a better way would be to have an external bot test a list of iperf locations and have the script call the list (via curl or something) so the git history won't be a mess of changes just because a host goes up or down.

    Just my $0.02. :)

    Cheers! Thanks for the recommendation. Something to consider for sure so the git commits are actually meaningful. Or perhaps a separate repo just for the iperf bot code and the lists of available and enabled locations.

    Thanked by (1): CamoYoshi

    Head Janitor @ LES • AboutRulesSupport

  • @Mason said: program

    Bonus point for not calling it an app. ;)

    Thanked by (2): Mason, mikho

    It wisnae me! A big boy done it and ran away.
    NVMe2G for life! until death (the end is nigh)

  • @Mason said:

    @CamoYoshi said:

    I think a better way would be to have an external bot test a list of iperf locations and have the script call the list (via curl or something) so the git history won't be a mess of changes just because a host goes up or down.

    Cheers! Thanks for the recommendation. Something to consider for sure so the git commits are actually meaningful. Or perhaps a separate repo just for the iperf bot code and the lists of available and enabled locations.

    GitHub Actions can run periodic tasks.
    The list itself could be hosted in GitHub Pages (force push, no history).
    For additional intelligence, make a script on Cloudflare Workers to randomly give out two locations in each of US, EU, APAC.

    Thanked by (1): Mason

    ServerFactory aff best VPS; HostBrr aff best storage.

  • Not_Oles (Hosting Provider, Content Writer)

    @Mason said: Here's a quick comparison test on an Ubuntu box:

    [ . . . ]

    Wow! As a clueless™ guy I have to confess that I had no idea there would be so many libraries dynamically linked to fio.

    So now it's easy even for me to imagine why the Holy Build Box version of yabs works better across many varied Linux systems!

    I do not have either fio or iperf3 installed on Darkstar, but I did pick a binary and run ldd on it just to see how many libraries might be dynamically linked.

    root@darkstar:~# ldd /usr/local/bin/qemu-system-x86_64 | wc -l
    137
    root@darkstar:~# 
    

    Wow! Just wow! I never expected so many linked libraries!

    Special thanks to @Mason, @zxrlha, @Angstrom for kind replies to my question! Thanks also to everyone else here!

    Best wishes and kindest regards! 🌎🌍

    Thanked by (3): angstrom, Mason, flips

    I hope everyone gets the servers they want!
