New! Dual E5-2683 Shared Bare Metal Server Offer -- Texas!!

Not_Oles (Hosting Provider, Content Writer)

Hello!

Want to share this fine server with me?

It has RAID 10 SSDs plus an extra NVMe drive. On the network side there is an IPv4 /28 and an IPv6 /128, with an IPv6 /48 routed to the /128.
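With a /48 routed to the server's /128, any address out of the /48 can simply be brought up locally. A minimal sketch of the idea, using the hypothetical documentation prefix 2001:db8:1234::/48 and interface eth0 (substitute the real prefix and NIC); it only prints the command, since adding addresses needs root:

```shell
# Hypothetical values -- substitute the real routed /48 and interface.
PREFIX="2001:db8:1234"   # documentation prefix, not the real assignment
IFACE="eth0"

# Bring up one address out of the routed /48 (run as root on the server):
CMD="ip -6 addr add ${PREFIX}:1::1/128 dev ${IFACE}"
echo "would run: $CMD"
```

Each VM or service can get its own address (or whole /64) out of the routed block this way, with no extra routing configuration on the provider side.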

Hopefully you know lots more than I do! And you are willing to teach me something.

Hopefully you also want to share the $99.95 monthly cost!

Not sure how long I want to keep this server. It is paid up to November 24.

Thanks!

Tom

root@tx:/NVME/tom# date
Wed Oct  1 11:01:12 PM UTC 2025
root@tx:/NVME/tom# cat /etc/debian_version 
forky/sid
root@tx:/NVME/tom# lscpu
  [ . . . ]
CPU(s):                      64
  On-line CPU(s) list:       0-63
Vendor ID:                   GenuineIntel
  Model name:                Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
  [ . . . ]
root@tx:/NVME/tom# free -h
               total        used        free      shared  buff/cache   available
Mem:           125Gi       1.9Gi       123Gi       4.7Mi       956Mi       123Gi
Swap:           89Gi          0B        89Gi
root@tx:/NVME/tom# dmidecode --type 17
  [ . . . ]
        Total Width: 72 bits
        Data Width: 64 bits
  [ . . . ]
root@tx:/NVME/tom# lsblk 
NAME                    MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINTS
sda                       8:0    0 931.5G  0 disk   
├─md126                   9:126  0   1.7T  0 raid10 
│ ├─md126p1             259:2    0 976.6M  0 part   /boot/efi
│ ├─md126p2             259:3    0 976.6M  0 part   /boot
│ └─md126p3             259:4    0   1.7T  0 part   
│   ├─oplink--vg-root   253:0    0   1.6T  0 lvm    /
│   └─oplink--vg-swap_1 253:1    0  89.4G  0 lvm    [SWAP]
└─md127                   9:127  0     0B  0 md     
  [ . . . ]  
nvme0n1                 259:0    0   3.6T  0 disk   
└─nvme0n1p1             259:1    0   3.6T  0 part   /NVME
  [ . . . ]
root@tx:/NVME/tom# 
root@tx:~# curl -sL yabs.sh | bash
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2025-04-20                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Sun Sep 21 09:25:22 PM UTC 2025

Basic System Information:
---------------------------------
Uptime     : 2 days, 20 hours, 21 minutes
Processor  : Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
CPU cores  : 64 @ 1200.000 MHz
AES-NI     : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM        : 125.8 GiB
Swap       : 89.3 GiB
Disk       : 1.6 TiB
Distro     : Debian GNU/Linux forky/sid
Kernel     : 6.16.7+deb14-amd64
VM Type    : NONE
IPv4/IPv6  : ✔ Online / ✔ Online

IPv6 Network Information:
---------------------------------
ISP        : The Optimal Link Corporation
ASN        : AS40156 The Optimal Link Corporation
Host       : The Optimal Link Corporation
Location   : Houston, Texas (TX)
Country    : United States

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/mapper/oplink--vg-root):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 266.05 MB/s  (66.5k) | 559.56 MB/s   (8.7k)
Write      | 266.75 MB/s  (66.6k) | 562.50 MB/s   (8.7k)
Total      | 532.81 MB/s (133.2k) | 1.12 GB/s    (17.5k)
           |                      |                     
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 549.60 MB/s   (1.0k) | 544.12 MB/s    (531)
Write      | 578.80 MB/s   (1.1k) | 580.35 MB/s    (566)
Total      | 1.12 GB/s     (2.2k) | 1.12 GB/s     (1.0k)

iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
-----           | -----                     | ----            | ----            | ----           
Clouvider       | London, UK (10G)          | 862 Mbits/sec   | 839 Mbits/sec   | 104 ms         
Eranium         | Amsterdam, NL (100G)      | 273 Mbits/sec   | 800 Mbits/sec   | 118 ms         
Uztelecom       | Tashkent, UZ (10G)        | 141 Mbits/sec   | 674 Mbits/sec   | 203 ms         
Leaseweb        | Singapore, SG (10G)       | 279 Mbits/sec   | 717 Mbits/sec   | 206 ms         
Clouvider       | Los Angeles, CA, US (10G) | 919 Mbits/sec   | 835 Mbits/sec   | 36.2 ms        
Leaseweb        | NYC, NY, US (10G)         | 203 Mbits/sec   | busy            | 38.7 ms        
Edgoo           | Sao Paulo, BR (1G)        | 143 Mbits/sec   | 150 Mbits/sec   | 180 ms         

iperf3 Network Speed Tests (IPv6):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
-----           | -----                     | ----            | ----            | ----           
Clouvider       | London, UK (10G)          | 852 Mbits/sec   | 840 Mbits/sec   | 104 ms         
Eranium         | Amsterdam, NL (100G)      | 854 Mbits/sec   | 795 Mbits/sec   | 118 ms         
Uztelecom       | Tashkent, UZ (10G)        | 233 Mbits/sec   | 687 Mbits/sec   | 203 ms         
Leaseweb        | Singapore, SG (10G)       | 255 Mbits/sec   | 694 Mbits/sec   | --             
Clouvider       | Los Angeles, CA, US (10G) | 906 Mbits/sec   | 910 Mbits/sec   | 36.2 ms        
Leaseweb        | NYC, NY, US (10G)         | 329 Mbits/sec   | 905 Mbits/sec   | 38.8 ms        
Edgoo           | Sao Paulo, BR (1G)        | 134 Mbits/sec   | 130 Mbits/sec   | 161 ms         

Geekbench 6 Benchmark Test:
---------------------------------
Test            | Value                         
                |                               
Single Core     | 1100                          
Multi Core      | 8114                          
Full Test       | https://browser.geekbench.com/v6/cpu/13977587

YABS completed in 13 min 5 sec
root@tx:~# 
fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/nvme0n1p1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 645.61 MB/s (161.4k) | 691.91 MB/s  (10.8k)
Write      | 647.32 MB/s (161.8k) | 695.55 MB/s  (10.8k)
Total      | 1.29 GB/s   (323.2k) | 1.38 GB/s    (21.6k)
           |                      |                     
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 1.07 GB/s     (2.0k) | 1.40 GB/s     (1.3k)
Write      | 1.13 GB/s     (2.2k) | 1.49 GB/s     (1.4k)
Total      | 2.20 GB/s     (4.3k) | 2.89 GB/s     (2.8k)
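For anyone who wants to run a comparable disk test by hand, YABS's mixed test is plain fio underneath. Below is a rough equivalent reconstructed from memory, not YABS's exact flags; the target directory is hypothetical, and the sketch only prints the command, since a real run hammers the disk:

```shell
# Hypothetical target -- point it at the filesystem under test.
TESTDIR=/NVME/fio-test

# 50/50 random read/write at 4k blocks, roughly matching the table above.
FIO_CMD="fio --name=randrw --directory=$TESTDIR --ioengine=libaio \
  --direct=1 --rw=randrw --rwmixread=50 --bs=4k --iodepth=64 \
  --numjobs=2 --size=1G --runtime=30 --group_reporting"
echo "would run: $FIO_CMD"
```

Rerunning with --bs=64k, 512k, and 1m reproduces the other columns.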

I hope everyone gets the servers they want!

Comments

  • @Not_Oles said:
    Not sure how long I want to keep this server. It is paid up to November 24.

    24th Nov 2025 or Nov'24? :lol:


    It’s OK if you disagree with me. I can’t force you to be right!
    IPv4: 32 bits of stress. IPv6: 128 bits of... well, more stress... Have anyone seen my subnet?

  • A lot of swap :-)

  • Not_Oles

    @somik 24 Nov 2025

    @risturiz Yeah, I also noticed what seemed like a lot of swap. Additionally, if I understand right, the gentleman who kindly did the install for me set up the RAID 10 from within the baseboard controller in a way somehow analogous to a hardware RAID controller. I want to learn more about how the RAID works on this particular setup. Here is the full lsblk output:

    root@tx:~# date; lsblk
    Thu Oct  2 04:13:44 PM UTC 2025
    NAME                    MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINTS
    sda                       8:0    0 931.5G  0 disk   
    ├─md126                   9:126  0   1.7T  0 raid10 
    │ ├─md126p1             259:2    0 976.6M  0 part   /boot/efi
    │ ├─md126p2             259:3    0 976.6M  0 part   /boot
    │ └─md126p3             259:4    0   1.7T  0 part   
    │   ├─oplink--vg-root   253:0    0   1.6T  0 lvm    /
    │   └─oplink--vg-swap_1 253:1    0  89.4G  0 lvm    [SWAP]
    └─md127                   9:127  0     0B  0 md     
    sdb                       8:16   0 931.5G  0 disk   
    ├─md126                   9:126  0   1.7T  0 raid10 
    │ ├─md126p1             259:2    0 976.6M  0 part   /boot/efi
    │ ├─md126p2             259:3    0 976.6M  0 part   /boot
    │ └─md126p3             259:4    0   1.7T  0 part   
    │   ├─oplink--vg-root   253:0    0   1.6T  0 lvm    /
    │   └─oplink--vg-swap_1 253:1    0  89.4G  0 lvm    [SWAP]
    └─md127                   9:127  0     0B  0 md     
    sdc                       8:32   0 931.5G  0 disk   
    ├─md126                   9:126  0   1.7T  0 raid10 
    │ ├─md126p1             259:2    0 976.6M  0 part   /boot/efi
    │ ├─md126p2             259:3    0 976.6M  0 part   /boot
    │ └─md126p3             259:4    0   1.7T  0 part   
    │   ├─oplink--vg-root   253:0    0   1.6T  0 lvm    /
    │   └─oplink--vg-swap_1 253:1    0  89.4G  0 lvm    [SWAP]
    └─md127                   9:127  0     0B  0 md     
    sdd                       8:48   0 931.5G  0 disk   
    ├─md126                   9:126  0   1.7T  0 raid10 
    │ ├─md126p1             259:2    0 976.6M  0 part   /boot/efi
    │ ├─md126p2             259:3    0 976.6M  0 part   /boot
    │ └─md126p3             259:4    0   1.7T  0 part   
    │   ├─oplink--vg-root   253:0    0   1.6T  0 lvm    /
    │   └─oplink--vg-swap_1 253:1    0  89.4G  0 lvm    [SWAP]
    └─md127                   9:127  0     0B  0 md     
    sde                       8:64   1  58.6G  0 disk   
    └─sde1                    8:65   1  58.6G  0 part   
    nvme0n1                 259:0    0   3.6T  0 disk   
    └─nvme0n1p1             259:1    0   3.6T  0 part   /NVME
    root@tx:~# 
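For the curious, the sizes in that lsblk output can be sanity-checked against the RAID 10 layout: with four drives, usable capacity is (drives / 2) × drive size, since each mirror pair stores one copy of the data. A quick check in shell (integer arithmetic in tenths of a GiB):

```shell
DRIVES=4
SIZE_TENTHS=9315          # 931.5 GiB per drive, in tenths of GiB
USABLE=$(( DRIVES / 2 * SIZE_TENTHS ))   # RAID 10: half the raw capacity
echo "usable: $(( USABLE / 10 )).$(( USABLE % 10 )) GiB"   # prints: usable: 1863.0 GiB

# lsblk reports md126 as 1.7T, somewhat less than 1863 GiB; exactly where
# the remainder goes (IMSM metadata, reserved space, rounding) is one of
# the things worth checking on the box itself (requires root):
#   cat /proc/mdstat
#   mdadm --detail /dev/md126
#   mdadm --examine /dev/sda     # shows the Intel IMSM container metadata
```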
    

  • Not_Oles

    @risturiz said:
    A lot of swap :-)

    @Not_Oles said:
    @risturiz Yeah, I also noticed what seemed like a lot of swap.

    In the old days, I seem to remember people saying that the best amount of swap was an amount equal to the total RAM. Then, later, the swap recommendation was often half the RAM. Now, with virtualization, it's often no swap at all, partly because the swap can contain private information.

    Sometimes, with small VPSes, I add a swap file. Once the swap is there, impossible things become possible. For example, YABS doesn't complain that there isn't enough memory, and one can do a big compile, just a little slowly. :)

    If the server got really, really full, the seemingly large amount of swap might be very helpful.
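Adding a swap file like that is only a few commands. A minimal sketch, with example path and size (the demo below formats a small file under /tmp; on a real box you would use something like /swapfile and a few GiB, and the last two steps need root):

```shell
# Create and format a swap file (16 MiB for the demo; pick a size to taste).
SWAPFILE=/tmp/swapfile-demo          # on a real box, e.g. /swapfile
dd if=/dev/zero of="$SWAPFILE" bs=1M count=16 status=none
chmod 600 "$SWAPFILE"                # swap must not be world-readable
mkswap "$SWAPFILE"

# Then, as root, enable it and make it survive reboots:
#   swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab
```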

  • Not_Oles

    I need to learn more about how the RAID works on this server. Here's a screenshot of the installer at the RAID step:

    A quick Google search for "Intel Virtual RAID on CPU SATA Option ROM 6.2.0.1034" gave me:

    Google AI:

    The "Intel Virtual RAID on CPU SATA Option ROM 6.2.0.1034" is a legacy version of the firmware for the Intel Virtual RAID on CPU (VROC) SATA RAID controller, a technology that enables RAID functionality for SATA drives directly through the CPU.

    References:

    https://www.intel.com/content/www/us/en/download/19599/intel-virtual-raid-on-cpu-intel-vroc-sata-intel-rapid-storage-technology-enterprise-intel-rste-windows-driver-for-s1200sp-family.html

    https://www.intel.la/content/www/xl/es/software/virtual-raid-on-cpu-vroc.html

  • I used something similar in the past and regretted it. Poorer performance than running a regular OS-level soft RAID, and absolutely no recovery options...


  • I played around with a server like this a few years back and I will echo the comments above. Intel's Virtual RAID is no good. Now with FreeBSD, RAIDZ2 and bhyve it is a whole different story :) What is your "Forever Clueless™ Administrator" business plan for this?

  • Not_Oles

    The business plan is to have a fun server shared with a few friends.

    The personal goal is that I get to learn something and have fun with friends.

    I do think this E5 is a little expensive.

    A different server could be used. For example, at the moment I also have one of the new Hetzner EX63 servers. But, almost any other server could be obtained.

    It concerns me that you and @somik both seem to think there might be issues with the Intel hardware RAID. This server is from Oplink, and I trust Ryan 100%. It is hard for me to believe that Ryan would have rented me the server if he thought there was any problem. But, I also trust you and @somik 100% each.

    What would you like to do? If anything. Again, please do not feel at all obligated to me.

    Thanks for your interest, as always! :star:


  • Not_Oles

    @somik said: Poorer performance than running a regular OS-level soft RAID, and absolutely no recovery options...

    @Crab said: Intel's Virtual RAID is no good.

    Hmmm.

  • @Not_Oles said:
    It concerns me that you and @somik both seem to think there might be issues with the Intel hardware RAID. This server is from Oplink, and I trust Ryan 100%. It is hard for me to believe that Ryan would have rented me the server if he thought there was any problem. But, I also trust you and @somik 100% each.

    To be clear, our concern is Intel's virtual RAID built into some older Intel motherboards and Xeon CPUs. I think it is called the Intel VROC RAID controller. This one, to be exact:
    https://www.supermicro.com/manuals/other/X12_Intel_VROC_RAID_Configuration_Guide.pdf

    In my testing, software RAID is much more reliable than this, especially if you don't want to go with a hardware RAID. Current Linux allows you to set up a RAID during OS installation, and that is what you should go with, rather than messing with BIOS settings trying to set up a RAID that might bork your entire setup...
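The installer's software-RAID step is essentially driving mdadm. For reference, a hedged sketch of the equivalent manual setup; device names are hypothetical, the create command destroys data on the listed disks, and everything needs root, so this only prints the command:

```shell
# Hypothetical member disks -- adjust for the real hardware.
DISKS="/dev/sda /dev/sdb /dev/sdc /dev/sdd"

# A four-disk RAID 10 array, as mdadm would build it:
CREATE="mdadm --create /dev/md0 --level=10 --raid-devices=4 $DISKS"
echo "would run: $CREATE"

# Afterwards, verify and persist the array (Debian paths):
#   cat /proc/mdstat
#   mdadm --detail /dev/md0
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u
```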


  • Let me clarify this a bit further. As the V in VROC indicates, it is essentially a virtual RAID implementation, primarily targeted at NVMe drives. It was designed with high throughput, low latency, and minimal overhead in mind, but not much quality of life. Distilled down, this means that by giving up some hardcore performance, you gain more convenience through better tooling and utilities if you use standard software RAID. So it is OK but not great, and it could be much better.

    Just recently Intel made some moves to improve VROC with a partnership. Time will tell whether this will bring some real benefits.

    https://graidtech.com/post/graid-technology-intel-vroc-announcement-pr

  • Not_Oles

    @Crab @somik Anyone else?

    • What might you guys like to do, together with me, if anything, going forward, on this server or on another server?

    • If another server, which?

    Thanks!

  • @Not_Oles said:
    • What might you guys like to do, together with me, if anything, going forward, on this server or on another server?

    • If another server, which?

    Thanks!

    Sad to say, I just got an LA server on offer from OneProvider at $17.99: https://oneprovider.com/en/promotion

    Xeon E3-1230 v2 3.3 GHz 4c/8t
    16 GB DDR3
    1× 1 TB (HDD SATA)
    

    So I'm playing around with it. Installed Proxmox and am now setting up small LXC containers, one for each app, linking them over the internal NAT. Basically a Docker stack, with more steps :lol:
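"LXC containers behind internal NAT" on a Proxmox host usually boils down to a private subnet on a bridge plus one masquerade rule. A rough sketch, with every subnet and interface name hypothetical (Proxmox's default bridge is typically vmbr0); it only prints the rule rather than applying it, since iptables needs root:

```shell
# Hypothetical values: private subnet for the containers, public-facing bridge.
SUBNET="10.10.10.0/24"
WAN_IF="vmbr0"

# Masquerade container traffic out of the host (run as root on the host):
NAT_RULE="iptables -t nat -A POSTROUTING -s $SUBNET -o $WAN_IF -j MASQUERADE"
echo "would run: $NAT_RULE"

# Plus one port forward per exposed app, e.g. HTTP into one container:
#   iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 \
#     -j DNAT --to-destination 10.10.10.11:80
```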

AuroraZero (Moderator, Hosting Provider, Retired)

    @somik said: So playing around with it. Installed proxmox and now setting up small LXC containers, one for each app and linking them over the internal NAT. Basically a docker stack, with more steps

    That is his basic image host stratagem right there. Collect cheap VPSes, link 'em together, and Bob's your uncle!!

  • @AuroraZero said:

    @somik said: So playing around with it. Installed proxmox and now setting up small LXC containers, one for each app and linking them over the internal NAT. Basically a docker stack, with more steps

    That is his basic image host stratagem right there. Collect cheap VPSes, link 'em together, and Bob's your uncle!!

    I have no idea what you are talking about. I definitely did not set up this dedi and its LXC containers as a trial run for a distributed CDN for my image host :sweat_smile:

    And I definitely did not rent the server to figure out the best way to sync image data among the VPSes acting as CDN nodes.

    And DO NOT check the domain used in the following image...

WSS (OG)

    @AuroraZero said:

    @somik said: So playing around with it. Installed proxmox and now setting up small LXC containers, one for each app and linking them over the internal NAT. Basically a docker stack, with more steps

    That is his basic imagehost strategem right there. Collect vps cheap link em together and Bob's your uncle!!

    What sort of nonsense is this? Just run the webserver with write but not read/execute permission for uploads, and let the service that serves them have read-only access, as separate users and processes.
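The split described here can be demonstrated with plain Unix permissions: the upload directory gets write+traverse but no read, so the uploading process can drop files without being able to list them, while the serving process (a different user in practice) gets read-only access. A small single-user sketch of the mode setup; a real deployment would chown these to separate uploader and web-server users:

```shell
# Demo in a scratch directory.
BASE=$(mktemp -d)
mkdir "$BASE/uploads"

# Uploads: write + traverse, but no directory listing (mode 330).
chmod 330 "$BASE/uploads"
stat -c '%a' "$BASE/uploads"    # prints: 330

# Writing into the directory still works if you know the filename:
echo "image data" > "$BASE/uploads/cat.jpg"

# The serving side would then expose the files read-only, e.g. mode 440
# on each file, owned by the web server's user.
```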

    My pronouns are like/subscribe.
