New Intel CPU vulnerability

13 Comments

  • MikeA (Hosting Provider, OG)
    edited January 2020

    ASRR boards have IPMI, and cooling is a non-issue; I run a 16-core Ryzen on air and it's only 42°C under 70% load right now. ASRR has barebone (1U) servers for Ryzen. They are great.

    Anyway, good day.

  • Clouvider (Hosting Provider, OG)

    @MikeA said:
    ASRR boards have IPMI, and cooling is a non-issue; I run a 16-core Ryzen on air and it's only 42°C under 70% load right now. ASRR has barebone (1U) servers for Ryzen. They are great.

    Anyway, good day.

    I’m genuinely interested, in a positive way. I saw only two barebones from ASRR, one with hot swap and one without. Neither supported NVMe via M.2 or U.2; possibly one could put a daughter card in the PCIe slot to break NVMe out (but then sacrificing the ability to upgrade to 10G, as there’s only one slot), assuming there is enough space to cool the drives. Would you be happy to share what chassis you run? I’m not judging anything. I’ve been attacked for buying Intel, but my reason is the platform, or the lack of it; I’m happy to re-evaluate if that’s not the case.

  • @Clouvider, this is the lowend world. Ghetto setups work. Average users want Ryzen because of its ryzing fame. Gotta give kids the toys they want, after all.

    Now, I don't think you've been specifically attacked for using Intel. The trigger for some is your statement that "desktop CPUs are not meant to be used for servers." Which is technically true. But then again, this is the lowend segment. Kids want their toys.

    On a side note, doesn't Ryzen support ECC as long as mobo supports it?

    ♻ Amitz day is October 21.
    ♻ Join Nigh sect by adopting my avatar. Let us spread the joys of the end.

  • MikeA (Hosting Provider, OG)

    @Clouvider said:

    @MikeA said:
    ASRR boards have IPMI, and cooling is a non-issue; I run a 16-core Ryzen on air and it's only 42°C under 70% load right now. ASRR has barebone (1U) servers for Ryzen. They are great.

    Anyway, good day.

    I’m genuinely interested, in a positive way. I saw only two barebones from ASRR, one with hot swap and one without. Neither supported NVMe via M.2 or U.2; possibly one could put a daughter card in the PCIe slot to break NVMe out (but then sacrificing the ability to upgrade to 10G, as there’s only one slot), assuming there is enough space to cool the drives. Would you be happy to share what chassis you run? I’m not judging anything. I’ve been attacked for buying Intel, but my reason is the platform, or the lack of it; I’m happy to re-evaluate if that’s not the case.

    Well, the stuff I run doesn't need 4 or more NVMe drives, and the drives I use are just 1 or 2 M.2 drives with adapters (or 1x onboard + 1x adapter), with the rest being SATA for boot and such. I don't hot-swap, but you can technically hot-swap without a SAS card or whatever anyway?

    But of course, if you want a huge NVMe array or hot swap, EPYC is obviously the way to go; there's no doubt.

  • MikeA (Hosting Provider, OG)

    @deank said:
    On a side note, doesn't Ryzen support ECC as long as mobo supports it?

    The majority of the boards support ECC, even many of the cheap $60-80 ones. There's no point in doing that with a super cheap board though, really.
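
    Whether the board's ECC support is actually in effect is easy to verify from a running Linux system; a minimal sketch, assuming the kernel's EDAC drivers are available (they only register memory controllers when the firmware has ECC enabled):

    ```shell
    # Check whether ECC is actually active, not just supported by the board.
    # An empty /sys/devices/system/edac is a strong hint that ECC is off.
    if ls /sys/devices/system/edac/mc/mc* >/dev/null 2>&1; then
        ecc_status="ECC active: EDAC memory controller(s) registered"
    else
        ecc_status="no EDAC memory controllers: ECC absent or disabled in firmware"
    fi
    echo "$ecc_status"
    ```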

  • @MikeA, you are giving kids (spring-break hosts) ideas...


  • @Harambe said:
    you're not dealing with multiple tenants and can just boot with mitigations=off

    Look at the scrub running a 5.x kernel

    Thanked by (1)Harambe

    My pronouns are like/subscribe.
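
    Before booting with the mitigations=off flag mentioned above, it's worth seeing what the kernel is currently mitigating; a minimal sketch, assuming a kernel that exports the sysfs vulnerabilities directory (4.x+):

    ```shell
    # List the CPU-vulnerability mitigations the running kernel applies.
    vuln_dir=/sys/devices/system/cpu/vulnerabilities
    mitigation_report=$(grep -r . "$vuln_dir" 2>/dev/null || true)
    [ -n "$mitigation_report" ] || mitigation_report="this kernel does not export vulnerability status"
    echo "$mitigation_report"
    # Single-tenant boxes only: append mitigations=off to the kernel command
    # line, e.g. GRUB_CMDLINE_LINUX_DEFAULT="... mitigations=off", then update-grub.
    ```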

  • joepie91 (OG, Services Provider)

    @comi said:

    @AnthonySmith said:
    Ryzen is simply a better choice; it ticks every box in this market segment especially, and outperforms/outdelivers any Intel server CPU in the same ballpark. I hate to say it, especially as everyone knows how hard I ride the @clouvider train, but sadly that train only seems to stop at Intel stations now and I may need to consider another route.

    1 intel bug... ok, 2 hmm, 3, are you shitting me, 4,..... FOUR!!! ... oh fuck off intel.

    Ryzen shows better performance, but AFAIK it is not generally more secure. This particular vulnerability affects Intel specifically, but the speculative-execution attack vector is present in AMD as well.

    Also, are there platforms with Ryzen and IPMI?

    Having spoken to a few people who research these sorts of bugs, they all seem to be of the opinion that while AMD isn't perfect, there are far, far fewer cut corners than with Intel, and so these sorts of bugs are much less likely.

    Thanked by (2)comi poisson
  • Nothing is ever perfect.


  • Francisco (Hosting Provider, OG)

    @Clouvider said:
    @Francisco to confirm, you’re doing it in 2U at the moment?

    I'm using them in 1U's.

    Cooling becomes a problem if you're rimming the cores 24/7, but then you'll just throttle anyway.

    You can disable turbo (at least in Linux) and that seems to keep it from throttling, but you aren't going to see past the stock clock.

    A 2U would probably be better, but that's a lot of rack space if you're in premium markets.

    For us and our CPU usage, 1U's are fine. The projections for our Ryzen nodes are that they'll run at < 50% total CPU utilization, with it being a good bit lower than that on quieter ones. It isn't going to help with the hosts that redline their nodes all the damn time, but that's life.

    Francisco

    Thanked by (2)Clouvider uptime
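
    The turbo toggle Francisco describes lives in sysfs, though the exact path depends on the cpufreq driver; a minimal sketch (Linux only, reads the current state; the writes need root and are shown as comments):

    ```shell
    # Read the current turbo/boost state; path depends on the cpufreq driver.
    if [ -r /sys/devices/system/cpu/intel_pstate/no_turbo ]; then
        turbo_state="intel_pstate no_turbo=$(cat /sys/devices/system/cpu/intel_pstate/no_turbo)"
    elif [ -r /sys/devices/system/cpu/cpufreq/boost ]; then
        turbo_state="cpufreq boost=$(cat /sys/devices/system/cpu/cpufreq/boost)"
    else
        turbo_state="no turbo/boost knob exposed by this kernel"
    fi
    echo "$turbo_state"
    # To pin clocks at stock (root required):
    #   echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo   # Intel
    #   echo 0 > /sys/devices/system/cpu/cpufreq/boost           # AMD/acpi-cpufreq
    ```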
  • Clouvider (Hosting Provider, OG)

    Yeah, the risk here, I suppose, when offering them as dedicated, is that someone actually might run them at full load and then expect the provider to get them to a state where they're usable under it. Are you too using these ASRock Rack barebones, if you don't mind me asking?


  • MikeA (Hosting Provider, OG)

    Even under full load mine never got hot enough to be in throttling range, but I use 2U for all of my existing stuff. Anything in the future will be 1U though.

  • Thanked by (1)vimalware

    Hey teamacc. You're a dick. (c) Jon Biloh, 2020.

  • InceptionHosting (Hosting Provider, OG)

    https://inceptionhosting.com
    Please do not use the PM system here for Inception Hosting support issues.

  • MikeA (Hosting Provider, OG)

    They're good, but if I recall that one is like $750.

  • Surely some ASRock boards support Ryzen and multiple NVMe drives, as Hetzner offers Ryzen with NVMe? That's not to say they're not building their own boxes though?

  • Clouvider (Hosting Provider, OG)

    That’s the one I was saying has severely limited storage options (no NVMe, or maybe 4 on a daughter card, and then no option to upgrade to 10G).

    Thanked by (1)Mr_Tom
  • Clouvider (Hosting Provider, OG)
    edited January 2020

    @Mr_Tom said:
    Surely some ASRock boards support Ryzen and multiple NVMe drives, as Hetzner offers Ryzen with NVMe? That's not to say they're not building their own boxes though?

    See, I’d ideally want a barebone where they have already validated it will be able to cool what’s inside.

  • It's got one of these in it according to dmidecode, which seems slightly different to the one linked above: https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U#Specifications

  • Clouvider (Hosting Provider, OG)

    @Mr_Tom said:
    It's got one of these in it according to dmidecode, which seems slightly different to the one linked above: https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U#Specifications

    Now this has 2x M.2 and could at least be sellable, but no barebone with this one as far as I can find.

  • Francisco (Hosting Provider, OG)

    @Clouvider said:

    @Mr_Tom said:
    It's got one of these in it according to dmidecode, which seems slightly different to the one linked above: https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U#Specifications

    Now this has 2x M.2 and could at least be sellable, but no barebone with this one as far as I can find.

    That's the board we use. They have a 2T version which has dual 10gig copper ports.

    For us, we use the D4U since our InfiniBand 3X VPIs have a 2nd port on them that can operate in 10gig & 40gig Ethernet simultaneously.

    Francisco

    Thanked by (1)Clouvider
  • A quick question.

    Does "copper" part really make a difference?


  • Francisco (Hosting Provider, OG)

    @deank said:
    A quick question.

    Does "copper" part really make a difference?

    Yes, it uses a lot more power. It's something like 3W+ per side.

    Francisco

  • Clouvider (Hosting Provider, OG)

    @deank said:
    A quick question.

    Does "copper" part really make a difference?

    That, and long ago switches were less available with copper ports, but I guess this has now changed. And fibre looks better in pictures ;-)

    Not a deal-breaker though; we'd just need an entire rack of those for it to make sense.

    Thanked by (1)WSS
  • @Clouvider your Epyc servers are listed under the "unmetered" category. I presume it's not worth capping the bandwidth as it wouldn't reduce the cost that much?

  • @cybertech said:

    @yoursunny said:
    Everyone should disable Hyper-Threading. It causes the two threads in a core to compete with each other on L1 and L2 caches. Today the bottleneck is memory access, not number of instructions. Thus, Hyper-Threading makes programs run slower.

    But it means fewer vCPUs then? Then per node there will be fewer VPS that can be made?

    If you need more cores, choose a CPU with more cores or a dual-socket machine. Don't use the fake cores created by Hyper-Threading.
    Also, disable Virtualization as it's another performance bottleneck.

    @poisson said:
    Interesting. The bottleneck is still RAM for DDR4?

    I'm building a router with content caching capability.

    The bottleneck was Ethernet adapter, so we get 100Gbps adapters.
    The bottleneck moves to kernel, so we use user-space drivers.
    Now the bottleneck is RAM even at 2933MHz speeds.

    Another bottleneck is PCIe 3.
    Each Xeon can only support three PCIe 3 x16 slots.
    EPYC supposedly can have 128 lanes of PCIe 4, but I can't find a single-socket motherboard with eight PCIe 4 x16 slots.
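
    The Hyper-Threading trade-off debated above can be checked on any Linux box by comparing logical CPUs against unique physical cores; a minimal sketch, assuming util-linux's lscpu is installed:

    ```shell
    # If logical CPUs exceed unique physical cores, SMT/Hyper-Threading is on.
    logical=$(nproc)
    physical=$(lscpu -p=CORE,SOCKET 2>/dev/null | grep -v '^#' | sort -u | wc -l || true)
    [ "$physical" -gt 0 ] || physical=$logical   # fall back if lscpu is missing
    echo "logical=$logical physical=$physical"
    if [ "$logical" -gt "$physical" ]; then
        echo "SMT is on"
    else
        echo "SMT is off (or topology not visible)"
    fi
    # On recent kernels SMT can be toggled at runtime (root required):
    #   echo off > /sys/devices/system/cpu/smt/control
    ```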

  • @yoursunny said:
    I'm building a router with content caching capability.

    That is not a router.


  • Clouvider (Hosting Provider, OG)

    @Mr_Tom said:
    @Clouvider your Epyc servers are listed under the "unmetered" category. I presume it's not worth capping the bandwidth as it wouldn't reduce the cost that much?

    It wouldn’t be terribly substantial but yeah, it could save some smallish number of £ :-).

    Thanked by (1)Mr_Tom
  • @WSS said:

    @yoursunny said:
    I'm building a router with content caching capability.

    That is not a router.

    It's normally called a "forwarder". "router" is an alias.
    See here: https://github.com/usnistgov/ndn-dpdk

    Thanked by (2)uptime vimalware
  • InceptionHosting (Hosting Provider, OG)

    Sounds like a Cobalt RaQ 4 :)

    Thanked by (1)Amitz

