Is there a VPS with a GPU?

Is there a VPS with a GPU?


Comments

  • Reach out to @crunchbits.



  • MannDude Hosting Provider

    This is @crunchbits territory.



  • crunchbits Hosting Provider
    edited December 2023

    @gaithads said:
    is there a VPS with a GPU?

    If you need/can use an RTX 3070, RTX 3090, or RTX A6000, those are live.

    If you wanted to go to the bad part of town and slip into some AMD MI50s, rumor is they're going to be pretty affordable too.

  • Try Vultr. They have many GPU instances.

  • havoc OG, Content Writer

    @crunchbits said:

    If you wanted to go to the bad part of town and slip into some AMD MI50s, rumor is they're going to be pretty affordable too.

    Let me know when the MI50 is live - keen to try that out. Very nearly bought an AMD card last round but chickened out because I wasn't confident I could get the AI stuff running on the AMD stack at the time.

    Given that AMD dropped ROCm support for the MI50 in September, I suspect pricing will be key though.

  • crunchbits Hosting Provider

    @havoc said:

    @crunchbits said:

    If you wanted to go to the bad part of town and slip into some AMD MI50s, rumor is they're going to be pretty affordable too.

    Let me know when the MI50 is live - keen to try that out. Very nearly bought an AMD card last round but chickened out because I wasn't confident I could get the AI stuff running on the AMD stack at the time.

    Given that AMD dropped ROCm support for the MI50 in September, I suspect pricing will be key though.

    Unfortunately it seems like AMD keeps bumbling the software/support side for enterprise use cases. Everyone I know who bet big on them with hardware has been burned so far (though some are still holding out hope). Dropping 3-4 year old GPUs from ROCm was... surprising. I get that they're way behind the curve on the software and support stack, but it's hard to financially back their GPUs/stack when you see what keeps happening.

    I'll let you have an MI50 for free for a while if you write up your experiences publicly (your new blog is fine, as long as I can reference it in some KB/wiki stuff). Pricing is not set, but... cheap if we do make them public. These MI50s are basically free to us, but they're still hard to justify when a 3070 beats them in most ways, is better supported, and has a lower TDP.

  • havoc OG, Content Writer

    @crunchbits said:

    I'll let you have an MI50 for free for a while if you write up your experiences publicly

    That would be cool. And yeah, sure, I can document it - similar to the llama.cpp unicorn one, i.e. the basics.

    a 3070 beats them in most ways, is better supported, and has a lower TDP.

    Guessing a bit here, but I suspect you'd see a sizable difference for training rather than inference. The MI50 has literally 2x the memory throughput... but it sounds like AMD-side training is less developed, so I'm unsure if that even works.
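    Back-of-envelope on the throughput point: single-stream LLM decoding is usually memory-bandwidth-bound, so a crude throughput ceiling is bandwidth divided by the bytes read per token (roughly the resident model size). A minimal sketch, assuming public spec-sheet bandwidths (MI50 ~1024 GB/s HBM2, RTX 3070 ~448 GB/s GDDR6) and a hypothetical ~7 GB quantized model:

    ```python
    # Rough upper bound on single-stream decode speed for a
    # memory-bandwidth-bound LLM: every generated token reads roughly
    # the whole model from VRAM once, so tok/s <= bandwidth / model size.

    def max_tokens_per_sec(bandwidth_gbs: float, model_gb: float) -> float:
        """Bandwidth-bound ceiling on decode throughput (tokens/second)."""
        return bandwidth_gbs / model_gb

    MODEL_GB = 7.0  # hypothetical: a ~7B model quantized to ~8 bits

    mi50 = max_tokens_per_sec(1024.0, MODEL_GB)    # MI50: ~1 TB/s HBM2
    rtx3070 = max_tokens_per_sec(448.0, MODEL_GB)  # RTX 3070: 448 GB/s GDDR6

    print(f"MI50 ceiling:     {mi50:.0f} tok/s")
    print(f"RTX 3070 ceiling: {rtx3070:.0f} tok/s")
    print(f"ratio: {mi50 / rtx3070:.2f}x")
    ```

    This is only the ceiling; real numbers land well below it, and the gap closes further once compute or software overhead dominates, which is the usual story on the AMD side.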

  • crunchbits Hosting Provider

    @havoc said:
    Guessing a bit here, but I suspect you'd see a sizable difference for training rather than inference. The MI50 has literally 2x the memory throughput... but it sounds like AMD-side training is less developed, so I'm unsure if that even works.

    As usual, the AMD side generally has better "raw horsepower" but the rest of the car doesn't exist, so to speak. Not just 2x the memory throughput with that nice HBM, but also 2x the memory at the same price point. I do think there were some advances in CUDA-->ROCm translation, but last I checked they'd only caught up to roughly ~2020-era CUDA.

    To be honest, there is basically zero demand for AMD stuff, so I don't really know. That's why I'd definitely be interested to see what you come up with and to follow the whole journey. The majority of users renting (so far) aren't too interested in saving a few dollars at the expense of major software compatibility issues or headaches. This becomes especially true at the $0.50/hr+ tier of GPUs.

    The only people trying to push AMD for training/AI 'at large' and telling us how amazing it is going to be any day now were also (ironically) trying to sell us AMD cards. I'm sure there are plenty of use cases, specific ROCm-friendly stacks, etc., but for us selling infrastructure there has legitimately been zero demand. I'm only considering the MI50 stuff because the capex is approximately nothing and it uses existing chassis that were otherwise being phased out.

  • havoc OG, Content Writer

    @crunchbits said: legitimately been zero demand

    That unfortunately makes sense. I doubt an older AMD card will be directly competitive against NVIDIA.

    They have some use if one can get inference running on them, though, and I can think of 3 ways to potentially do that.

    No idea how the economics work out, but if they're costing you near nothing, then maybe as a bolt-on to a more CPU-centric machine. idk

    The only people trying to push AMD for training/AI 'at-large' and telling us how amazing it is going to be any day were also (ironically) trying to sell us AMD cards.

    There are a couple of research projects going big on AMD cards, but obviously they get direct support and are using newer hardware.
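    The bolt-on economics mentioned above reduce to a toy formula: dollars per million output tokens is the hourly rental price divided by tokens generated per hour. A minimal sketch; the prices and token rates below are purely hypothetical, not real offers from anyone in this thread:

    ```python
    # Toy rental economics: $/million output tokens for a GPU instance,
    # given an hourly price and a sustained decode rate.

    def usd_per_mtok(price_per_hr: float, tokens_per_sec: float) -> float:
        """Cost in USD to generate one million tokens at a steady rate."""
        tokens_per_hr = tokens_per_sec * 3600
        return price_per_hr / tokens_per_hr * 1_000_000

    # Hypothetical comparison: a near-free MI50 bolt-on vs a mainstream card.
    print(f"MI50 @ $0.10/hr, 30 tok/s: ${usd_per_mtok(0.10, 30):.3f}/Mtok")
    print(f"3070 @ $0.50/hr, 60 tok/s: ${usd_per_mtok(0.50, 60):.3f}/Mtok")
    ```

    On numbers like these the slower card still wins on cost per token, which is roughly the argument for offering nearly-free MI50s at all: the software headaches are priced in.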
