Is compiling software on a VPS... normal usage?

Providers usually have rules in place to prevent abuse, such as disallowing sustained high CPU usage, a restriction many of us are aware of. These rules make sense: they prevent cryptocurrency mining and other kinds of abusive behavior that disrupt other customers.

However, compiling software could be considered a legitimate use of CPU; some people around here even compile the kernel. But compiling uses lots of CPU. So what do you think about it?

Thanked by (1)Not_Oles

Comments

  • edited October 20

    I'd say it depends on how much/often you're compiling. If you compile 24/7, this could be considered overuse imo. Just like running yabs once or twice vs continuously running it.
    I've got a VPS where I'm building software packages from time to time. When developing, I compile like 3 or 4 times within an hour (or two), and then for the rest of the day the VPS is pretty much idle. That's what my host considers ok (not at the limit or anything, just absolutely ok).

    Thanked by (2)root adly
  • Overall I would get a dedicated core and compile anytime I want.
    Otherwise I would open a ticket and ask whether I can run the CPU at a sustained 100% for 10 minutes to 3 hours daily. If that's okay, then fine; otherwise, just move on.

    Thanked by (2)skhron root
  • @webcraft said:
    Just like running yabs once or twice vs continuously running it.

    Tell that to @cybertech.

    Thanked by (1)root

    HostBrr aff best VPS; VirmAche aff worst VPS.
    Unable to push-up due to shoulder injury 😣

  • Normal usage implies what almost everyone uses a VPS for. So ask yourself: does the majority of users compile software on their VPS?

    Of course you can always ask your provider about your use case, but I would get a dedicated server or dedicated core like @ehab mentioned.

    Thanked by (1)root

    Websites have ads, I have ad-blocker.

  • edited October 20

    I'd consider compiling a perfectly fine use case. Even if done fairly often, there will be no sustained load over hours or days; instead there will be extended breaks/idle periods in between.

    Compiling itself usually even needs different resources in different stages, so it's quite different from the endless, identical calculation patterns of crypto mining or LLMs.

    I would even say running something like Jenkins that uses considerable resources during build jobs should be fine, especially since automations normally do these jobs overnight etc.
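    To make that concrete, a nightly job can be scheduled with cron so the heavy work happens off-peak. A minimal sketch (the path and script name below are purely hypothetical):

    ```shell
    # Hypothetical crontab entry: run a build script at 03:00 every night,
    # at the lowest CPU scheduling priority so neighbors are barely affected.
    0 3 * * * nice -n 19 /home/builder/nightly-build.sh
    ```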

    Thanked by (2)skorous root
  • I personally never understood the need to compile anything on a system that's less powerful than an average laptop in 2024.
    Deploying CI/CD for yourself/your team is understandable, using the system as a remote RDP dev platform for a thin client is understandable, but "I am using it to compile the kernel 24/7"? smh... My personal opinion is those are just the same good old cryptobros lying about what they are doing.

    Thanked by (1)root
  • Yes, compiling is a normal use case; however, if you compile for hours, get some dedicated cores.

  • You can buy more resources and set a CPU limit. No issues then if you use more RAM and fewer cores.
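    One way to apply such a limit yourself, sketched below assuming GNU make and the coreutils `nproc` tool (the `make` target is illustrative), is to cap the build's parallelism and priority:

    ```shell
    #!/bin/sh
    # Run a build with roughly half the available cores (at least one)
    # and at the lowest scheduling priority, to stay well under fair use.
    CORES=$(nproc)
    JOBS=$(( (CORES + 1) / 2 ))
    echo "building with $JOBS of $CORES core(s)"
    nice -n 19 make -j"$JOBS"
    ```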

    Thanked by (1)root
  • havoc OG, Content Writer

    Not worth the hassle imo.

    Use a build service, a dedi or home hardware.

    Thanked by (1)root
  • Some basic kernel upgrades kind of require compiling [of 3rd party drivers] as part of the process, so it's a grey area before asking the question.

    Otherwise, given there's usually a human in the process who needs sleep and coffee, I've never even stopped to consider it. You'd surely know intuitively if you were being a dick about it, as with any other 'usage'. Though as others have said, if it's long-running compiles, you'd surely be better off with a local machine.

    Thanked by (1)root
  • Acceptable? Yes.
    Normal, as in "usually that's what they're supposed to be used for"? Not really, unless we are talking about one-shot or occasional per-machine optimizations (OP is talking about people "compiling their kernel"; setups like CentminMod come to mind too).
    Normal, as in "this won't break their ToS"? Normally it won't break any ToS: you won't really spike all of your allotted cores to 100% for the whole compilation process, and the process won't last days. If you envisage routine or nightly builds for your projects, and they're going to be CPU-intensive not just for a few hours but for days, well, it's up to you to step up to at least a dedicated-CPU-core offering, if only for convenience.

    Thanked by (1)root
  • I use some VPS as gitlab CI runner and compile software on it.
    These providers allow an average cpu usage of 50 to 70%. And I'm below that.
    Sometimes I wonder what you're doing with your servers other than idling. :)

    Thanked by (1)root
  • Barring a genuine ToS exception, the provider doesn't care what I use my VPS for as long as I don't exceed fair use. I don't even worry about it.

    Thanked by (1)root
  • @skorous said:
    Barring a genuine ToS exception, the provider doesn't care what I use my VPS for as long as I don't exceed fair use. I don't even worry about it.

    That is the rub: fair use. A lot of software that you need to compile uses all of the allocated resources for extended periods, or else someone would already have a package of it.

    Thanked by (1)root

    Free Hosting at YetiNode | Cryptid Security | URL Shortener | LaunchVPS | ExtraVM | Host-C | In the Node, or Out of the Loop?

  • @AuroraZero said:

    @skorous said:
    Barring a genuine ToS exception, the provider doesn't care what I use my VPS for as long as I don't exceed fair use. I don't even worry about it.

    That is the rub: fair use. A lot of software that you need to compile uses all of the allocated resources for extended periods, or else someone would already have a package of it.

    You can always limit the build job to a single core and/or periodically send it SIGSTOP/SIGCONT signals to pause it for a bit.
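    A minimal sketch of that idea, assuming the util-linux `taskset` tool is available and using `make` as a stand-in for the real build command:

    ```shell
    #!/bin/sh
    # Pin the build to CPU 0 only and start it in the background.
    taskset -c 0 make -j1 &
    BUILD_PID=$!

    # Later: pause the build to give the host a breather, then resume it.
    kill -STOP "$BUILD_PID"
    sleep 300
    kill -CONT "$BUILD_PID"
    wait "$BUILD_PID"
    ```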

    Thanked by (1)root
  • SGraf Hosting Provider, Services Provider

    The way I see it, it always depends on what was actually ordered and what the scope of that service is.

    As an example, with MrVM and the upcoming KVM lineup: KVM will be dedicated resources, so users can compile all day long...
    The LXC series (being a replacement for the old OVZ plans) is more on the fair-use side of things.

    Thanked by (1)root

    MyRoot.PW ★ Dedicated Servers ★ LIR-Services ★ | ★ SiteTide Web-Hosting ★
    MrVM ★ Virtual Servers ★ | ★ Blesta.Store ★ Blesta licenses and Add-ons at amazing Prices ★

  • Different providers have different limits in their terms; for example, @Virmach has something like this in their terms with regards to CPU usage:

    • High CPU: Customer’s Service cannot burst to 95-100% usage for more than fifteen (15) minutes and cannot average higher than 50% usage within any two (2) hour period. Packages advertised with additional fair use restrictions cannot average higher than the advertised amount (usually 25% or 33%) within any (6) hour period. Packages originally sold at lower clock rate or processing power may be scaled accordingly. Packages advertised to include dedicated CPU may burst to 100% at all times.
    • High Load: Customer’s Service cannot have a 15-minute load average higher than the number of full logical cores assigned and cannot have a 1-day load average higher than 70% of the number of full logical cores assigned. Customer’s Service cannot contribute to more than 10% of total load at any time.
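    If you want to check where a Linux VPS stands against load limits like these, the load averages live in `/proc/loadavg` (the 70%-of-cores threshold below just mirrors the excerpt above):

    ```shell
    #!/bin/sh
    # Compare the 15-minute load average against 70% of the logical cores,
    # roughly the 1-day ceiling described in the quoted terms.
    read -r one five fifteen rest < /proc/loadavg
    cores=$(nproc)
    limit=$(awk -v c="$cores" 'BEGIN { printf "%.2f", c * 0.7 }')
    echo "15-min load: $fifteen (70% of $cores core(s): $limit)"
    ```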
  • @root said:
    Is compiling software on a VPS... normal usage?

    Yes.

    Just a couple of days ago some guy on OGF was looking for a new VPS suitable for compiling, as his current VPS isn't fast enough and he has to wait too long.

    Thanked by (1)root
  • @AuroraZero said:

    @skorous said:
    Barring a genuine ToS exception, the provider doesn't care what I use my VPS for as long as I don't exceed fair use. I don't even worry about it.

    That is the rub: fair use. A lot of software that you need to compile uses all of the allocated resources for extended periods, or else someone would already have a package of it.

    In which case, my point was that it's not the compiling that's the problem. Compiling is just an application like any other. If you're compiling some mega-application that's going to take an hour at 100% CPU, then yeah, it's likely a problem, just the same as if I ran Plex transcoding or protein folding.

    Thanked by (1)root
  • @root said:
    Different providers have different limits in their terms; for example, @Virmach has something like this in their terms with regards to CPU usage:

    • High CPU: Customer’s Service cannot burst to 95-100% usage for more than fifteen (15) minutes and cannot average higher than 50% usage within any two (2) hour period. Packages advertised with additional fair use restrictions cannot average higher than the advertised amount (usually 25% or 33%) within any (6) hour period. Packages originally sold at lower clock rate or processing power may be scaled accordingly. Packages advertised to include dedicated CPU may burst to 100% at all times.
    • High Load: Customer’s Service cannot have a 15-minute load average higher than the number of full logical cores assigned and cannot have a 1-day load average higher than 70% of the number of full logical cores assigned. Customer’s Service cannot contribute to more than 10% of total load at any time.

    I strongly suspect that a VPS from VirMach wouldn't be the best choice for compiling :)

    That said, as others have said, a more powerful VPS -- for example, a Root-Server from netcup -- should be fine for compiling. Basically, any VPS with "dedicated vCores" should be fine for this purpose.

    Thanked by (1)root

    "A single swap file or partition may be up to 128 MB in size. [...] [I]f you need 256 MB of swap, you can create two 128-MB swap partitions." (M. Welsh & L. Kaufman, Running Linux, 2e, 1996, p. 49)

  • @angstrom said:

    I strongly suspect that a VPS from VirMach wouldn't be the best choice for compiling :)

    I'm almost always an exceptionally good neighbor. I know how to limit cpu usage and actively do so. With that said, I've actually had really good luck on the occasions I've needed to compile stuff on my Virmach's. Now none of my compiles are multi-hour events either so that's probably a factor. :)

    Thanked by (3)angstrom AuroraZero root
  • @skorous said:

    @angstrom said:

    I strongly suspect that a VPS from VirMach wouldn't be the best choice for compiling :)

    I'm almost always an exceptionally good neighbor. I know how to limit cpu usage and actively do so. With that said, I've actually had really good luck on the occasions I've needed to compile stuff on my Virmach's. Now none of my compiles are multi-hour events either so that's probably a factor. :)

    It was not my intention to accuse anyone here, just to point out the obvious that was missing. I compile almost daily on my own machines because I am a Slacker, and sometimes they do take an hour or so when upgrading.

    Thanked by (1)root

    Free Hosting at YetiNode | Cryptid Security | URL Shortener | LaunchVPS | ExtraVM | Host-C | In the Node, or Out of the Loop?

  • MikeA Hosting Provider, OG

    Simple answer: yes.

    If you're compiling software or running a build server, I can't imagine any company would even notice the processor use from that, honestly. Unless you're running something pegging 100% CPU 24/7 for 5+ days and you're allocated 4+ cores, maybe some would notice and care.

    You shouldn't ever have to worry with any host, just as you wouldn't worry with big-name providers like OVH and Hetzner. Even if there's ever a concern on the host's end, they should manage it without ever needing to contact you about it.

  • @skorous said:

    @angstrom said:

    I strongly suspect that a VPS from VirMach wouldn't be the best choice for compiling :)

    I'm almost always an exceptionally good neighbor. I know how to limit cpu usage and actively do so. With that said, I've actually had really good luck on the occasions I've needed to compile stuff on my Virmach's. Now none of my compiles are multi-hour events either so that's probably a factor. :)

    I gave up my last VirMach VPS three years ago, so it would be hard for me to test their limits 🙂

    If one chooses wisely, one can use a VPS for compiling for hours at a time. For example, I used to regularly compile emacs with all of its dependencies (including gtk) on a relatively modest VPS (from a good provider). It all used to take a few hours, but happily, I was never reprimanded (but perhaps I was just lucky 🙂)

    Thanked by (2)Not_Oles root

    "A single swap file or partition may be up to 128 MB in size. [...] [I]f you need 256 MB of swap, you can create two 128-MB swap partitions." (M. Welsh & L. Kaufman, Running Linux, 2e, 1996, p. 49)

  • Not_Oles Hosting Provider, Content Writer

    Compiling NetBSD-current on a VPS has gone okay as shown by lots of detailed posts over in the LES BSD Thread.

    Depending in part on whether you need to build the tools, I guess it takes an hour or three to do the compile, and maybe I've done it two or three times in the past week or ten days? There are times during the build when it looks like 100% usage of the two cores, but for substantial periods during the build, the usage is substantially less.

    The VPS provider, @linveo, has been really helpful. He's been active in the BSD thread. He even increased my disk space when I needed a little more. The VPS is AMD Ryzen 9 7950X, so it's fast. It's in Phoenix, so close to me.

    Another option for people who want to compile open source projects might be a shell account on one of my MetalVPS E5-1650 v3 dedicated servers at Hetzner. Nowhere near as fast as Ryzen, but abundant memory and disk space. Debian sid, not BSD. I don't know if I can be as helpful and friendly as @linveo. But I can try.

    I hope everyone has fun compiling!

    Thanked by (2)root linveo

    I hope everyone gets the servers they want!

  • @angstrom said:

    I used to regularly compile emacs

    Oh @angstrom .... ( smh )

    ( giggle )

    Thanked by (3)Not_Oles root angstrom
  • @AuroraZero said: It was not my intention to accuse anyone here, just to point out the obvious that was missing. I compile almost daily on my own machines because I am a Slacker, and sometimes they do take an hour or so when upgrading.

    No, no ... didn't take it as such. Just wanted to clarify my point in case it wasn't already.

    Thanked by (1)root
  • Not_Oles Hosting Provider, Content Writer

    @AuroraZero said: I compile almost daily on my own machines because I am a Slacker, and sometimes they do take an hour or so when upgrading.

    @AuroraZero May I please ask where do you get the source code that you use for compiling Slackware? Thanks!

    Thanked by (1)root

    I hope everyone gets the servers they want!

  • root OG
    edited October 20

    @Not_Oles said:

    @AuroraZero said: I compile almost daily on my own machines because I am a Slacker, and sometimes they do take an hour or so when upgrading.

    @AuroraZero May I please ask where do you get the source code that you use for compiling Slackware? Thanks!

    I believe it is a secret machine deeply hidden within the Yeti's cave. You do not want to go in there :sweat_smile:

    However, it is always good to host stuff in the Yeti's cave, when you get the offer. Rumor has it not even the police will catch your stash of Linux ISOs.

  • Not_Oles Hosting Provider, Content Writer

    There was a guy who had all the sources for all of Slackware organized in a repo. I remember looking at it, but I can't seem to find it now. Maybe somebody has a link? Maybe there are additional places to get Slackware sources?

    Additionally, I'm curious to hear where our beloved Yeti gets the sources on which he runs his cave-based compiler?

    Thanks!

    I hope everyone gets the servers they want!
