Do you use cloud/virtual servers or dedicated ones?
I recently switched from cloud servers to dedicated ones (Hetzner) and I love the better performance, but I do miss some of the flexibility I had with cloud servers: hourly billing, instant creation/deletion, easily expandable storage up to 10TB per volume, and software support for Kubernetes for both block storage and load balancers. Some maintenance operations in K8s are more complicated with dedicated servers, and adding more storage means adding at most two extra drives of up to 1TB each if I want fast NVMe. Not too bad, but not as flexible.
What do you use/prefer and why?
Lead Platform Architect at the day job, Ethical Hacker/Bug Bounty Hunter on the side
Comments
I install Plesk directly on the dedicated server, without virtualization.
Amadex • Hosting Forums • Wie ist meine IP-Adresse? • AS215325
Forum for System Administrators: sysadminforum.com
Yeah, I set up Kubernetes directly too, without virtualization, so performance is maximal.
Lead Platform Architect at the day job, Ethical Hacker/Bug Bounty Hunter on the side
Proxmox isn't resource intensive.
In fact, a hypervisor on a dedicated server has many, many benefits that outweigh the few CPU cycles gained by going without one.
I can only imagine needing a hard reboot when some object is misconfigured or some buggy finalizer didn't unmount/release, etc.
I only install k8s in a virtualized environment.
BTW, what flavour of k8s are you using, e.g. k0s, k3s, kubeadm, etc.?
My private resources: virtual, because I have yet to have a use case that would require something as beefy as a dedicated server...
Resources we use at work: cloud, because even though we need a lot of resources (so technically renting a dedi would have been cheaper, and regular VPSes would only help a bit; in fact, one of our environments is a de facto private cloud), using the (public) cloud makes life much easier, since having to manage everything yourself is a major PITA (for example, setting up Gluster for k8s PVs).
Having said this, the public cloud costs a lot of money, so as far as my hobby projects are concerned, I don't think I'll ever use it :P (excl. the free tier of Oracle Cloud ;P)
By the way, there are some hourly dedicated servers too (admittedly, not at pricing comparable to Hetzner's AX series). Two low-end examples:
Contribute your idling VPS/dedi (link), Android (link) or iOS (link) devices to medical research
Uhm.... interesting. I will look into it. Maybe I can temporarily move my stuff to Hetzner's cloud servers while I set up the dedicated ones. Thanks for the input!
Lead Platform Architect at the day job, Ethical Hacker/Bug Bounty Hunter on the side
This is the first time I've heard of hourly-billed dedis. I didn't know they existed.
Lead Platform Architect at the day job, Ethical Hacker/Bug Bounty Hunter on the side
FYI @vitobotta, I live in Tampere, so if you're around and want to have a coffee, ping me.
Yeah, what I miss the most is the node's ability to recover when you use something like an autoscaling group on AWS.
But maybe as long as you don't need to "hyperscale" your cluster (you know, a node autoscaler adding many extra nodes at peak and scaling down at low traffic), a dedicated server fleet with plenty of spare resources is enough? Just use MetalLB with an ingress, point DNS at all of your nodes with a DNS health check, and you're good to go.
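To illustrate the DNS part, here's a minimal, stdlib-only Python sketch of the health check idea: poll each node's ingress and keep only the healthy IPs in the round-robin record. The node IPs and the /healthz path are made-up examples; in practice you'd use your DNS provider's built-in health checks.

```python
#!/usr/bin/env python3
"""Sketch: decide which node IPs belong in a round-robin DNS record."""
import urllib.error
import urllib.request

NODE_IPS = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]  # example node IPs
HEALTH_PATH = "/healthz"  # hypothetical ingress health endpoint

def is_healthy(ip: str, timeout: float = 2.0) -> bool:
    """Return True if the node's ingress answers the health check with 200."""
    try:
        with urllib.request.urlopen(f"http://{ip}{HEALTH_PATH}", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

healthy = [ip for ip in NODE_IPS if is_healthy(ip)]
print("A records to keep:", healthy)  # feed this into your DNS provider's API
```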
I use exclusively virtual servers.
Each virtual server costs $20/year or less, while every dedicated server would cost $20/month or more.
I do have workloads that can fill a dedicated server.
For example, I have high-speed networking programs that can transmit at over 100 Gbps.
However, these do not speak TCP/IP, and require at least a VLAN and preferably a dedicated fiber.
(I hear @wdmg has a 100 Gbps line in their data center, but I don't dare to ask for prices; I'm sure "if you have to ask, you can't afford it".)
Therefore, I just get two servers on a rack locally, and let them talk to each other…
Accepting submissions for IPv6 less than /64 Hall of Incompetence.
I use both. Which one I get and use depends on the purpose.
“Technology is best when it brings people together.” – Matt Mullenweg
How did you know I live in Finland? I live in Espoo btw
Lead Platform Architect at the day job, Ethical Hacker/Bug Bounty Hunter on the side
So far I'm not having problems, really; it's just that I know I'll have more maintenance than before when upgrading K8s, the OS, or Longhorn for storage. Storage may also be a problem, because I'll need downtime to add disks and there's a limit to how much storage I can have. Other than that, I'm loving the dedis!
Lead Platform Architect at the day job, Ethical Hacker/Bug Bounty Hunter on the side
Which virtual servers do you pay just $20/year or less for? What specs and which provider? I've never heard of prices like that before.
Lead Platform Architect at the day job, Ethical Hacker/Bug Bounty Hunter on the side
You mentioned it in some new thread you opened.
Ah, I thought I had added it to my profile, but I didn't remember.
Lead Platform Architect at the day job, Ethical Hacker/Bug Bounty Hunter on the side
Wait for Black Friday and similar sales.
Accepting submissions for IPv6 less than /64 Hall of Incompetence.
Hi @yoursunny! Would you please share a little about the architecture and operating systems of the "two servers on a rack locally"? Thanks and best wishes from Mexico!
I hope everyone gets the servers they want!
Server specs are described in this paper, Section 7:
https://www.nist.gov/publications/ndn-dpdk-ndn-forwarding-100-gbps-commodity-hardware
YABS here:
https://talk.lowendspirit.com/discussion/comment/45943/#Comment_45943
Mellanox ConnectX-5 has PCIe 3.0, 16 lanes.
Xeon Scalable has 48 lanes per processor, but some are reserved by NVMe, etc.
To support three Ethernet adapters at full speed, the server must have dual processors.
Having two NUMA sockets increases memory access latency when packet ingress and egress are on different NUMA sockets.
My wishlist for a future order would be a single 64-core EPYC processor and a Mellanox ConnectX-6 200 Gbps Ethernet adapter.
EPYC has 128 PCIe lanes, and they're PCIe 4.0.
I can potentially install six Ethernet adapters on one processor, without dealing with memory latency caused by NUMA sockets.
However, I haven't found a motherboard for single EPYC that has six PCIe 4.0 slots; the most I've found is four slots.
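For anyone curious, a back-of-the-envelope lane budget in Python (the per-lane throughput figures are approximate effective rates after encoding overhead, not numbers from the paper):

```python
# Rough PCIe lane/bandwidth budget for the setups described above.
# Approximate usable throughput per lane, after encoding overhead:
#   PCIe 3.0: ~7.9 Gbit/s per lane; PCIe 4.0: ~15.8 Gbit/s per lane.
GBPS_PER_LANE = {"3.0": 7.9, "4.0": 15.8}

def slot_gbps(gen: str, lanes: int = 16) -> float:
    """Usable bandwidth of one x16 slot, in Gbit/s."""
    return GBPS_PER_LANE[gen] * lanes

# ConnectX-5 in a PCIe 3.0 x16 slot: ~126 Gbit/s, enough for one 100G port.
print(f"gen3 x16 slot: {slot_gbps('3.0'):.0f} Gbit/s")

# Three x16 adapters need 3 * 16 = 48 lanes -- all 48 lanes of one Xeon
# Scalable socket, before NVMe etc. take their share, hence dual sockets
# (and the NUMA latency penalty when ingress and egress cross sockets).
print("lanes for 3 adapters:", 3 * 16, "of 48 per Xeon socket")

# A single EPYC offers 128 PCIe 4.0 lanes: six x16 adapters (96 lanes)
# fit on one socket, avoiding cross-NUMA traffic entirely.
print("lanes for 6 adapters:", 6 * 16, "of 128 per EPYC socket")
```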
Accepting submissions for IPv6 less than /64 Hall of Incompetence.
Using a mix (though generally LES shared VPSes, not dedis).
I don't think the two are even in competition. The value proposition of something like a big LES VPS is that it's affordable to run it 365 days a year. The value proposition of bigcloud (GCP etc.) is that it's an ecosystem, not a server.
If you just need a server and go for a cloud provider, then you're a moron imo... you're paying for access to all their shiny toys in the ecosystem but not leveraging them.
Lately I've been more intrigued by bigcloud, though. Some of their free-tier offerings are ridiculous... and the harder it is for the layman to pick up, the more generous the offering. I.e., if you can architect your app to fit their paradigm, you can get quite far.
For convenient reference, the following is a brief excerpt from the cited paper:
Thanks to @yoursunny for explaining and for terrific work!
I hope everyone gets the servers they want!