@vimalware said:
Next step: unmetered private networks?
And here I was about to ask what you guys wanted to see next from MaxKVM. Internal private networking is definitely on the roadmap!
We appreciate any and all feedback. What features/promos would you like to see next?
Yo. Hmu if you figure out a way to do this with Virtualizor.
I have the whole thing set up at the network level, but VPSes won't talk to each other over local IP if they are on different hardware.
We use SolusVM, my brother. It sounds like you might need a VLAN to get those internal IPs to communicate across nodes in the same DC, but I'm not too sure of your network setup.
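For reference, a minimal sketch of the Linux side of that VLAN approach, assuming each node has a spare NIC trunked to the same switch (eth1, br-private, and VLAN ID 100 are all made-up names here, not our actual setup):

# On every node: create a tagged sub-interface on the trunk NIC
ip link add link eth1 name eth1.100 type vlan id 100
ip link set eth1.100 up
# Attach it to the bridge that the VMs' private NICs also plug into,
# so VMs on different nodes land in the same L2 segment
ip link add br-private type bridge
ip link set eth1.100 master br-private
ip link set br-private up

Each VM then just needs a second interface on br-private with an RFC 1918 address.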
Node-to-node communication works. Virtualizor doesn't allow VMs to communicate across different nodes over local IP, and I have zero intention of manually fiddling with their configs.
SolusVM supports internal networking across nodes? Or is it part of SolusIO?
Pliss untag me.
Never.
That's a good question. The only documentation available details how to enable private/internal networking on individual nodes, with nothing on private network communication between separate nodes. The assumption was that getting this to function as expected (one private network that VMs on separate nodes in each DC can communicate over) would take some custom network configuration... and many migraines...
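For what it's worth, the usual shape of that custom configuration would be a VXLAN overlay between hypervisors, so VMs share one L2 segment regardless of which node they sit on. A rough sketch run on node A, with node B mirroring it (all IDs and addresses are made up):

# VXLAN tunnel carried over the nodes' existing private/management IPs
ip link add vxlan100 type vxlan id 100 dstport 4789 \
    local 10.0.0.1 remote 10.0.0.2 dev eth0
ip link set vxlan100 master br-private   # same bridge the VM NICs use
ip link set vxlan100 up
# With more than two nodes, swap the unicast 'remote' for a multicast
# 'group 239.1.1.1', or use an EVPN control plane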
@seriesn said:
Many migraines for me when I attempted to hack out a custom solution last year. End result: without building your own panel, it won't be worth the effort.
Of course it would be worth the effort! There might even be more than one or two people who would make use of this feature...
We actually do have quite a few customers with multiple VMs, and we have had a couple of requests for internal private networking. One other feature that has been requested a couple of times is an external/cloud firewall, which will probably involve many, many more migraines. But that might be pretty awesome to get working.
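No promises on how we'd actually build it, but conceptually an edge firewall is just a filtering hop upstream of the VM. A toy nftables sketch of a per-customer ruleset (the address is a placeholder, not our design):

# Allow only SSH and HTTPS to the customer VM at 203.0.113.10
nft add table inet edgefw
nft add chain inet edgefw fwd '{ type filter hook forward priority 0 ; policy accept ; }'
nft add rule inet edgefw fwd ip daddr 203.0.113.10 tcp dport '{ 22, 443 }' accept
nft add rule inet edgefw fwd ip daddr 203.0.113.10 drop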
Edge firewall would definitely be an awesome add-on.
+1 private network
@MaxKVM said: We just received notice that there was a re-route completed for Indonesia
I don't remember there being an issue with ID routing? The test has always been good:
tracert lg.sin.maxkvm.net
Tracing route to lg.sin.maxkvm.net [2402:9e80:8:1::100:1]
over a maximum of 30 hops:
1 5 ms 3 ms 3 ms 2404:8000:1004:ccf:da07:b6ff:fe83:7bd4
2 6 ms 6 ms 5 ms 2404:8000:1:1110::7d
3 7 ms 7 ms 7 ms 2404:8000:1:7451:99::66
4 10 ms 6 ms 9 ms 2404:8000:1745:1::50e
5 18 ms 20 ms 21 ms 2404:8000:1745:1::57a
6 32 ms 17 ms 17 ms as22822.singapore.megaport.com [2001:ded::10]
7 20 ms 17 ms 17 ms ve5.fr4.sin.ipv6.llnw.net [2402:6800:730:1001::2]
8 31 ms 52 ms 43 ms 2604:8380:3400:0:107:155:95:253
9 17 ms 17 ms 17 ms lg.sin.maxkvm.net [2402:9e80:8:1::100:1]
Trace complete.
tracert -4 lg.sin.maxkvm.net
Tracing route to lg.sin.maxkvm.net [107.155.95.100]
over a maximum of 30 hops:
1 3 ms 2 ms 2 ms 192.168.10.1
2 8 ms 5 ms 5 ms 10.156.0.1
3 9 ms 7 ms 6 ms id-jkt-mid-8.biznetnetworks.com [182.253.99.33]
4 9 ms 12 ms 11 ms id-jkt-mid-1.biznetnetworks.com [182.253.99.105]
5 21 ms 20 ms 20 ms jkt-gs-7.biznetnetworks.com [182.253.187.26]
6 18 ms 17 ms 17 ms as22822.singapore.megaport.com [103.41.12.16]
7 36 ms 38 ms 29 ms 117.121.248.217
8 17 ms 16 ms 17 ms lg.sin.maxkvm.net [107.155.95.100]
Trace complete.
You're lucky. On the way to my SG MaxKVM service my connection flies through Singapore, then out to Stockholm and France, it seems.
And from my SG MaxKVM service I get a detour via India. Definitely racking up the frequent flyer miles.
Plotting pings on a graph results in this:
I wish my internet provider had IPv6...
Oh lord...
That's not good. Is IPv4 taking a worldwide trip to Stockholm and France, or IPv6? Or both?
Did a bit more testing; it looks like v6 routes properly and sits around 100ms ping (which is what I expected), so the problem has been narrowed down to v4 only.
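For anyone who wants to reproduce the comparison, a quick sketch from any Linux box (the looking-glass hostname is the one from the traces above):

# Summary RTT stats over each protocol, 100 pings apiece
for v in -4 -6; do
  echo "== ping $v =="
  ping "$v" -c 100 -q lg.sin.maxkvm.net | tail -2
done
# Or dump per-ping RTTs to a file for graphing
ping -4 -c 600 lg.sin.maxkvm.net | awk -F'time=' '/time=/{print $2+0}' > rtt.txt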
What we really want to see is an mtr/tracert of this scenic route
singing "We're all going on a ... summer holiday"
It seems IPv6 is taking a break from vacations and IPv4 is taking its place!
Actually surprised to hear proper latency for IPv6, since there was no mention of optimization and mtr.sh shows IPv6 still travelling to the US from Anexia Sydney.
Edit: I take that back. We have a direct route with IPv6 from Anexia Sydney to MaxKVM SG:
Or Duck à l'orange instead of Chinese chicken.
@MaxKVM's choice...
Fragrant... like French parfum.
You heard it here, folks. Hostloc is completely wrong and we are unquestionably fragrant.
What it looks like to me is that STO is a faulty rDNS; that router seems to be in Frankfurt.
What's really happening here is that @bdl's ISP seems to buy transit from RETN. RETN peers with LLNW, so they scream 'we have the fastest route, it's almost direct', whereas with other carriers there'd be a transit provider in the middle.
The only problem is that RETN and LLNW peer with each other in Frankfurt, not Singapore.
I'd think this is something that has to be changed on your ISP's end. There's only so much you can do on the other end.
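Side note: an easy way to sanity-check a suspicious rDNS label like that is plain physics. Light in fibre needs roughly 1 ms of round-trip time per 100 km of distance, so a hop's best-case RTT caps how far away it can physically be. Something like this against the hop's address (203.0.113.1 is a placeholder):

# Fibre RTT floor is roughly distance_km / 100 ms. From Amsterdam,
# ~4 ms fits Frankfurt (~370 km), while Stockholm (~1100 km) would
# need at least ~11 ms
ping -c 10 -q 203.0.113.1 | tail -1   # substitute the hop's real IP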
My traceroute [v0.93]
Serv.tub.co (2402:9e) 2020-08-27T18:17:21+0000
Keys: Help Display mode Restart statistics Order of fields quit Packets Pings
Host Loss% Snt Last Avg Best Wrst StDev
1. 2402:9e8 0.0% 107 2.0 7.5 0.4 81.3 10.9
2. 2604:8380:3400:0:107:155 0.0% 107 0.2 0.3 0.2 3.9 0.4
3. limelight.singapore2.sin 0.0% 107 0.3 0.7 0.3 26.5 2.6
4. 2403:e800:e804:9::1 58.9% 107 77.0 75.8 75.5 78.8 0.7
5. 2403:e800:508:100::e6 0.0% 107 135.9 128.6 107.5 159.3 8.8
6. 2400:5200:1800:69::2 0.0% 107 135.8 132.5 112.1 145.4 7.7
7. 2400:5200:1800:69::1 36.8% 107 144.2 140.2 118.7 149.1 6.7
8. fd00:abcd:abcd:128::1 0.0% 106 139.1 132.7 112.9 144.2 7.5
9. 2402:3a80:1831:fd04:0:4: 0.0% 106 140.0 132.7 113.2 144.2 7.8
10. 2402:3a80:1831:fd04:7904 3.8% 106 197.3 188.5 146.1 237.4 14.5
@deepak_leb, formatting the MTR result with <pre> would be great.
Is someone able to check and confirm if the IP range is detected as Singapore?
What do you mean by that, sir? The geoIP location?
Thank you @debaser. That does correlate with what I'm experiencing with other sites that use other ISPs/connectivity: they have no problem achieving 100ms by taking other routes with providers that peer much closer to Singapore.
It seems there is still an opportunity to optimise the route in the opposite direction, from my SG MaxKVM service. The mtr doesn't show much, however that fourth hop is where it seems to go via India... (Yes, quite a few packets have been sent; I accidentally left this going all night.)
Indeed good sir.
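On the geoIP question: anyone can check the range themselves. whois shows the registry allocation, which is authoritative; the HTTP lookup below hits a third-party geolocation database, which is best-effort and varies by provider:

# Registry record for the range (107.155.95.100 = lg.sin.maxkvm.net)
whois 107.155.95.100 | grep -iE 'country|netname'
# Best-effort geolocation via a public service
curl -s https://ipinfo.io/107.155.95.100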
The problem is that it boils down to the interconnect between a carrier of your ISP and a carrier of Hivelocity. The most drastic improvement would come from a manual change at your ISP. They could (for instance) route it through NTT. But access providers are usually not open to this kind of shenanigans.
It's not India though. It's (dramatic sound) Los Angeles. So yeah, Hivelocity could make some changes there (not that it would do too much about the high latency).
Hivelocity is apparently trying to save some money on bandwidth costs in order to deliver a server with the same specs for the same price as in Europe and the US. That's understandable, but it gives these kinds of problems due to the Tier 2 and Tier 3 networks they're using for upstream.
Seeing the routes in this thread makes me very happy to live in Western Europe. Almost everything just goes from point A to point B here. And when it doesn't, the routes go (for instance) not directly from Amsterdam to New York, but from Amsterdam to Paris to Washington DC to New York. Or routes to Scandinavia travel through Frankfurt. Those kinds of detours add extra latency in the low double digits of milliseconds: hundreds of kilometres, not thousands.
You wouldn't happen to have a spare room I could borrow?
I do, but that's the room without internet, so...
I had an Aussie colleague who moved back to Australia after some years in the Netherlands. I asked him what he was going to miss most (apart from us, of course) and he answered: 'affordable internet and good speeds, no matter what'. So yeah.