What do you guys like for your /etc/resolv.conf?

Not_Oles (Hosting Provider, Content Writer)

D-Existential and N-phenomenological S-investigations :)

Who should Darkstar use to resolve DNS requests? Why!?

Eeeks! We can get IPv6 addresses via DNS requests sent over IPv4, but don't we need an IPv6 address or two in our /etc/resolv.conf?
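
For what it's worth, a minimal dual-stack sketch: keep the current Cloudflare pair and spend the third slot (glibc's MAXNS is 3, per the reference below) on Cloudflare's published IPv6 resolver.

options single-request
search metalvps.com
nameserver 1.1.1.1
nameserver 1.0.0.1
# Cloudflare's IPv6 resolver; MAXNS = 3, so this fills the last slot
nameserver 2606:4700:4700::1111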

If the DNS system experienced "trouble," would we be better off with a longer list of resolvers than two each of IPv4 and IPv6? Back in the old days I remember seeing a long list which included, among other things, all of the DNS root servers. Perhaps sending DNS requests to the root servers might not work nowadays.

D-Mumbles, N-rumbles, S-tumbles

tom@darkstar:~$ cat /etc/resolv.conf
options single-request
search metalvps.com
nameserver 1.1.1.1
nameserver 1.0.0.1

tom@darkstar:~$ 
## Perhaps 20 times as long to ping 1.1.1.1 as 8.8.8.8? Really!!??

tom@darkstar:~$ date; time ping -c 2 1.1.1.1
Sat Dec 18 21:17:41 UTC 2021
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=21.2 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=21.2 ms

--- 1.1.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 21.221/21.226/21.231/0.005 ms

real    0m1.030s
user    0m0.004s
sys     0m0.001s
tom@darkstar:~$ date; time ping -c 2 8.8.8.8
Sat Dec 18 21:17:55 UTC 2021
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=115 time=0.952 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=115 time=0.872 ms

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.872/0.912/0.952/0.040 ms

real    0m1.008s
user    0m0.003s
sys     0m0.001s
tom@darkstar:~$ 

References:

"Up to MAXNS (currently 3, see <resolv.h>) name servers may be listed, one per keyword. "
-- https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html

"DNS NSS improvement" "Add “single-request” to the options in /etc/resolv.conf. "
-- http://udrepper.livejournal.com/20948.html

I hope everyone gets the servers they want!

Thanked by (2)uptime dahartigan

Comments

  • Typically, I configure two resolvers.

    • one IPv4 and one IPv6, unless the server is single stack and the other protocol is tunneled
    • different providers, typically Google and Cloudflare; in case one of these has high latency, Quad9 or HE or Neustar may be substituted
    Thanked by (1)Not_Oles

    ServerFactory aff best VPS; HostBrr aff best storage.

  • I use 1.1.1.1 and 8.8.8.8 plus Google's ipv6 resolver.

    No other fancy stuff he he

    Thanked by (1)Not_Oles

    Get the best deal on your next VPS or Shared/Reseller hosting from RacknerdTracker.com - The original aff garden.

  • I use quad9 as well.

    Google DNS might be in the same DC as your machine, with Cloudflare somewhere farther away; hence the difference between them.

    Thanked by (1)Not_Oles

    The all seeing eye sees everything...

  • cybertech (OG, Benchmark King)

    @dahartigan said:
    I use 1.1.1.1 and 8.8.8.8 plus Google's ipv6 resolver.

    No other fancy stuff he he

    same here

    Thanked by (2)Not_Oles dahartigan

    I bench YABS 24/7/365 unless it's a leap year.

  • @dahartigan said:
    I use 1.1.1.1 and 8.8.8.8 plus Google's ipv6 resolver.

    No other fancy stuff he he

    You bring shame to the MJJ family.
    No Ali/Tencent/Baidu resolvers.

    Thanked by (1)dahartigan
  • @dahartigan said:
    I use 1.1.1.1 and 8.8.8.8 plus Google's ipv6 resolver.

    No other fancy stuff he he

    dns.he.net? :p

    Thanked by (2)Not_Oles dahartigan
  • Not_Oles (Hosting Provider, Content Writer)

    @bdl said:

    @dahartigan said:
    I use 1.1.1.1 and 8.8.8.8 plus Google's ipv6 resolver.

    No other fancy stuff he he

    dns.he.net? :p

    This one I have to quote in full because of what he he he HE @bdl said. Hehe! :)

    Thanked by (2)dahartigan bdl

    I hope everyone gets the servers they want!

  • how
    edited December 2021
    nameserver 8.8.8.8
    nameserver 8.8.4.4
    

    And disable IPv6
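
    (If "disable IPv6" means system-wide, one common sketch is via sysctl; persistence mechanics vary by distro:)

    # turn off IPv6 on all interfaces until the next reboot
    sysctl -w net.ipv6.conf.all.disable_ipv6=1
    sysctl -w net.ipv6.conf.default.disable_ipv6=1
    # add the same keys to /etc/sysctl.conf (or /etc/sysctl.d/) to persist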

    Thanked by (1)Not_Oles
  • johnk (Hosting Provider)

    If the DNS system experienced "trouble," would we be better off with a longer list of resolvers than two each of IPv4 and IPv6?

    Not really. Typically two, maybe three "large" providers are more than enough. Google seems fastest in your case, so I'd use that first, with Cloudflare/Quad9 or Cisco (OpenDNS) as backup.

    Perhaps sending DNS requests to the root servers might not work nowadays.

    Not a good idea.

    Thanked by (1)Not_Oles
  • 1.1.1.1
    9.9.9.9

  • bruh21 (Hosting Provider)

    1.1.1.1 is good enough

  • If we are talking about servers (virtual or not), my resolv.conf usually includes just a
    nameserver 127.0.0.1
    which leads to a dnsdist service.
    dnsdist can be configured to fetch results from your poison of choice (1.1.1.1, 9.9.9.9, 8.8.8.8, or whatever) or from your local resolver; in the latter case, if the resolver lives on the same box, your local named-chroot (or whatever) should listen on a different privileged port via the listen-on port directive, and preferably not as an open resolver.
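
    (A minimal sketch of that setup in dnsdist's Lua config; the addresses are just the examples above, not a recommendation:)

    -- /etc/dnsdist/dnsdist.conf
    setLocal("127.0.0.1:53")                  -- answer only on loopback
    setACL({"127.0.0.0/8", "::1/128"})        -- and only for local clients
    newServer({address="9.9.9.9", name="quad9"})
    newServer({address="1.1.1.1", name="cloudflare"})
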
    For a while I played with alternative roots (it's quite sad to see that the ORSN domain has been squatted by someone else); some boxes still rely on OpenNIC Tier 1 servers for their main resolver.
    On my to-do list there's Handshake DNS (maybe during the holidays I'll give it a more serious look); the famous public resolver nextdns.io is already "Handshake domain"-compatible.

    Thanked by (2)bibble Not_Oles
  • Usually I do not change DNS myself.
    However, some of my VPSes use Cloudflare DNS, which has failed occasionally, so I changed them to 8.8.8.8.

    Thanked by (1)Not_Oles
  • 8.8.8.8 and 9.9.9.9

    Thanked by (1)Not_Oles
  • I use DNSBench from GRC to test which DNS servers are fastest on a given internet connection, and use those. https://www.grc.com/dns/benchmark.htm

    (Runs both on Windows and in WINE)

    I also take into consideration the privacy policies of the DNS servers I will be using.

    My personal go-to favorite is Hurricane Electric's (74.82.42.42).

    Thanked by (2)Not_Oles uptime

    Cheap dedis are my drug, and I'm too far gone to turn back.

  • Not_Oles (Hosting Provider, Content Writer)
    edited December 2021

    @CamoYoshi said:
    I use DNSBench from GRC to test which DNS servers are fastest on a given internet connection, and use those. https://www.grc.com/dns/benchmark.htm

    (Runs both on Windows and in WINE)

    I also put into consideration the privacy policy for the DNS servers I will be using, as well.

    My personal go-to favorite is Hurricane Electric's (74.82.42.42).

    Hi @CamoYoshi ! Nice to see you! Just for fun:

    tom@darkstar:~$ ping -c 2 74.82.42.42
    PING 74.82.42.42 (74.82.42.42) 56(84) bytes of data.
    64 bytes from 74.82.42.42: icmp_seq=1 ttl=56 time=0.723 ms
    64 bytes from 74.82.42.42: icmp_seq=2 ttl=56 time=0.735 ms
    
    --- 74.82.42.42 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1018ms
    rtt min/avg/max/mdev = 0.723/0.729/0.735/0.006 ms
    tom@darkstar:~$ ping -c 2 1.1.1.1
    PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
    64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=21.1 ms
    64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=21.5 ms
    
    --- 1.1.1.1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1001ms
    rtt min/avg/max/mdev = 21.094/21.306/21.518/0.212 ms
    tom@darkstar:~$ ping -c 2 8.8.8.8
    PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
    64 bytes from 8.8.8.8: icmp_seq=1 ttl=115 time=37.6 ms
    64 bytes from 8.8.8.8: icmp_seq=2 ttl=115 time=37.2 ms
    
    --- 8.8.8.8 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1002ms
    rtt min/avg/max/mdev = 37.200/37.392/37.584/0.192 ms
    tom@darkstar:~$ 
    

    Friendly greetings from NYC and Sonora, MX! 🗽🇺🇸🇲🇽🏜️

    Thanked by (1)CamoYoshi

    I hope everyone gets the servers they want!

  • @Not_Oles said:

    Perhaps 20 times as long to ping 1.1.1.1 as 8.8.8.8? Really!!??

    Google likely colocates some servers in the same data center.

    In reality, you're really not going to notice the difference between 1 ms and 21 ms. The difference is only slightly longer than the time taken to render a single frame at 60 fps (~16.67 ms). Generally humans don't notice anything under ~40 ms. Plus, any commonly accessed hosts will be in the local cache anyway.
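
    (Tangent: ICMP RTT and DNS response time aren't quite the same thing; dig reports the latter directly. A quick sketch, assuming dig from dnsutils/bind-utils is installed:)

    # compare actual resolver response times rather than ping RTT
    for ns in 1.1.1.1 8.8.8.8 9.9.9.9; do
        echo -n "$ns "
        dig @$ns example.com +noall +stats | grep 'Query time'
    done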

    Thanked by (1)Not_Oles
  • edited December 2021

    For the first nameserver in /etc/resolv.conf, I typically use the provider's DNS if they have one (most do not, these days). Then I ping each of the following and list them in order of quickest to slowest, up to three (a rough script for this ordering is sketched after the list).

    Google Public DNS - 8.8.8.8, or 2001:4860:4860::8888 with IPv6 support
    Cloudflare - 1.1.1.1
    HE - 74.82.42.42, or 2001:470:20::2

    On rare occasions, I've had to use another from this list https://gist.github.com/mutin-sa/5dcbd35ee436eb629db7872581093bc5
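
    (A rough sketch of automating that quickest-to-slowest ordering, assuming standard iputils ping:)

    # print average RTT per candidate resolver, fastest first
    for ns in 8.8.8.8 1.1.1.1 74.82.42.42; do
        echo "$(ping -qc 3 $ns | awk -F/ '/^rtt/ {print $5}') $ns"
    done | sort -n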

    Thanked by (3)Not_Oles uptime yoursunny
  • I usually add options rotate so that queries are load-balanced across the listed nameservers; a sketch follows below.
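
    (For example, with resolvers mentioned up-thread:)

    options rotate
    nameserver 1.1.1.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9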

    Thanked by (2)Not_Oles skorous

    "A single swap file or partition may be up to 128 MB in size. [...] [I]f you need 256 MB of swap, you can create two 128-MB swap partitions." (M. Welsh & L. Kaufman, Running Linux, 2e, 1996, p. 49)

  • 8.8.8.8
    Provider's DNS or (if provider's DNS is not available) 1.1.1.1
    2001:4860:4860::8888

    Thanked by (1)Not_Oles

    Recommend: MyRoot.PW|BuyVM|Inception Hosting|Prometeus

  • FreeBSD (OG)
    edited December 2021

    Provider HDNS: 103.196.38.38, 103.196.38.39

    Thanked by (2)Not_Oles uptime
  • edited December 2021

    DNS resolvers are overrated. :relieved:
    I use pure non-GMO p2p internet.

    PS: Via telepathic layer 2.

  • I've recently been stumbling around @Not_Oles's Darkstar server in a semi-demented haze, probulating the relative responsiveness of 1.1.1.1 vs 8.8.8.8 vs 74.82.42.42 ...

    (Trying to figure out the source of occasional 5-second delays in DNS lookups, under the possibly dubious assumption that it may relate to how IPv6 lookups are handled, as per https://udrepper.livejournal.com/20948.html - or could it be something else ...?)

    Whatever's going on is not at all clear to me (nor would it be; I'm a noob when it comes to networks) - but so far it seems like the HE DNS is doing a lot better than 1.1.1.1 and 8.8.8.8 - at least when testing on the Darkstar server (levelone network in Dallas, routing through Path DDoS filtering and then Hivelocity).

    uptime@darkstar:~/notes$ for x in $f $g $h; do echo $x; n=$(grep -- -- $x | wc -l); m=$(grep -- '--.*0:' $x | wc -l); echo $n samples / $m delays; grep -- '--.*0:' $x; done
    /home/uptime/notes/20211223_shock_http_1.1.1.1_dns
    72 samples / 10 delays
    100   612  100   612    0     0    121      0  0:00:05  0:00:05 --:--:--   152
    100   612  100   612    0     0    121      0  0:00:05  0:00:05 --:--:--   175
    100   612  100   612    0     0    119      0  0:00:05  0:00:05 --:--:--   152
    100   612  100   612    0     0    121      0  0:00:05  0:00:05 --:--:--   152
    100   612  100   612    0     0    121      0  0:00:05  0:00:05 --:--:--   154
    100   612  100   612    0     0    117      0  0:00:05  0:00:05 --:--:--   146
    100   612  100   612    0     0    121      0  0:00:05  0:00:05 --:--:--   152
    100   612  100   612    0     0    121      0  0:00:05  0:00:05 --:--:--   152
    100   612  100   612    0     0    121      0  0:00:05  0:00:05 --:--:--   152
    100   612  100   612    0     0    121      0  0:00:05  0:00:05 --:--:--   154
    /home/uptime/notes/20211223_shock_http_8.8.8.8_dns
    69 samples / 7 delays
    100   612  100   612    0     0    121      0  0:00:05  0:00:05 --:--:--   155
    100   612  100   612    0     0    119      0  0:00:05  0:00:05 --:--:--   152
    100   612  100   612    0     0    549      0  0:00:01  0:00:01 --:--:--   550
    100   612  100   612    0     0    559      0  0:00:01  0:00:01 --:--:--   560
    100   612  100   612    0     0    582      0  0:00:01  0:00:01 --:--:--   582
    100   612  100   612    0     0    585      0  0:00:01  0:00:01 --:--:--   585
    100   612  100   612    0     0    584      0  0:00:01  0:00:01 --:--:--   585
    /home/uptime/notes/20211223_shock_http_74.82.42.42_dns
    64 samples / 0 delays
    

    some more notes here, doing similar tests from various providers in Dallas https://wiki.metalvps.com/chat/20211223/?updated

    no idea what's going on, so it's kind of "interesting" .... Hoping to eventually understand some of this a bit better!

    Thanked by (1)Not_Oles

    HS4LIFE (+ (* 3 4) (* 5 6))

  • Not_Oles (Hosting Provider, Content Writer)
    edited December 2021

    Hi @uptime! Is it unambiguous in the above results whether the http transfer being timed took place over IPv4 or IPv6?

    Even though the DNS queries apparently took place only over IPv4, the results presumably included both IPv4 and IPv6 addresses. I believe both ends of the http transfer are IPv4 and IPv6 enabled.

    How might we know whether the specific http transfer delay being logged was due to DNS delay, or instead to some delay related to the sender's or receiver's use of IPv4 or IPv6, or to switching between the two? Maybe something like what Drepper was talking about on the page you linked?
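
    (One possible way to separate DNS time from transfer time: curl's --write-out timing variables. A sketch against the test target from up-thread; --dns-servers works here since this curl build already accepts it:)

    # time_namelookup isolates the DNS portion of the transfer
    curl -4 -s -o /dev/null \
         -w 'dns: %{time_namelookup}s connect: %{time_connect}s total: %{time_total}s\n' \
         --dns-servers 1.1.1.1 http://shock.metalvps.com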

    Hope this is helpful! Friendly greetings from Sonora! 🏜️

    I hope everyone gets the servers they want!

  • @Not_Oles said: Is it unambiguous in the above results whether the http transfer being timed took place over IPv4 or IPv6?

    I think so, if the --ipv4 parameter to curl does what it says on the tin:

    uptime@darkstar:~/notes$ man curl | grep -A 7 -- --ipv4
           -4, --ipv4
                  This  option tells curl to resolve names to IPv4 addresses only,
                  and not for example try IPv6.
    
                  Example:
                   curl --ipv4 https://example.com
    
                  See also  --http1.1  and  --http2.  This  option  overrides  -6,
                  --ipv6.
    [...]
    uptime@darkstar:~/notes$ typeset -f jankycurl
    jankycurl () 
    { 
        ip=${1:-74.82.42.42};
        date;
        curl -4 --dns-servers ${ip} http://shock.metalvps.com 2>&1 > /dev/null;
        date
    }
    uptime@darkstar:~/notes$ jobs
    [1]   Running                 while :; do
        jankycurl 1.1.1.1; zzz;
    done > $f &
    [2]-  Running                 while :; do
        jankycurl 8.8.8.8; zzz;
    done > $g &
    [3]+  Running                 while :; do
        jankycurl 74.82.42.42; zzz;
    done > $h &
    uptime@darkstar:~/notes$ printf '%s\n%s\n%s\n' $f $g $h
    /home/uptime/notes/20211223_shock_http_1.1.1.1_dns
    /home/uptime/notes/20211223_shock_http_8.8.8.8_dns
    /home/uptime/notes/20211223_shock_http_74.82.42.42_dns
    uptime@darkstar:~/notes$ for x in $f $g $h; do echo $x; n=$(grep -- -- $x | wc -l); m=$(grep -- '--.*0:' $x | wc -l); echo $n samples / $m delays; done
    /home/uptime/notes/20211223_shock_http_1.1.1.1_dns
    269 samples / 52 delays
    /home/uptime/notes/20211223_shock_http_8.8.8.8_dns
    260 samples / 17 delays
    /home/uptime/notes/20211223_shock_http_74.82.42.42_dns
    269 samples / 2 delays
    uptime@darkstar:~/notes$ grep -- '--.*0:' /home/uptime/notes/20211223_shock_http_74.82.42.42_dns 
    100   612  100   612    0     0    198      0  0:00:03  0:00:03 --:--:--   198
    100   612  100   612    0     0    390      0  0:00:01  0:00:01 --:--:--   390
    

    that's my story, and I'm sticking to it!

    Thanked by (1)Not_Oles

    HS4LIFE (+ (* 3 4) (* 5 6))

  • Not_Oles (Hosting Provider, Content Writer)

    @uptime Thanks! Maybe a bit of IPv4/IPv6 ambiguity nevertheless remains.

    Suppose curl wants IPv4 because it saw the --ipv4 parameter, but DNS wants to hand curl an IPv6 address, and the result is a delay. Is this too crazy?

    Suppose we put the IPv4 address for the target http server into /etc/hosts and tried the test that way... Presumably the IPv4 address would then be taken consistently from /etc/hosts. The five-second delays might or might not remain, and we could try to figure out why.
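
    (Something like this; 192.0.2.10 is a placeholder from the TEST-NET-1 documentation range, not the server's real address:)

    # /etc/hosts -- pin the test target to a fixed IPv4 address
    192.0.2.10   shock.metalvps.com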

    Maybe some kind of verbose logging of the DNS requests might be possible and helpful?

    Friendly greetings! :)

    I hope everyone gets the servers they want!

  • uptime (OG)
    edited December 2021

    Suppose curl wants IPv4 because it saw the --ipv4 parameter, but DNS wants to hand curl an IPv6 address, and the result is a delay. Is this too crazy?

    seems like a long shot ...? but here we are. Lol!

    I think some judicious packet captures (using tcpdump) could be helpful to take a closer look.
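
    (A sketch of the sort of capture I mean, using only standard tcpdump flags:)

    # watch DNS traffic live; -n keeps reverse lookups from polluting the test
    tcpdump -n -i any 'udp port 53 or tcp port 53'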

    Might could go through the curl code (just a few tens of thousands of possibly relevant lines of C).

    I've seen mention of an "ipv4-only" build for curl (from a quick peek at the source) ...

    uptime@darkstar:~/curl/curl/lib$ wc -l $(grep -l -i -e 'ipv[46]' -e dns *.c)
        936 asyn-ares.c
        760 asyn-thread.c
        603 conncache.c
       1677 connect.c
       1766 cookie.c
        595 curl_addrinfo.c
        993 doh.c
       1259 easy.c
        365 easyoptions.c
       4402 ftp.c
        125 hostasyn.c
       1306 hostip.c
        298 hostip4.c
        160 hostip6.c
        106 hostsyn.c
       4375 http.c
       1067 http_proxy.c
        246 if2ip.c
        197 inet_ntop.c
        237 inet_pton.c
       3680 multi.c
       3077 setopt.c
        260 share.c
       1033 socks.c
       1122 strerror.c
       1409 tftp.c
       4194 url.c
       1707 urlapi.c
        612 version.c
       1329 x509asn1.c
      39896 total
    

    hmmm ...

    uptime@darkstar:~/curl/curl/lib$ grep -A 20 -i -e 'ipv4' -e dns asyn-ares.c | head -20
      struct curltime happy_eyeballs_dns_time; /* when this timer started, or 0 */
    #endif
    };
    
    /* How long we are willing to wait for additional parallel responses after
       obtaining a "definitive" one.
    
       This is intended to equal the c-ares default timeout.  cURL always uses that
       default value.  Unfortunately, c-ares doesn't expose its default timeout in
       its API, but it is officially documented as 5 seconds.
    
       See query_completed_cb() for an explanation of how this is used.
     */
    #define HAPPY_EYEBALLS_DNS_TIMEOUT 5000
    
    /*
     * Curl_resolver_global_init() - the generic low-level asynchronous name
     * resolve API.  Called from curl_global_init() to initialize global resolver
     * environment.  Initializes ares library.
     */
    

    something something HAPPY_EYEBALLS_DNS_TIMEOUT ftw ...?

    Thanked by (2)Not_Oles vyas

    HS4LIFE (+ (* 3 4) (* 5 6))

  • Not_Oles (Hosting Provider, Content Writer)
    edited December 2021

    from https://man7.org/linux/man-pages/man3/resolver.3.html

       RES_DEBUG
              Print debugging messages.  This option is available only
              if glibc was built with debugging enabled, which is not
              the default.
    

    Just wondering whether setting a resolver debug option might be helpful? Dunno. Have to see whether the debug output includes timestamps. Or maybe add them?
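
    (resolv.conf(5) also documents a RES_OPTIONS environment variable, which might allow per-process experiments without touching /etc/resolv.conf. A sketch:)

    # set resolver options for a single lookup; per the man page, debug
    # output only appears if glibc was built with debugging enabled
    RES_OPTIONS="debug" getent hosts shock.metalvps.com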

    ¡Saludos! :)

    Thanked by (1)uptime

    I hope everyone gets the servers they want!

  • uptime (OG)
    edited December 2021

    All options are on the table.

    Shave every yak, I say!

    And let root sort them out.

    Anyhoo. Here's a fun read:

    IPv6, Large UDP Packets and the DNS

    [...] the underlying host of this name server is configured with a local maximum packet size of 1,280 octets. This means that in the first case the response being sent to the Google resolver is a single unfragmented IPv6 UDP packet, and in the second case the response is broken into two fragmented IPv6 UDP packets. And it is this single change that triggers the Google Public DNS Server to provide the intended answer in the first case, but to return a SERVFAIL failure notice in response to the fragmented IPv6 response. When the local MTU on the server is lifted from 1,280 octets to 1,500 octets the Google resolver returns the server's DNS response in both cases.

    Edit2: Moar, better, from June 2020, explains Happy Eyeballs and more -> https://www.potaroo.net/ispcol/2020-07/dns6.html

    Today the dual-stack Internet comes to the rescue and what does not or cannot happen in IPv6 is seamlessly fixed using IPv4, but that's not a course of action that will be sustainable forever. We really need to address this appalling packet drop rate for fragmented IPv6 packets in the DNS, and on all end-to-end IPv6 paths in the Internet. Our choices are to either try and fix this problem in the switches in the network, or we can alter end systems and applications to simply work around the problem.
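
    (The client-side knob for this is the EDNS buffer size, which dig can set directly. A sketch; the root zone DNSKEY is just a conveniently large response:)

    # compare behavior with small vs. large EDNS buffer sizes
    dig @8.8.8.8 . DNSKEY +dnssec +bufsize=1232
    dig @8.8.8.8 . DNSKEY +dnssec +bufsize=4096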

    HS4LIFE (+ (* 3 4) (* 5 6))

  • 4.4.4.4 if you are MJJ

    @dahartigan

    Thanked by (1)bdl