Extremely slow download speeds from GitHub Container Registry (ghcr.io)

I'm having a weird issue that I'm trying to debug but don't know where to start: extremely slow download speeds from GitHub Container Registry (ghcr.io). All other connectivity seems to be working fine, and I don't have any issues with ghcr.io from my home network or any other provider. I already rebooted the server; that did not fix the issue. Below are some speed comparisons and troubleshooting info I gathered.

Difference in speed between GitHub Container Registry (ghcr.io) and Docker Hub when downloading the same image:

freek@gitlab:/tmp$ time docker pull ghcr.io/nginx-proxy/nginx-proxy:latest
latest: Pulling from nginx-proxy/nginx-proxy
e1caac4eb9d2: Pull complete
504c1e01744e: Pull complete
a1330b43d726: Pull complete
5e8995dba715: Pull complete
d5181593591e: Pull complete
74a4f808141d: Pull complete
330fd09f4306: Pull complete
b81a631e2224: Pull complete
7dc00ea44895: Pull complete
5a573f99fd4b: Pull complete
946f18ab3de6: Pull complete
0c35715da43d: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:87c3434637f01c00f9b790a210d9027093e08b939b293bb2b8ee597ee000bf80
Status: Downloaded newer image for ghcr.io/nginx-proxy/nginx-proxy:latest
ghcr.io/nginx-proxy/nginx-proxy:latest

real    33m25.193s
user    0m0.088s
sys     0m0.051s
freek@gitlab:/tmp$ time docker pull nginxproxy/nginx-proxy:latest
latest: Pulling from nginxproxy/nginx-proxy
e1caac4eb9d2: Pull complete
504c1e01744e: Pull complete
a1330b43d726: Pull complete
5e8995dba715: Pull complete
d5181593591e: Pull complete
74a4f808141d: Pull complete
330fd09f4306: Pull complete
b81a631e2224: Pull complete
7dc00ea44895: Pull complete
5a573f99fd4b: Pull complete
946f18ab3de6: Pull complete
0c35715da43d: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:87c3434637f01c00f9b790a210d9027093e08b939b293bb2b8ee597ee000bf80
Status: Downloaded newer image for nginxproxy/nginx-proxy:latest
docker.io/nginxproxy/nginx-proxy:latest

real    0m5.477s
user    0m0.017s
sys     0m0.025s

Traceroute to ghcr.io:

freek@gitlab:/tmp$ mtr --report ghcr.io
Start: 2024-02-16T14:32:22+0100
HOST: gitlab           Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 5.2.71.2                   0.0%    10    0.7   0.7   0.3   3.3   0.9
  2.|-- 5.255.66.194              50.0%    10    2.1   1.1   0.6   2.1   0.6
  3.|-- 185.8.179.33               0.0%    10    0.7   0.8   0.7   1.3   0.2
  4.|-- ae22-404.ams10.core-backb  0.0%    10    2.3   2.3   2.2   2.5   0.1
  5.|-- ae2-2021.fra20.core-backb  0.0%    10    8.3   8.3   8.1   8.5   0.1
  6.|-- de-cix2.fra.github.com     0.0%    10    8.8   8.6   8.4   9.3   0.2
  7.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
  8.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
  9.|-- lb-140-82-121-33-fra.gith  0.0%    10    8.6   8.6   8.6   8.8   0.1

Traceroute to Docker Hub (seems to be using IPv6):

freek@gitlab:/tmp$ mtr --report docker.io
Start: 2024-02-16T14:42:54+0100
HOST: gitlab           Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 2a04:52c0:1674::2          0.0%    10    0.4   0.6   0.3   1.4   0.3
  2.|-- 2a00:1ca8:1::170           0.0%    10    0.6   0.8   0.6   1.1   0.2
  3.|-- 2a03:3f40::10:33           0.0%    10    1.3   1.7   1.3   2.5   0.4
  4.|-- 2001:978:2:40::3:1        80.0%    10    3.8   3.8   3.7   3.8   0.1
  5.|-- be3458.ccr42.ams03.atlas. 80.0%    10    3.6   3.6   3.5   3.6   0.0
  6.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
  7.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
  8.|-- be2090.rcr21.ymq02.atlas. 30.0%    10   83.3  83.2  83.1  83.3   0.1
  9.|-- 2001:550:2:6::4c:2         0.0%    10   81.9  83.9  81.7  96.2   4.5
 10.|-- 2620:107:4000:a::1e       20.0%    10   83.3  90.2  83.3 136.7  18.8
 11.|-- 2620:107:4000:a::17        0.0%    10   92.7  85.0  82.9  92.7   3.3
 12.|-- 2620:107:4000:8001::60     0.0%    10   93.5  95.2  93.5 101.9   2.9
 13.|-- 2620:107:4000:c5c0::f3fd:  0.0%    10   93.3  93.5  93.3  94.4   0.3
 14.|-- 2620:107:4000:cfff::f20d:  0.0%    10   93.6  93.9  93.5  94.3   0.3
 15.|-- 2620:107:4000:a7d3::f000:  0.0%    10   93.9  94.0  93.9  94.2   0.1
 16.|-- 2620:107:4000:cfff::f201:  0.0%    10   94.2 108.7  94.1 180.5  31.1
 17.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0

iperf3 speed tests from yabs (Yet-Another-Bench-Script):

iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping
-----           | -----                     | ----            | ----            | ----
Clouvider       | London, UK (10G)          | 794 Mbits/sec   | 903 Mbits/sec   | 7.75 ms
Scaleway        | Paris, FR (10G)           | 759 Mbits/sec   | 932 Mbits/sec   | 11.7 ms
NovoServe       | North Holland, NL (40G)   | 800 Mbits/sec   | 937 Mbits/sec   | 2.54 ms
Uztelecom       | Tashkent, UZ (10G)        | 709 Mbits/sec   | 228 Mbits/sec   | --
Clouvider       | NYC, NY, US (10G)         | 673 Mbits/sec   | 409 Mbits/sec   | 77.0 ms
Clouvider       | Dallas, TX, US (10G)      | 535 Mbits/sec   | 231 Mbits/sec   | 116 ms
Clouvider       | Los Angeles, CA, US (10G) | busy            | 306 Mbits/sec   | --

iperf3 Network Speed Tests (IPv6):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping
-----           | -----                     | ----            | ----            | ----
Clouvider       | London, UK (10G)          | 820 Mbits/sec   | 907 Mbits/sec   | 7.86 ms
Scaleway        | Paris, FR (10G)           | 773 Mbits/sec   | 916 Mbits/sec   | 10.7 ms
NovoServe       | North Holland, NL (40G)   | 819 Mbits/sec   | 923 Mbits/sec   | 2.52 ms
Uztelecom       | Tashkent, UZ (10G)        | 689 Mbits/sec   | 417 Mbits/sec   | 86.7 ms
Clouvider       | NYC, NY, US (10G)         | 660 Mbits/sec   | 311 Mbits/sec   | 77.2 ms
Clouvider       | Dallas, TX, US (10G)      | 520 Mbits/sec   | 168 Mbits/sec   | 116 ms
Clouvider       | Los Angeles, CA, US (10G) | 138 Mbits/sec   | 139 Mbits/sec   | 200 ms

Have any of you experienced this issue before? Could this be a routing issue?

Thanks

Comments

  • aveline (Hosting Provider, OG)

    It's not a latency/throughput issue with GHCR.io. It pulls from Fastly, and it's very possible that your ISP is having problems with Fastly's AnyCast IPv6 prefixes (we've had these problems, too).

    The affected IPv6 prefixes are 2606:50c0:8000::/48 and all others within 2606:50c0:8000::/46 (e.g. 2606:50c0:8000::154)
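    For reference, a quick way to check whether a given address falls inside that affected /46 is Python's standard-library ipaddress module (a diagnostic sketch, not something from the original post; the non-Fastly address is just an arbitrary counterexample):

```python
import ipaddress

# The /46 covers the /48 named above plus its neighbouring /48s
# (third hextet 8000 through 8003).
affected = ipaddress.ip_network("2606:50c0:8000::/46")

def is_affected(addr: str) -> bool:
    """Return True if addr falls inside the affected Fastly range."""
    return ipaddress.ip_address(addr) in affected

print(is_affected("2606:50c0:8000::154"))   # inside the /46 -> True
print(is_affected("2606:4700::6810:85e5"))  # unrelated prefix -> False
```

    Feeding it the AAAA records your resolver returns for the registry's CDN host is one way to confirm your pulls are landing in the problem range.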


  • @aveline said:
    It's not a latency/throughput issue with GHCR.io. It pulls from Fastly, and it's very possible that your ISP is having problems with Fastly's AnyCast IPv6 prefixes (we've had these problems, too).

    The affected IPv6 prefixes are 2606:50c0:8000::/48 and all others within 2606:50c0:8000::/46 (e.g. 2606:50c0:8000::154)

    That seems to be it! I temporarily disabled IPv6 connectivity and, badaboom, Docker pulls from ghcr.io are lightning fast again. Thank you very, very much! I'll raise this with my hosting provider. In the meantime I'm looking into whether I can force Docker to pull images over IPv4, but no luck yet.

    Thanks again!

  • The issue likely isn't related to latency or throughput with GHCR.io. It pulls content from Fastly, and it's plausible that your Internet Service Provider (ISP) is encountering difficulties with Fastly's AnyCast IPv6 prefixes. We've faced similar problems in the past.

    The affected IPv6 prefixes include 2606:50c0:8000::/48 and all others within 2606:50c0:8000::/46 (such as 2606:50c0:8000::154).

  • Ympker (OG, Content Writer)
    edited May 4

    @Freek said:

    @aveline said:
    It's not a latency/throughput issue with GHCR.io. It pulls from Fastly, and it's very possible that your ISP is having problems with Fastly's AnyCast IPv6 prefixes (we've had these problems, too).

    The affected IPv6 prefixes are 2606:50c0:8000::/48 and all others within 2606:50c0:8000::/46 (e.g. 2606:50c0:8000::154)

    That seems to be it! I temporarily disabled IPv6 connectivity and, badaboom, Docker pulls from ghcr.io are lightning fast again. Thank you very, very much! I'll raise this with my hosting provider. In the meantime I'm looking into whether I can force Docker to pull images over IPv4, but no luck yet.

    Thanks again!

    This seems like another case for the "Why do people still use IPv4?" thread. If IPv6 just worked for everything and were adopted reliably, it'd be far more widespread.

  • IPv4 = [gif]

    IPv6 = [gif]

  • Ympker (OG, Content Writer)

    @don_keedic said:
    IPv4 = [gif]

    IPv6 = [gif]

    I didn't know I needed this gif, but turns out I did.

  • @Michael785 said:
    The issue likely isn't related to latency or throughput with GHCR.io. It pulls content from Fastly, and it's plausible that your Internet Service Provider (ISP) is encountering difficulties with Fastly's AnyCast IPv6 prefixes. We've faced similar problems in the past.

    The affected IPv6 prefixes include 2606:50c0:8000::/48 and all others within 2606:50c0:8000::/46 (such as 2606:50c0:8000::154).

    Yep, that was indeed the case. @aveline already pointed me in the right direction :)
    Unfortunately my hosting provider was unable to solve the connectivity issues, and since I couldn't force Docker pulls over IPv4, I disabled IPv6. I guess I owe @yoursunny some pushups now.

  • @Freek did you manage to find a better solution other than disabling IPv6 entirely? I'm having this issue too and was hoping to avoid disabling.

  • IPv6 networks are still weaker than IPv4 networks. You can see it in the images.


  • @atth said:
    @Freek did you manage to find a better solution other than disabling IPv6 entirely? I'm having this issue too and was hoping to avoid disabling.

    Unfortunately not. My provider said he would be unable to fix these routing issues, so I had to disable IPv6 altogether.
    You can't tell/force Docker to pull images over IPv4; that would be a nice workaround.

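    For anyone landing here later: since only specific prefixes are affected, a less drastic workaround than disabling IPv6 host-wide might be to null-route just the Fastly range @aveline listed, so connections to it fail fast and dual-stack clients fall back to IPv4. This is a sketch assuming a Linux host with iproute2 (needs root, and the route is gone after a reboot unless you persist it); whether Docker's dialer falls back cleanly is worth testing on your own setup:

```shell
# Make only the affected Fastly IPv6 range unreachable, instead of
# disabling IPv6 for the whole host. With this destination failing
# fast, dual-stack clients should retry over IPv4.
ip -6 route add unreachable 2606:50c0:8000::/46

# Verify the route is installed:
ip -6 route show type unreachable

# To undo:
#   ip -6 route del unreachable 2606:50c0:8000::/46
```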
  • AuroraZero (Moderator, Hosting Provider)

    @Freek said:

    @atth said:
    @Freek did you manage to find a better solution other than disabling IPv6 entirely? I'm having this issue too and was hoping to avoid disabling.

    Unfortunately not. My provider said he would be unable to fix these routing issues, so I had to disable IPv6 altogether.
    You can't tell/force Docker to pull images over IPv4; that would be a nice workaround.

    Can you connect to a VPN that is dual stack and have it use IPv6 to pull or push?


  • @AuroraZero said:

    @Freek said:

    @atth said:
    @Freek did you manage to find a better solution other than disabling IPv6 entirely? I'm having this issue too and was hoping to avoid disabling.

    Unfortunately not. My provider said he would be unable to fix these routing issues, so I had to disable IPv6 altogether.
    You can't tell/force Docker to pull images over IPv4; that would be a nice workaround.

    Can you connect to a VPN that is dual stack and have it use IPv6 to pull or push?

    That could work, but... yeah, I'd rather just switch off IPv6 for now ;)
