Hilton Chain <[email protected]> writes:

> Hilton Chain <[email protected]> writes:
> [...]
>> So for best querying performance:
>>
>>     a suitable mirror > Cloudflare (cached) > Cloudflare (uncached)
>>
>> Although Cloudflare might be better for downloading speed.
>>
>> Then the goal will be: deploy more mirrors for better coverage and use
>> Cloudflare as the default mirror for a stable experience.
>>
>> To achieve the latter, the performance of uncached requests should be
>> improved.  There are two approaches:
>>
>>     1. Sync narinfo files to Cloudflare as well, making it similar to a full
>> mirror.  There are low-latency storage choices, but they add expenses.
>
> Since it's included in the plan, I tried to move narinfo files into
> Cloudflare's key-value storage, making cache-cdn a full mirror.
>
> Performance on uncached lookups should be improved for all regions now, though
> at best a cached lookup takes 0.02s per request, while a suitable mirror may
> take only 0.005s.


Found the issue, thanks to iyzsong!  The reason for the low performance is that
Cloudflare Workers doesn't support the Keep-Alive header.  So to improve the
performance, there are two approaches:

1. Support HTTP/2 in Guix side.
2. Stop using Cloudflare Workers + KV, and use Cloudflare as a cache for the
head node.

The fastest approach we can take is (2).  Since I have already set up a full
mirror on Cloudflare, it can be replicated again in the future when we have
HTTP/2 support.
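
Roughly, the Workers + KV setup amounts to something like the sketch below: an
edge function that answers narinfo lookups from KV and forwards everything else
to the origin.  This is only an illustration, not the actual deployment; the
binding name NARINFO_KV and the origin URL are made-up placeholders, and the
KVNamespace type comes from the @cloudflare/workers-types package.

// Minimal sketch of a narinfo mirror as a Cloudflare Worker.
// Placeholders, not the real deployment: a KV namespace bound as
// NARINFO_KV and a hypothetical origin at https://substitutes.example.org.

export interface Env {
  NARINFO_KV: KVNamespace;
}

const ORIGIN = "https://substitutes.example.org";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { pathname } = new URL(request.url);

    // Narinfo lookups ("/<hash>.narinfo") are answered from KV.
    if (pathname.endsWith(".narinfo")) {
      const narinfo = await env.NARINFO_KV.get(pathname.slice(1));
      if (narinfo === null) {
        return new Response("404 Not Found", { status: 404 });
      }
      return new Response(narinfo, {
        headers: { "Content-Type": "text/plain" },
      });
    }

    // Everything else (nar archives, etc.) is forwarded to the origin,
    // where Cloudflare's cache can still apply.
    return fetch(new Request(new URL(pathname, ORIGIN).toString(), request));
  },
};

With (2), this function goes away entirely and Cloudflare simply caches the
head node's responses.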


>>     2. The worker is deployed near the user, so narinfo files should be
>> accessed from a mirror in the nearest region.  This sounds more reasonable
>> given the former goal; I'll have a try.
>
> For anyone reading this thread who is interested, please let me know if the
> mirrors are slow, so I can tell whether they are working well and which
> regions need mirrors most.
>
> Also there's an option (the second mirroring approach in my blog post) to
> self-host a narinfo mirror and reverse-proxy substitute requests to Cloudflare
> R2.  This can be done locally as well, with the lowest latency.
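
For reference, that self-hosted option amounts to something like the following
sketch: narinfo files served from a local directory, nar requests
reverse-proxied to a public R2 endpoint.  The ./narinfo directory and the
r2-bucket.example.com host are placeholders I made up for illustration; in
practice nginx or a similar reverse proxy would be the usual choice rather than
a hand-written Node/TypeScript server.

// Minimal sketch of a self-hosted narinfo mirror that reverse-proxies
// nar requests to Cloudflare R2.  All names here are placeholders.
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";
import { join, basename } from "node:path";

const NARINFO_DIR = "./narinfo";                  // local narinfo copies
const R2_BASE = "https://r2-bucket.example.com";  // public R2 endpoint

const server = createServer(async (req, res) => {
  const path = req.url ?? "/";

  if (path.endsWith(".narinfo")) {
    // Serve narinfo lookups locally, with the lowest possible latency.
    try {
      const body = await readFile(join(NARINFO_DIR, basename(path)));
      res.writeHead(200, { "Content-Type": "text/plain" }).end(body);
    } catch {
      res.writeHead(404).end("404 Not Found");
    }
    return;
  }

  // Forward everything else (nar archives) to R2.
  const upstream = await fetch(R2_BASE + path);
  res.writeHead(upstream.status, {
    "Content-Type":
      upstream.headers.get("Content-Type") ?? "application/octet-stream",
  });
  res.end(Buffer.from(await upstream.arrayBuffer()));
});

server.listen(8080);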


I'll update the setup and add a new performance comparison soon.
