Andrew Gregory <andrew.gregor...@gmail.com> on Mon, 2022/10/10 19:55:
> None of these links looks to me like what you want pacman to do with the
> no-cache header.  The caching is still all per-object.  Getting a 404 with
> no-cache set means to retry the request for that same object, not to make
> inferences about whether requests for other objects will 404.
> 
> pacman isn't caching anything in the sense that Cache-Control was meant to
> affect, so I don't see how the Cache-Control header has any relevance here.

I have to agree with what you say.

But I can argue the other way round: a 404 response is for a specific
object, not the whole server. Yet the server error limit excludes a server
after a few missed objects and then skips it for other objects, even though
it never checked the server for those objects. Is that any better?

I can understand skipping a server on hard errors. For example, I do not want
to run into the same timeout again and again, or to resolve a non-existent
host again and again.

For soft errors (where the server exists and answers) this should be
different. Perhaps the current behavior is even fine as the default and for
the majority of people, but there should be a way to opt out.

In an older thread someone declined to add an option that disables the server
error limit entirely. How about a configuration option that disables the
server error limit for soft errors only, keeping the behavior for hard errors
as it is? Something like `NoServerSoftErrorLimit` (and possibly
`--noserversofterrorlimit`) that makes cache servers work?
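
As a sketch, such an option could sit in pacman.conf next to the other
boolean directives. To be clear, `NoServerSoftErrorLimit` is a proposed name
here, not an existing pacman option:

```
# /etc/pacman.conf (sketch; NoServerSoftErrorLimit is proposed, not an
# existing pacman option)
[options]
ParallelDownloads = 5
# Keep skipping servers after hard errors (timeouts, DNS failures),
# but do not exclude a server just because it answered 404 for some objects:
NoServerSoftErrorLimit
```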

Soft errors are not resource hungry in any way. This is even more true with
parallel downloads, where the extra request wastes no time while another
download is still running.
-- 
main(a){char*c=/*    Schoene Gruesse                         */"B?IJj;MEH"
"CX:;",b;for(a/*    Best regards             my address:    */=0;b=c[a++];)
putchar(b-1/(/*    Chris            cc -ox -xc - && ./x    */b/42*2-3)*42);}
