On Thu, 2012-06-28 at 16:17 -0400, Mike McLean wrote:
> On Thu, Jun 28, 2012 at 12:20 PM, Mike McLean <mikem....@gmail.com> wrote:
> > On Thu, Jun 28, 2012 at 10:50 AM, Zdenek Pavlas <zpav...@redhat.com> wrote:
> >> You'll retry each mirror configured number of times, but only
> >> for the 1st file requested. Is that intentional?
> >> Anyway, I like the 'skip-but-retry' idea.
> >
> > Ah, I see. My data dictionary is only getting defined once at the
> > start. I see a couple of options:
> > 1) pass this failure_callback as a kw arg to urlgrab instead
> > 2) move this logic to MirrorGroup
>
> Updated patch, and added a small patch to urlgrabber to track the
> per-mirror failure counts (both total and sequential).
>
> Note this patch also removes a mirror from the master list if it gets
> $retries sequential failures. That seems like the right thing to do, but I
> think it will be difficult for this to happen unless there are also
> many failures across all the other mirrors.
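[Editor's note: the per-mirror bookkeeping the patch describes, counting both total and sequential failures and dropping a mirror once it reaches $retries sequential failures, might look roughly like the sketch below. The class and method names are illustrative only; this is not urlgrabber's actual API.]

```python
class MirrorTracker:
    """Illustrative per-mirror failure bookkeeping (not urlgrabber's API)."""

    def __init__(self, mirrors, retries=10):
        self.mirrors = list(mirrors)
        self.retries = retries
        # per-mirror counters: {mirror: {'total': n, 'sequential': n}}
        self.failures = {m: {'total': 0, 'sequential': 0} for m in self.mirrors}

    def record_success(self, mirror):
        # a success resets only the sequential counter; the total is kept
        self.failures[mirror]['sequential'] = 0

    def record_failure(self, mirror):
        counts = self.failures[mirror]
        counts['total'] += 1
        counts['sequential'] += 1
        # drop the mirror from the master list after `retries`
        # sequential failures, as the patch does
        if counts['sequential'] >= self.retries and mirror in self.mirrors:
            self.mirrors.remove(mirror)
        return bool(self.mirrors)  # False once every mirror is exhausted
```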
I was hoping Zdenek would reply, as I know even less about urlgrabbing
now than I ever did! :)

Saying that ... "retries" is documented as (in yum):

    retries: Set the number of times any attempt to retrieve a file
    should retry before returning an error. Setting this to `0' makes
    yum try forever. Default is `10'.

So I'm pretty sure we don't want to change that to mean something like
"if you have 50 mirrors (usual in Fedora) and retries=10, we'll retry a
file 500 times" ... which is what I _think_ this patch set does.

I'm also tempted to NAK/ignore the 503 problem in general: unless we can
deal with Retry-After headers, the whole thing seems like a guessing
game as to whether we really should immediately retry. (Yes, for Fedora
an immediate retry is fine, but I did not think this was universal -- in
fact the opposite.)

_______________________________________________
Yum-devel mailing list
Yum-devel@lists.baseurl.org
http://lists.baseurl.org/mailman/listinfo/yum-devel
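[Editor's note: "dealing with Retry-After" means parsing the header a server may send with a 503. Per RFC 7231 §7.1.3 it can be either delta-seconds or an HTTP-date. A minimal sketch of that parsing is below; the function name is illustrative and not part of yum or urlgrabber.]

```python
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def retry_after_seconds(header_value):
    """Return seconds to wait before retrying, or None if the header is
    absent or unparseable (in which case any retry delay is a guess)."""
    if not header_value:
        return None
    if header_value.isdigit():
        # delta-seconds form, e.g. "Retry-After: 120"
        return int(header_value)
    try:
        # HTTP-date form, e.g. "Retry-After: Fri, 31 Dec 2100 23:59:59 GMT"
        when = parsedate_to_datetime(header_value)
        return max(0, (when - datetime.now(timezone.utc)).total_seconds())
    except (TypeError, ValueError):
        return None
```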