On Mon, 2011-11-07 at 11:55 +0100, Zdeněk Pavlas wrote:
> Hi,
>
> This is the current parallel downloader patchset, with some
> of the changes James suggested merged in. I'd be very thankful
> for any comments.
>
> Zdenek
>
> [PATCH 02/10] Implement 'failfunc' callback.
> [PATCH 03/10] Obsolete the _make_callback() method
>
> New, somewhat simpler implementation using the default failfunc.
> MG now removes the callback before passing the request to urlgrabber.
> _callback renamed to _run_callback.
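For reference, the pattern described in the quoted text might look roughly like the sketch below. The class and method bodies are illustrative only; the names (MirrorGroup, failfunc, _run_callback) follow the patch descriptions, but the actual urlgrabber code differs.

```python
# Hypothetical sketch of the failfunc pattern: the mirror-group wrapper
# strips the caller's callback out of the request kwargs before handing
# the request to the grabber, then runs the callback itself afterwards.

def default_failfunc(exc):
    # Default behaviour: re-raise so failures propagate unchanged.
    raise exc

class MirrorGroupSketch:
    def __init__(self, failfunc=default_failfunc):
        self.failfunc = failfunc

    def urlgrab(self, url, grab, **kwargs):
        # Remove the callback before passing the request on, as the
        # patch description says, and run it ourselves on success.
        callback = kwargs.pop('checkfunc', None)
        try:
            result = grab(url, **kwargs)
        except Exception as e:
            return self.failfunc(e)
        if callback is not None:
            self._run_callback(callback, result)
        return result

    def _run_callback(self, callback, result):
        callback(result)
```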
Seems fine.

> [PATCH 04/10] Implement parallel urlgrab()s
> [PATCH 05/10] Reuse curl objects (per host)
>
> Some small implementation changes. MirrorGroup: max_connections
> moved from the mirror object to the 'kwarg' hash and documented.
> Curl object reuse to enable connection keepalives.

This still uses curl multi. Given the giant bugs we've had with basic
curl downloading, I don't want to introduce this for 1.0 ... it's just
asking for everything to not work properly.

> [PATCH 06/10] _dumps + _loads: custom serializer/parser
> [PATCH 07/10] Downloader process
> [PATCH 08/10] External downloading
>
> New builtin serializer instead of the simplejson one.
> _readlines() function to read from the pipe.

Seems fine.

> [PATCH 10/10] Optional/throttled progress reporting

This last one isn't the correct solution; you can't just drop updates.
Does it matter a lot to fix this for 1.0? If so, the better solution
would be to work out why we are doing a lot of small reads and writes
and try to merge them at that layer.

You have push access now? Can you remove the curl multi bits and the
last patch and push to a branch?

_______________________________________________
Yum-devel mailing list
Yum-devel@lists.baseurl.org
http://lists.baseurl.org/mailman/listinfo/yum-devel
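To illustrate the "merge at that layer" suggestion above: instead of dropping progress updates, consecutive small writes can be coalesced into one pending message that is flushed on a timer, with the final update always written. This is a hypothetical sketch, not the urlgrabber implementation; all names here are made up.

```python
import time

class ProgressCoalescer:
    """Merge small progress writes instead of dropping them: only the
    latest byte count matters, so intermediate updates collapse into a
    single pending message, flushed at most every `interval` seconds.
    The final update is always written so nothing is lost."""

    def __init__(self, write, interval=0.1, now=time.monotonic):
        self.write = write          # underlying write (e.g. to a pipe)
        self.interval = interval    # minimum seconds between flushes
        self.now = now              # clock, injectable for testing
        self.pending = None
        self.last_flush = float('-inf')  # so the first update flushes

    def update(self, amount_read, final=False):
        # Coalesce: later counts supersede earlier ones.
        self.pending = amount_read
        t = self.now()
        if final or t - self.last_flush >= self.interval:
            self.write(self.pending)
            self.pending = None
            self.last_flush = t
```

The key design point is that throttling happens by merging, not discarding: a reader of the pipe still sees a monotone, complete picture ending with the true final count, just with fewer intermediate messages.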