Hi,

> This still uses curl multi, again given the giant bugs we've had
> with basic curl downloading I don't want to introduce this for 1.0
> ... it's just asking for everything to not work properly.

Ok, I'm going to implement multiple downloader processes and handle
their management and polling in the parent.
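To make the plan concrete, here is a minimal sketch of the split I
have in mind (Python, with plain urllib standing in for the real
pycurl download code; run_downloader, download_all and the queue/pipe
layout are illustrative, not the actual ExternalDownloader design):

import os
import select
import multiprocessing

def run_downloader(jobs, conn):
    # Child process: take (url, dest) jobs off the shared queue,
    # fetch each one with a plain single-handle download, and report
    # the result to the parent over a one-way pipe.
    import urllib.request
    for url, dest in iter(jobs.get, None):    # None = shutdown marker
        try:
            urllib.request.urlretrieve(url, dest)
            conn.send((url, 'OK'))
        except Exception as e:
            conn.send((url, 'FAIL: %s' % e))
    conn.close()

def download_all(urls, nprocs=4):
    jobs = multiprocessing.Queue()
    conns, procs = [], []
    for _ in range(nprocs):
        recv_end, send_end = multiprocessing.Pipe(duplex=False)
        p = multiprocessing.Process(target=run_downloader,
                                    args=(jobs, send_end))
        p.start()
        send_end.close()                      # parent only reads
        conns.append(recv_end)
        procs.append(p)
    for url in urls:
        jobs.put((url, os.path.basename(url)))
    for _ in procs:
        jobs.put(None)                        # one shutdown marker per child
    done = 0
    while done < len(urls):
        # The parent polls all children at once; Connection objects
        # are selectable on POSIX.
        ready, _, _ = select.select(conns, [], [])
        for conn in ready:
            try:
                url, status = conn.recv()
            except EOFError:
                conns.remove(conn)            # this child has exited
                continue
            done += 1
            print(url, status)
    for p in procs:
        p.join()

if __name__ == '__main__':
    download_all(['http://example.com/a.rpm', 'http://example.com/b.rpm'])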
> > [PATCH 10/10] Optional/throttled progress reporting
>
> This last one isn't the correct solution, you can't just drop
> updates.

Why not? The size reported is cumulative, and when a download
finishes the final update is sent unconditionally, so nothing is
lost. Do we have users relying on updates faster than 0.3s? Do we
have to match write() and update() 1:1? I've noticed curl (both
multi and single) processes data in 1348-byte chunks and easily does
a few hundred updates/s. There is a sketch of the throttling idea at
the end of this mail.

> Doesn't it matter a lot to fix this for 1.0? If so then the better
> solution would be to work out why we are doing a lot of small reads
> +writes and try to merge them at that layer.

I believe urlgrabber users never see the individual reads and writes
(there's no download-to-fileobject API), so merging at that layer is
not necessary.

> You have access to push now? Can you remove the curl multi bits and
> the last patch and push to a branch?

Yes, but avoiding curl multi is not trivial: it complicates the
ExternalDownloader class.
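Here is the sketch promised above, a rate-limited progress wrapper in
the spirit of the throttling patch (my reconstruction for discussion,
not the patch itself; the class name and callback signature are
illustrative). Intermediate update() calls arriving faster than the
interval are dropped; since the reported size is cumulative, the next
call that gets through carries the same information, and end() always
fires so the meter still reaches 100%:

import time

class ThrottledProgress(object):
    def __init__(self, callback, interval=0.3):
        self.callback = callback  # e.g. an existing meter's update method
        self.interval = interval  # minimum seconds between forwarded updates
        self.last = 0.0

    def update(self, amount_read, total):
        # Called once per curl write; forwards at most one update per
        # interval.  amount_read is cumulative, so any dropped call is
        # subsumed by the next one that gets through.
        now = time.time()
        if now - self.last < self.interval:
            return
        self.last = now
        self.callback(amount_read, total)

    def end(self, amount_read, total):
        # Final update: sent unconditionally, so the consumer always
        # sees the finished size even if the last update() was dropped.
        self.callback(amount_read, total)

Wrapping an existing meter is then just
ThrottledProgress(meter.update); the meter itself needs no changes.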
