On Mon, 2011-11-21 at 04:57 -0500, Zdenek Pavlas wrote:
> > > [PATCH 10/10] Optional/throttled progress reporting
> >
> > This last one isn't the correct solution, you can't just drop
> > updates.
>
> Why not? The size reported is cumulative, and when download finishes
> it's sent unconditionally. Do we have users relying on faster than
> 0.3s updates? Do we have to 1:1 match write() and update()?
No, and no. The problem is the weird edge cases, like:

1. Update, got 1 byte.
2. Update, got 6666 bytes.
3. Network glitch for N seconds.

...here, if you happen to "skip" sending the second update, then you
can be giving very false information to the user for those N seconds.
Obviously it's not going to happen a lot, but I'd rather just not have
to worry about it.

> I've noticed curl (both multi and single) processes data in 1348-byte
> chunks and easily does few hundred updates/s.

Well, we tie the progress update directly to curl atm. ... so are we
doing 100s of updates now? Does it matter now? Does it start to matter
if they go over a pipe?

I'd only worry about it a lot if it's obviously doing something
suboptimal ... in theory we could add something like "only do the
update when we've got 50k of new data, or 1% of the currently
downloaded data", then we can "guarantee" users won't care (haha).

_______________________________________________
Yum-devel mailing list
[email protected]
http://lists.baseurl.org/mailman/listinfo/yum-devel
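[Editorial note: the "50k or 1%" heuristic suggested above could be sketched roughly as below. The class and names are purely illustrative, not the actual yum/urlgrabber callback API; the final update is always passed through unconditionally, as mentioned earlier in the thread, so the cumulative size is never left stale.]

```python
class ThrottledProgress:
    """Hypothetical sketch: skip a progress update unless we've got 50k
    of new data, or 1% of the currently downloaded data, whichever is
    smaller.  A 'done' update is always delivered."""

    MIN_DELTA = 50 * 1024  # 50k of new data

    def __init__(self, callback):
        self.callback = callback    # the real update sink, e.g. a pipe write
        self.last_reported = 0

    def update(self, total_read, done=False):
        # total_read is cumulative, matching the semantics in the thread
        delta = total_read - self.last_reported
        threshold = min(self.MIN_DELTA, 0.01 * total_read)
        if done or (delta > 0 and delta >= threshold):
            self.callback(total_read)
            self.last_reported = total_read
```

With curl's 1348-byte chunks this drops most intermediate updates on larger files while still reporting every percent or so of progress on small ones.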
