On 10/13/07, Micah Cowan <[EMAIL PROTECTED]> wrote:
> > On 10/13/07, Tony Godshall <[EMAIL PROTECTED]> wrote:
> >> OK, so let's go back to basics for a moment.
> >>
> >> wget's default behavior is to use all available bandwidth.
> >>
> >> Is this the right thing to do?
> >>
> >> Or is it better to back off a little after a bit?
>
> Heh. Well, some people are saying that Wget should support "accelerated
> downloads"; several connections to download a single resource, which can
> sometimes give a speed increase at the expense of nice-ness.
>
> So you could say we're at a happy medium between those options! :)
>
> Actually, Wget probably will get support for multiple simultaneous
> connections; but number of connections to one host will be limited to a
> max of two.
>
> It's impossible for Wget to know how much is appropriate to back off,
> and in most situations I can think of, backing off isn't appropriate.
>
> In general, though, I agree that Wget's policy should be "nice by default".
If it were me, I'd have it back off to 95% by default, with options for more
aggressive behavior, like the multiple connections, etc. I'm surprised
multiple connections would buy you anything, though. I guess I'll take a look
through the archives and see what the argument is. Does one TCP connection
back off on a lost packet while the other one gets to keep going? Hmmm.

> Josh Williams wrote:
> > That's one of the reasons I believe this
> > should be a module instead, because it's more or less a hack to patch
> > what the environment should be doing for wget, not vice versa.
>
> At this point, since it seems to have some demand, I'll probably put it
> in for 1.12.x; but I may very well move it to a module when we have
> support for that.

Thanks, yes, that makes sense.

> Of course, Tony G indicated that he would prefer it to be
> conditionally-compiled, for concerns that the plugin architecture will
> add overhead to the wget binary. Wget is such a lightweight app, though,
> I'm not thinking that the plugin architecture is going to be very
> significant. It would be interesting to see if we can add support for
> some modules to be linked in directly, rather than dynamically; however,
> it'd still probably have to use the same mechanisms as the normal
> modules in order to work. Anyway, I'm sure we'll think about those
> things more when the time comes.

Makes sense.

> Or you could be proactive and start work on
> http://wget.addictivecode.org/FeatureSpecifications/Plugins
> (non-existent, but already linked to from FeatureSpecifications). :)

I'll look into that.

> On 10/14/07, Hrvoje Niksic <[EMAIL PROTECTED]> wrote:
> > "Tony Godshall" <[EMAIL PROTECTED]> writes:
> >
> > > OK, so let's go back to basics for a moment.
> > >
> > > wget's default behavior is to use all available bandwidth.
> >
> > And so is the default behavior of curl, Firefox, Opera, and so on.
> > The expected behavior of a program that receives data over a TCP
> > stream is to consume data as fast as it arrives.

What was your point, exactly? All the other kids do it?

Tony G