> > 3) Look at max_connections code in MirrorGroup._process_kwargs().
> >    If you set max_connections=1 and the first mirror has a limit
> >    of 2, then we'd fail.
> 
>  No ... if the mirror has a max_connection of N and the configuration
> has a max_connection of M ... then the max_connection for that mirror
> is min(N, M).
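
The min(N, M) rule quoted above can be sketched roughly like this (a hypothetical helper, not the actual MirrorGroup code; the None handling is my assumption for "no limit set"):

```python
def effective_limit(mirror_max, configured_max):
    # Effective per-mirror limit is min(N, M), treating None as "no limit".
    if mirror_max is None:
        return configured_max
    if configured_max is None:
        return mirror_max
    return min(mirror_max, configured_max)
```

So with max_connections=1 and a mirror limit of 2, the effective limit would be 1 rather than a failure.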

Tried to think as a user... We limit the per-host connection count
because that's what server admins expect and require.  But IMO it
makes little sense to limit the per-repo connection count.

When I run a 100+ package download on a memory-starved system,
all I need is to limit the total number of child processes
forked.

Using max_connections as a hard global limit should not be very hard
to implement, as the count is tracked already; I'd just delay starting
a new download whenever the global limit is maxed out.
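
A minimal sketch of that delay-instead-of-fail behaviour, assuming a hypothetical wrapper around whatever fetch function does the actual download (names are mine, not urlgrabber's):

```python
import threading

class GlobalConnectionLimit:
    """Hard global cap on concurrent downloads: a new download blocks
    until a slot frees up, instead of failing or forking unboundedly."""

    def __init__(self, max_connections):
        self._sem = threading.BoundedSemaphore(max_connections)

    def start_download(self, fetch, *args):
        self._sem.acquire()        # delay start while the limit is maxed out
        try:
            return fetch(*args)
        finally:
            self._sem.release()    # free the slot for a waiting download
```

The semaphore gives the "delay starting" semantics for free; nothing fails, callers just queue up.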

This still leaves open the selection of mirrors.  I like the
idea of selecting the 'top N' mirrors, but we need to guess
a suitable value for N somehow, e.g. the global limit divided
by the number of repos we download from.
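
That heuristic could look something like the following (purely illustrative; the even split and the best-first ordering of mirrors are my assumptions):

```python
def mirrors_per_repo(global_limit, num_repos):
    # Divide the global connection budget evenly across repos,
    # but always allow at least one mirror per repo.
    return max(1, global_limit // num_repos)

def top_n_mirrors(mirrors, global_limit, num_repos):
    # Assumes mirrors is already sorted best-first
    # (e.g. by measured speed or past failure rate).
    n = mirrors_per_repo(global_limit, num_repos)
    return mirrors[:n]
```

With a global limit of 10 and 4 repos, each repo would get its top 2 mirrors.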

_______________________________________________
Yum-devel mailing list
[email protected]
http://lists.baseurl.org/mailman/listinfo/yum-devel
