[EMAIL PROTECTED] wrote:
> Webrev:
> http://cr.opensolaris.org/~tmueller/cr-2402,2415/
> The changes that you've made to filelist break existing functionality
> for retries. These changes will also make it more difficult for me to
> retry failed requests against different mirrors.
Can you please explain how this breaks existing functionality for
retries? The files are still put into the download area just as they
were before.
> -1
> filelist.py:
> - line 111-113: You've taken out a check that we use to incrementally
>   download files as we fill up a filelist. The while is here in case
>   we don't successfully download all of the files in the first try.
Yes, this is intentional. In fact, I was surprised that filelist
started downloading files before all of the actions had been processed,
as this misses the opportunity to download files with the same hashval
only once (unless they happen to be close to each other in the manifest).
Also, what is the purpose of the action.preinstall method if files start
getting downloaded before all of the preinstall methods are called? Note:
up to this point, preinstall has been a placeholder that did nothing for
any action.
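The two-phase approach described above can be sketched as follows. This is a hypothetical illustration, not the actual filelist.py code; Action, FileList, and fetch are stand-ins for the real pkg(5) objects:

```python
# Hypothetical sketch of the deferred-download approach: record every
# action first, run all preinstall checks, then download each distinct
# hash exactly once. Names here are illustrative, not the real API.

class Action:
    def __init__(self, hashval):
        self.hash = hashval
        self.data = None

    def preinstall(self):
        # The real preinstall would verify the install can proceed.
        pass

class FileList:
    def __init__(self, fetch):
        self.fetch = fetch      # callable mapping a hash to file contents
        self.actions = []

    def add_action(self, action):
        # Phase 1: just remember the action; nothing is downloaded yet.
        self.actions.append(action)

    def get_files(self):
        # Phase 2: every preinstall runs before any download starts.
        for action in self.actions:
            action.preinstall()
        # Each hash is fetched once, even when the matching actions are
        # far apart in the manifest.
        cache = {}
        for action in self.actions:
            if action.hash not in cache:
                cache[action.hash] = self.fetch(action.hash)
            action.data = cache[action.hash]
```

With this structure, two actions that share a hashval always share one download, regardless of their positions in the manifest.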
> - line 129 & 130: Why are you making the filelist hold an arbitrarily
>   large number of files? Dan did a bunch of work to tune the size of
>   a filelist to optimize its retrieval time. It seems like you're
>   un-doing most of that work.
I don't think I'm un-doing any of Dan's work. It is true that filelist
will need to contain references to all of the actions that exist for the
package, but those Action objects already exist anyway as part of
constructing the pkgplan. The only extra data here is the list itself
which is not significant.
The size of the retrieve operations to the repository is still the
same. The check at line 284 is the same check that was done previously
as the list is filled up.
> - line 283-285: Why are we deferring the work when we can do it
>   earlier? It seems like we're making this needlessly complex by
>   stuffing this functionality into _get_files. This is also going to
>   bloat the size of a filelist object. We've specifically spent time
>   trying to optimize our memory footprint. This seems ripe for
>   regression.
The work is deferred so that the action.preinstall method can be called
on all actions before the download starts. If the purpose of the
preinstall method is to check whether this install has any chance of
working, why would we want to download a bunch of files before
actually calling the method for all actions in the package (and for all
packages within an imageplan)?
I can do some memory size measurements with and without this change to
see whether the size of the filelist adds significantly to the overall
process size.
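One rough way to do that measurement: since the Action objects already exist for the pkgplan, the only new allocation is the list of references itself, and sys.getsizeof shows that cost directly. This is an illustrative estimate, not a measurement of the real filelist:

```python
# Estimate the extra memory a list of action references costs. The
# objects stand in for Action instances that already exist elsewhere,
# so only the list's own footprint is new.
import sys

actions = [object() for _ in range(10000)]   # stand-ins for Action objects
list_overhead = sys.getsizeof(actions)       # bytes for the list itself
per_action = list_overhead / len(actions)    # one pointer-sized slot each
```

On a 64-bit CPython this works out to roughly the size of one pointer per action, which supports the claim that the list itself is not significant.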
> - line 335-343: Why did you remove this check?
The _is_full method was only called by add_action and _add_action, and
those calls were removed. No other place calls the method so I removed
the method.
An equivalent check is now at line 284 in the _get_files method. This
limits the amount of data that is downloaded in any single HTTP request.
Tom
> -j
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss