On Sat, 16 Dec 2006, seth vidal wrote:
On Sat, 2006-12-16 at 13:48 +0200, Panu Matilainen wrote:
The reason I didn't go this route for FC5 anaconda is that it's just
the same problem as having hdlist, etc.: multiple versions of the same
metadata, the very problem we were trying to avoid by moving to repodata.
I'd strongly argue this is the wrong approach.
+1
I suggest looking closer at where the time is *really* spent. Remember the
libxml2 "slowness" which turned out to be something in the way things are
copied between C and python? Is it really the xml parsing where most of
the time is spent, or is it something else like sqlite interactions or...?
Parsing those xml files sure isn't cheap, but it's not *that* slow in
C/C++ - I'd look for other places first.
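One way to check is to time the parse in isolation, away from whatever
yum does with the result. A minimal sketch, assuming a locally
downloaded primary.xml.gz (the path is a placeholder; this uses stock
ElementTree and cProfile, nothing yum-specific):

import cProfile
import gzip
import pstats
import xml.etree.ElementTree as ET

def parse_primary(path):
    """Stream-parse a repodata primary.xml.gz and count packages."""
    count = 0
    with gzip.open(path, "rb") as f:
        for _, elem in ET.iterparse(f, events=("end",)):
            # repodata elements are namespaced; match on the local name
            if elem.tag.endswith("}package"):
                count += 1
                elem.clear()  # free the subtree to keep memory flat
    return count

if __name__ == "__main__":
    # profile just the parse: no sqlite, no package objects
    cProfile.run("parse_primary('primary.xml.gz')", "parse.prof")
    pstats.Stats("parse.prof").sort_stats("cumulative").print_stats(10)

If the bare parse is fast while the full yum run is slow, the time is
going into the python-side object churn or the sqlite interactions,
not into the xml decoding itself.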
Here's one easy target for optimization (the time difference is
consistent over successive runs):
Here's the output of the same command on the olpc box:
Multiple runs here, so read through them all:
-bash-3.1# time yum --disablerepo='*' --enablerepo='extras' list
available >> /dev/null
          ^^^^^^^^^^^^
Looks like it is less important on the olpc machine, given the numbers.
It shaves a little less than a second. Keep in mind that in that run of
1m38s it is only parsing primary.xml.gz for extras.
That's because you're redirecting the output of both to /dev/null. Time
it without that redirection, as it was in my examples.
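The output cost is easy to demonstrate on its own. A standalone sketch
(nothing yum-specific; the line count and format are arbitrary) that
writes a listing-sized chunk of text to /dev/null and to the terminal,
reporting both times on stderr:

import os
import sys
import time

def write_lines(stream, n=50000):
    """Write n listing-like lines, return elapsed wall time."""
    start = time.time()
    for i in range(n):
        stream.write("package-%d.noarch    1.0-%d    extras\n" % (i, i))
    stream.flush()
    return time.time() - start

if __name__ == "__main__":
    # the terminal pays for rendering and scrolling; /dev/null
    # doesn't, which is why redirecting both runs to /dev/null
    # hides the difference the examples were meant to show
    with open(os.devnull, "w") as devnull:
        print("to /dev/null: %.2fs" % write_lines(devnull), file=sys.stderr)
    print("to terminal:  %.2fs" % write_lines(sys.stdout), file=sys.stderr)

Timing the same yum run with and without the redirection separates the
formatting/terminal cost from the actual metadata handling.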
- Panu -
_______________________________________________
Yum-devel mailing list
[email protected]
https://lists.dulug.duke.edu/mailman/listinfo/yum-devel