Hey guys, I got access to a login on an OLPC machine (364MHz Geode processor with 241MB of RAM). I was given access so I can see how craptastic yum is running on there. I have to admit, it's bad in some places, not as bad in others.
Here are some interesting numbers I got from some tests.

xml-metadata parsing using the C-based parser is the longest part of the whole process. I can't imagine how long it would be if we weren't using the C-based metadata parser. I don't think I want to imagine it, either. It took about 8 minutes for everything. For fedora extras primary.xml (4919 pkgs) it took 1m38s. (I timed each one individually.)

A script like this:

import yum
import os
import sys
sys.exit()

takes 3.7s to run. If you remove the 'import yum' it takes 0.7s to run. If you put in 'import optik' instead of yum it takes 2.5s. yum imports a lot of stuff, and it looks like all the other checks it does on load are part of what eats up the time. (There's a quick timing snippet in the P.S. below for chasing this down per-import.)

yum --version takes 7s.

-bash-3.1# time rpm -qa >> /dev/null
real    0m14.632s
user    0m14.060s
sys     0m0.220s

-bash-3.1# time yum list installed >>/dev/null
real    0m9.797s
user    0m8.910s
sys     0m0.630s

Depsolving actually wasn't as big a portion of the time as I thought it would be for yum install/update commands. However, this machine was on a very good network connection, so there was no delay in downloading metadata files or headers for depsolving.

Once the metadata is parsed, things zip along much better than I expected on multiple commands because of the caching. Much worse than I'd want in other places, though.

So, the things we will need to figure out:

1. Can we store less data in primary.xml and still get what we need? Primary is fairly trimmed down already, so I'm not sure how we'd get there from here. Also, changing the format would be a big pain in the arse for a variety of reasons. Not impossible, just not a quick thing to fix. The only way I can think of would be a different format so we don't have to parse the xml, or pre-parsing the metadata into a sqlite db (rough sketch of that in the P.S. below). That would make downloads of the metadata larger, but maybe it would be faster for operations. For example - fedora extras:

-rw-r--r--  1 root root 1.6M Dec 16 03:06 primary.xml.gz
-rw-r--r--  1 root root 2.2M Dec 16 03:09 primary.xml.sqlite.bz2

Bzipped, the sqlite db of primary is 2.2M vs 1.6M for the xml itself.

2. How can we make the startup time for running yum faster than it is now? It's fairly abysmal on this machine. I was going to run 'import yum; foo = yum.YumBase()' under the profiler to see what is taking so long (sketch of that in the P.S. as well).

Any other ideas of obvious things to look at and/or interesting optimization ideas?

thanks,
-sv
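
P.S. a few rough, untested sketches of the things I mention above, so nobody has to guess what I mean. First, the kind of quick-and-dirty wall-clock timing I'm talking about for chasing down individual imports (nothing yum-specific here):

import time

start = time.time()
import yum      # swap in 'optik', 'rpm', etc. to compare the cost of each
print "import took %.2fs" % (time.time() - start)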
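
Second, a sketch of the pre-parsing idea from point 1. The schema and the handful of fields here are just made up to show the shape of it; it assumes cElementTree, the sqlite3 module (python 2.5 - on 2.4 you'd use pysqlite2), and the usual 'common' namespace that primary.xml uses:

import sys
import sqlite3        # on python 2.4: from pysqlite2 import dbapi2 as sqlite3
try:
    from xml.etree import cElementTree as ET     # python 2.5
except ImportError:
    import cElementTree as ET                    # standalone package on 2.4

# namespace of the package entries in primary.xml
NS = '{http://linux.duke.edu/metadata/common}'

def primary_to_sqlite(xmlpath, dbpath):
    db = sqlite3.connect(dbpath)
    cur = db.cursor()
    cur.execute("""create table packages
                   (name text, arch text, epoch text,
                    ver text, rel text, location text)""")
    # stream the xml so we never hold the whole tree in memory
    for event, elem in ET.iterparse(xmlpath):
        if elem.tag != NS + 'package':
            continue
        version = elem.find(NS + 'version')
        location = elem.find(NS + 'location')
        cur.execute("insert into packages values (?, ?, ?, ?, ?, ?)",
                    (elem.findtext(NS + 'name'),
                     elem.findtext(NS + 'arch'),
                     version.get('epoch'), version.get('ver'),
                     version.get('rel'), location.get('href')))
        elem.clear()     # drop the element once we've pulled what we need
    db.commit()
    db.close()

if __name__ == '__main__':
    primary_to_sqlite(sys.argv[1], sys.argv[2])

The real thing would obviously need the requires/provides and the rest of the format section to be useful for depsolving, but it should be enough to compare parse-and-query times against the xml.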
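
And third, the profiling run from point 2 would be something along these lines (cProfile is 2.5-only; the plain 'profile' module has the same interface if that's what's on the image, and the output filename is arbitrary):

import cProfile
import pstats

def startup():
    import yum        # module-level work in yum shows up under the import
    yum.YumBase()     # plus whatever YumBase.__init__ does

cProfile.run('startup()', '/tmp/yum-startup.prof')
stats = pstats.Stats('/tmp/yum-startup.prof')
stats.sort_stats('cumulative').print_stats(30)

That should at least tell us whether the time is going into the imports themselves or into YumBase() setup.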
