Ian Jackson <[EMAIL PROTECTED]> writes:

> On `small memory' systems dpkg switches to a different data structure
> which is about twice as slow for general access on a big machine, but
> has a much smaller working set so is much faster for setup and access
> on small machines.  dpkg uses sysinfo(2) to guess which algorithm to
> use, and you can force one or the other using command line options.  I
> have checked this on a 3Mb system and it worked as expected.
You might want to tune the threshold upwards.  At least on this 16M
machine --smallmem is noticeably faster and causes less disk thrashing.

> (The resulting structures will still be editable with emacs.)

Yay, of course everything is editable with emacs...

> It does ask the kernel to confirm that changes have been committed to
> disk before it continues.  Using fsync, you realize

Linux's fsync implementation does a full sync.  It's probably the cause
of much of the disk activity, but it is quite important.

Steve Dunham <[EMAIL PROTECTED]> writes:

> I think a single text file would be noticeably faster than a bunch of
> *.list files, but I don't know how much time is spent on I/O and how
> much is spent on building data structures in memory.  (It would save
> the time of scanning the directory, opening and closing all the
> files.)

Opening files in a large directory can be extremely inefficient in many
Unix varieties: the kernel has to do a linear search for each file.
Linux 2.1 should be faster because of the dentry stuff, but even so it
would be more efficient to use a directory for each package, with the
various control files inside.

greg