I'm using reiserfs, and the response time for a single random URL is OK, even 
when the host contains thousands (6000+) of files. 

But when I request the HTTP index of such a host (or even the lasttime index 
if it contains many items, or wwwoffle-ls) I sometimes have to wait more than 
a minute. This is caused not primarily by big directories, but by the large 
number of U* files. When building an index, wwwoffle has to look into every 
U* file, and the OS is forced to read thousands of files scattered across the 
whole hard disk. IMHO, hash->URL resolving using U* files is the worst 
possible solution. I think Berkeley DB would be best suited for such a purpose. 

But it isn't a hot problem for me because I only use these indexes 
occasionally.

Juraj

On Saturday 06 September 2003 15:24, Andrew M. Bishop wrote:
> Andy Rabagliati <[EMAIL PROTECTED]> writes:
> > At the moment a site's files, and URL unhash, are kept in a flat
> > directory /var/spool/wwwoffle/http/www.domain.com/*.
> >
> > These directories can get really big, and can take a significant
> > time to open.
