I might suggest a pluggable algorithm... Nothing fancy like .so's or
anything, but maybe at least just clear comments for replacing it. I did
this for another project and used a two-level 256-directory structure,
which gets an even distribution among 65k dirs. Here's the algo if you
want it:

    BOOL CMembernameHash::Hash(LPSTR mname, int *d1, int *d2)
    {
        if (!d1 || !d2)
            return false;

        LPSTR ptr = mname;
        UINT hashVal = 0, ltr;

        while (*ptr) {
            /* lowercase letters before hashing, so the hash is
               case-insensitive */
            ltr = isupper(*ptr) ? tolower(*ptr++) : (*ptr++);
            hashVal = (hashVal << 5) + hashVal + ltr;
        }

        *d1 = hashVal % 250;                      /* low-level dir index */
        *d2 = (hashVal - (hashVal % 250)) / 250 % 250;  /* second-level dir index */
        return true;
    }

-----Original Message-----
From: zad [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 17, 2001 4:09 AM
To: [EMAIL PROTECTED]
Subject: xdb_file -- Was: RE: [JDEV] mod_auth_radius

Hello David, others,

We can still use the file system and get around administering a DBMS,
which can be cumbersome, by using a directory hash algorithm to save
the user.xml files. I have already done a patch to xdb_file.c that does
this and have been using it for a while (it works fine for me).

The logic is to calculate a directory name from 0 to 999 for each user,
based on the username provided. So eventually, under spool/myserver we
will have 1000 directories named 0-999. The directories are created as
needed.

The advantage of this approach is that your user.xml files get
distributed fairly evenly across 1000 subdirectories, so you won't see
a performance hit until each directory grows to around 10,000 files,
which takes a long time to reach. This is the logic behind some mail
systems.

I have already uploaded this to download.jabber.org (xdb_hash.tar.gz)
and "really wish" to see it make it into the standard server code. Any
advice or input is appreciated.
zad

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> Behalf Of David Waite
> Sent: Wednesday, May 16, 2001 4:23 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [JDEV] mod_auth_radius
>
> The filesystem actually does pretty well under ext2 until you get to a
> certain number of users (10,000?). The inode tables basically become a
> linked list once you get over a certain size, so you end up taking a
> big performance hit finding/opening/saving files. Reiser could
> probably handle this much better; it's my understanding that it
> creates a btree for inodes.
>
> -David Waite
>
> temas wrote:
>
> > Mostly just because it is on the disk. It does do caching and other
> > things like that, so it's not horrid, but caching does cost RAM.
> > Just considerations. I don't know of any numbers comparing xdb_file
> > on ext2 vs. Reiser or anything like that.
> >
> > --temas
> >
> > On 16 May 2001 10:02:30 +1000, Robert Norris wrote:
> > > > can easily accomplish that with a jabber setup. The current
> > > > xdb_file would work ok, but not well; plenty of others out there
> > > > now though.
> > >
> > > Does xdb_file not scale well because of the way it's implemented,
> > > or because it stores its data on disk? What if I were to use (say)
> > > ReiserFS for the spool, would it do any better?
> > >
> > > Rob.

_______________________________________________
jdev mailing list
[EMAIL PROTECTED]
http://mailman.jabber.org/listinfo/jdev