Re: optimal number of files in directory on ext2

2000-03-19 Thread Hans Reiser

[EMAIL PROTECTED] wrote:

> 
> Finally, note that by using multiple directories, what you are
> effectively doing is a Radix search, which for certain datasets (and
> from what I can tell, your file name space may very well qualify), a
> carefully designed Radix Search tree can be more efficient than a
> B-tree.

I don't understand this point.  Can you elaborate?

Hans

-- 
You can get ReiserFS at http://devlinux.org/namesys, and customizations and
industrial grade support at [EMAIL PROTECTED]



Re: optimal number of files in directory on ext2

2000-03-19 Thread tytso

   Date:   Wed, 15 Mar 2000 23:08:38 +0100 (CET)
   From: Michal Pise <[EMAIL PROTECTED]>

   I was asked to create an application which will handle about 100 000 000
   relatively small (about 1-2k) files. I decided to make a
   multi-level directory tree, but I don't know what the optimal number
   of files in one directory is. Is it 10, 100, 1000 or even more?

   Someone told me that a directory should be small enough that all the
   entries fit in directly addressed blocks - with 4kB blocks that's
   12*4kB = 48kB of space for filenames, or at least 1000 files per
   directory for short filenames.

   Another guy told me that a directory should be small enough that all
   the entries fit in the first block. That's something like 200 short
   filenames in one directory.

   But if I create 14 directories each with 14 files, the average lookup time
   will be 2 * 7 * time_to_explore_directory_entry + time_to_descend_into_dir
   instead of 100 * time_to_explore_directory_entry. And I think that
   time_to_descend_into_dir < 80 * time_to_explore_directory_entry. So it's
   better to create a tree with many levels and only a few files in each
   directory.

   Am I right? Did I miss something?

The important point to consider is that actually searching a directory
block is cheap (mere CPU time).  What is expensive is disk I/O.  This
comes from (a) reading the directory block, and (b) reading the inode
block to find out where the directory entries are.

Given that you will need to read the directory blocks to get at the data
regardless, the only cost to consider between the two approaches is
reading the inode data.  By using a flatter directory hierarchy, Linux
will need to read in fewer inodes, and those inodes, once read into
memory, will more likely be cached.

One further trick you can play is to pre-allocate space in the
directories so that their blocks are contiguously allocated.  See the
source to mklost+found (included below, since it's so small) for an
example of how to do this.  This will further speed up the search, and
make the fewer-directory-levels approach even more attractive, since you
avoid forcing the disk to seek all over the place.

Finally, note that by using multiple directories, what you are
effectively doing is a radix search, and for certain datasets (and
from what I can tell, your file name space may very well qualify) a
carefully designed radix search tree can be more efficient than a
B-tree.  So even if, in the future, you decide to move to a different
filesystem that has B-tree directories, such as xfs, reiserfs, or a
future version of ext3, you may still want to consider keeping this
structure in your program.

- Ted

/*
 * mklost+found.c - Creates a directory lost+found on a mounted second
 *                  extended file system
 *
 * Copyright (C) 1992, 1993  Remy Card <[EMAIL PROTECTED]>
 *
 * This file can be redistributed under the terms of the GNU General
 * Public License
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

#include <linux/ext2_fs.h>

#define LPF "lost+found"

int main (int argc, char ** argv)
{
	char name [EXT2_NAME_LEN];
	char path [sizeof (LPF) + 1 + 256];
	struct stat st;
	int i, j;
	int d;

	if (mkdir (LPF, 0755) == -1) {
		perror ("mkdir");
		exit (1);
	}

	/* Create files with maximum-length names until the directory has
	   grown to fill all of its directly addressed blocks ... */
	i = 0;
	memset (name, 'x', 252);
	do {
		sprintf (name + 252, "%02d", i);
		strcpy (path, LPF);
		strcat (path, "/");
		strcat (path, name);
		if ((d = creat (path, 0644)) == -1) {
			perror ("creat");
			exit (1);
		}
		i++;
		close (d);
		if (stat (LPF, &st) == -1) {
			perror ("stat");
			exit (1);
		}
	} while (st.st_size <= (EXT2_NDIR_BLOCKS - 1) * st.st_blksize);

	/* ... then unlink them all again; ext2 does not shrink directories,
	   so lost+found keeps its contiguously pre-allocated blocks. */
	for (j = 0; j < i; j++) {
		sprintf (name + 252, "%02d", j);
		strcpy (path, LPF);
		strcat (path, "/");
		strcat (path, name);
		if (unlink (path) == -1) {
			perror ("unlink");
			exit (1);
		}
	}
	exit (0);
}


Re: optimal number of files in directory on ext2

2000-03-17 Thread Kurt Garloff

On Thu, Mar 16, 2000 at 08:36:41AM +0100, Michal Pise wrote:
> > Forgive me, but this is what reiserfs handles best.  If you use reiserfs, you
> > can get log N search time and just put everything in one directory.  
> 
> Yes. I know about ReiserFS and I hope you get it into the standard kernel
> as soon as possible, but from the discussion on linux-kernel I have the
> feeling that ReiserFS is still not 100% stable. So we decided to use ext2
> until ReiserFS is no longer considered experimental.

From what I understood, the criticism was about the dcache handling design
and some other things in reiserfs which did not take into account some of
the VFS changes being done during the last year.

Al Viro is afraid that this could lead to races, especially in the 2.3 port
of reiserfs, and has maintenance nightmares because of that. For him, it
would probably even be easy to create a multithreaded program concurrently
doing renames and other nasty stuff to actually show a bug in reiserfs ...
He's right: that should be cleaned up.
But note that ext2, which in practice has proven to be extremely reliable,
also had a lot of those races. The known ones were fixed during the 2.1, 2.2
and 2.3 fs cleanups.

While the 2.2 port is probably also not perfect, there are fewer issues, and
it turns out that it is _very_ stable in real life. A lot of machines at
SuSE are running it nowadays, and some customers are using it too, without
any serious problems so far. I'd be a little bit more careful if you want
to use 2.3 kernels, as there is not that much experience with them yet.

I'd give it a try for your application, as you will actually profit a lot
from the tree-based design.

Regards,
-- 
Kurt Garloff  <[EMAIL PROTECTED]>  Eindhoven, NL
GPG key: See mail header, key servers Linux kernel development
SuSE GmbH, Nuernberg, FRG   SCSI, Security



Re: optimal number of files in directory on ext2

2000-03-16 Thread Michal Pise

> Forgive me, but this is what reiserfs handles best.  If you use reiserfs, you
> can get log N search time and just put everything in one directory.  

Yes. I know about ReiserFS and I hope you get it into the standard kernel
as soon as possible, but from the discussion on linux-kernel I have the
feeling that ReiserFS is still not 100% stable. So we decided to use ext2
until ReiserFS is no longer considered experimental.

Michal Pise



Re: optimal number of files in directory on ext2

2000-03-16 Thread Hans Reiser

Michal Pise wrote:
> 
> Hi,
> I was asked to create an application which will handle about 100 000 000
> relatively small (about 1-2k) files. I decided to make a
> multi-level directory tree, but I don't know what the optimal number
> of files in one directory is. Is it 10, 100, 1000 or even more?
> 
> Someone told me that a directory should be small enough that all the
> entries fit in directly addressed blocks - with 4kB blocks that's
> 12*4kB = 48kB of space for filenames, or at least 1000 files per
> directory for short filenames.
> 
> Another guy told me that a directory should be small enough that all
> the entries fit in the first block. That's something like 200 short
> filenames in one directory.
> 
> But if I create 14 directories each with 14 files, the average lookup time
> will be 2 * 7 * time_to_explore_directory_entry + time_to_descend_into_dir
> instead of 100 * time_to_explore_directory_entry. And I think that
> time_to_descend_into_dir < 80 * time_to_explore_directory_entry. So it's
> better to create a tree with many levels and only a few files in each
> directory.
> 
> Am I right? Did I miss something?
> 
> Thanks in advance, please cc: on me, I am not in list.
> 
> Michal Pise

Forgive me, but this is what reiserfs handles best.  If you use reiserfs, you
can get log N search time and just put everything in one directory.  

Hans
-- 
You can get ReiserFS at http://devlinux.org/namesys, and customizations and
industrial grade support at [EMAIL PROTECTED]



optimal number of files in directory on ext2

2000-03-15 Thread Michal Pise

Hi,
I was asked to create an application which will handle about 100 000 000
relatively small (about 1-2k) files. I decided to make a
multi-level directory tree, but I don't know what the optimal number
of files in one directory is. Is it 10, 100, 1000 or even more?

Someone told me that a directory should be small enough that all the
entries fit in directly addressed blocks - with 4kB blocks that's
12*4kB = 48kB of space for filenames, or at least 1000 files per
directory for short filenames.

Another guy told me that a directory should be small enough that all
the entries fit in the first block. That's something like 200 short
filenames in one directory.

But if I create 14 directories each with 14 files, the average lookup time
will be 2 * 7 * time_to_explore_directory_entry + time_to_descend_into_dir
instead of 100 * time_to_explore_directory_entry. And I think that
time_to_descend_into_dir < 80 * time_to_explore_directory_entry. So it's
better to create a tree with many levels and only a few files in each
directory.

Am I right? Did I miss something?

Thanks in advance, please cc: on me, I am not in list.

Michal Pise