> The limit is documented as "1 million inodes per TB".
> So something must not have gone right.  But many people have
> complained, and you could take the newfs source and fix the
> limitation.

"Patching" the source ourselves would not fly very far, but thanks for the 
clarification. I guess I have to assume, then, that somewhere around this 
million mark we also ran out of inodes. With the wide range in file sizes for 
the files, this doesn't surprise me. There was no way to tune the file system 
for anything.

> The discontinuity when going from <1TB to over 1TB is
> appalling.
> (<1TB allows for 137 million inodes; >= 1TB allows for
> 1 million per).

Either way, we were stuck. Our test/dev environment goes way beyond 1 million 
files (read: inodes). I think we hit the ceiling halfway into our data copy, 
if memory serves.

I think the argument I saw for this inode disparity was that a >1TB FS "was 
only for database files" and not the binaries, or something to that effect.
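For anyone wanting to see just how sharp that cliff is, here is a rough sketch of the documented discontinuity. The sub-1TB density here assumes the traditional newfs default of roughly one inode per 8 KB of data (the nbpi setting); the exact formula newfs uses may differ, but the documented "1 million inodes per TB" limit above 1TB is taken from the thread.

```python
# Sketch of the UFS inode-count discontinuity discussed above.
# Assumption: below 1 TB, newfs defaults to ~one inode per 8 KB (nbpi);
# at or above 1 TB, the documented limit is 1 million inodes per TB.

TB = 1 << 40  # bytes in one binary terabyte

def max_inodes(fs_bytes):
    """Approximate maximum inode count for a file system of fs_bytes."""
    if fs_bytes < TB:
        return fs_bytes // 8192            # ~one inode per 8 KB of data
    return (fs_bytes // TB) * 1_000_000    # documented 1M inodes per TB

# Just under 1 TB: ~134 million inodes.  At exactly 1 TB: 1 million.
print(max_inodes(TB - 1))
print(max_inodes(TB))
```

Under these assumptions, growing the file system by a single byte past the 1 TB mark cuts the inode ceiling by more than two orders of magnitude.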

> The rationale is fsck time (but logging is forced
> anyway)

I can't remember for sure, but this might have been mentioned in one of the 
notes I found.

> The 1 million limit is arbitrary and too low...
> 
> Casper

Thank you very much for the clarification, and for the candor. It is greatly 
appreciated.

Rainer
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss