> I don't think we want to change the default density. Larger
> partitions already get larger blocks and fragments, and as a
> consequence a lower number of inodes.
>

>> Otto,
>> In my tests on AMD64, once the FFS partition size goes beyond 30GB,
>> fsck starts taking exponential time even if you have zero used inodes.
>> It is a nested for i() { for j() } loop, and if you reduce the for j()
>> inner loop it is a win.
>
> Yes, it becomes very slow, but I don't think it is exponential.

Wow, this works even with the ***existing code***: I did a newfs -b 65536
-f 8192 wd0m (this has an implicit -i 32768).

fsck chewed through an 80G partition holding 2 clang static analyzer
runs (2100 files of 200KB each) within 1 minute. Before this, it never
got past pass 1 even after 5 hours.
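(Back-of-the-envelope, assuming pass 1 work scales with the inode count,
which is roughly partition size divided by the -i density in bytes:

  $ echo '80 * 2^30 / 8192' | bc    # default density, 4 * 2048-byte frags: ~10.5M inodes
  10485760
  $ echo '80 * 2^30 / 32768' | bc   # with -i 32768: ~2.6M inodes
  2621440

so a 4x cut in inodes to scan, before the larger blocks and fragments
even come into play.)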

Insanely fast fsck runs. Thanks Stuart and Otto. Why don't you make
this the newfs default? What does everybody say?
newfs -b 65536 -f 8192 -i 32768
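For anyone who wants to sanity-check what a given -b/-f combination
works out to before committing a disk to it, newfs -N should print the
parameters without actually creating the filesystem, and dumpfs -m (as
quoted below) prints back the equivalent newfs command for an existing
one; wd0m here is just my test partition:

  newfs -N -b 65536 -f 8192 wd0m    # -N: print the parameters only, write nothing
  dumpfs -m /mountpoint             # show the newfs command for an existing fs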

Somebody ought to update that section of the FAQ too!

I will try out your diff right now.

>>
>> dumpfs -m /downloads
>> # newfs command for /dev/wd0o
>> newfs -O 1 -b 16384 -e 4096 -f 2048 -g 16384 -h 64 -m 5 -o time -s
>> 172714816 /dev/wd0o
>>
>> So, if I read it correctly, setting just the block size higher, to
>> say 64KB, does auto-tune the fragment size to 1/8 of that, which is
>> 8KB (newfs complains appropriately), but auto-tuning the inode length
>> to 4 times the fragment size, which is 32KB, is not implemented now?
>> Is this the proposed formula?
>
> There's no such thing as inode length.
>

Sorry, what I meant was the number of bytes of data space allocated per
inode, i.e. the -i density?
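(In other words: if the proposed formula is fragment size = block size / 8
and bytes-of-data-per-inode = 4 * fragment size, then -b 65536 gives
-f 8192 and -i 32768, which matches the implicit -i I saw above.)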
