On 10/4/11 6:53 PM, Ric Wheeler wrote:
> On 10/04/2011 07:19 PM, Przemek Klosowski wrote:
>> On 10/03/2011 06:33 PM, Eric Sandeen wrote:
>>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>>>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>>>>> testing something more real-world (20T ... 500T?) might still be 
>>>>> interesting.
>>>> Here's my test script:
>>>>
>>>>     qemu-img create -f qcow2 test1.img 500T&&   \
>>>>       guestfish -a test1.img \
>>>>         memsize 4096 : run : \
>>>>         part-disk /dev/vda gpt : mkfs ext4 /dev/vda1
>> ...
>>>> At 100T it doesn't run out of memory, but the man behind the curtain
>>>> starts to show.  The underlying qcow2 file grows to several gigs and I
>>>> had to kill it.  I need to play with the lazy init features of ext4.
>>>>
>>>> Rich.
>>>>
>>> Bleah.  Care to use xfs? ;)
>> Why not btrfs? I am testing a 24TB physical server and ext4 creation
>> took forever while btrfs was almost instant. I understand it's still
>> experimental (I hear storing virtual disk images on btrfs still has
>> unresolved performance problems) but vm disk storage should be fine.
>> FWIW I have been using btrfs as my /home at home for some time now;
>> so far so good.
> 
> Creating an XFS file system is also a matter of seconds (both xfs and btrfs
> do dynamic inode allocation).
> 
> Note that ext4 has a new feature that allows inodes to be initialized in the 
> background, so you will see much quicker mkfs.ext4 times as well :)

Right; for large ext4 filesystems (or for testing them), try:

# mkfs.ext4 -E lazy_itable_init=1 /dev/blah

This makes mkfs skip inode table initialization, which speeds it up a LOT.
It'll also keep sparse test images much smaller, since the inode tables never
get written out at mkfs time.
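
For example, a rough sketch of what I mean on a plain sparse file (the filename
and size here are picked out of thin air):

    truncate -s 20T test-ext4.img
    time mkfs.ext4 -F -E lazy_itable_init=1 test-ext4.img
    du -h --apparent-size test-ext4.img    # nominal size: 20T
    du -h test-ext4.img                    # blocks actually allocated: still small

With -E lazy_itable_init=0 the zeroed inode tables all get written out, so the
same run takes far longer and the sparse file balloons, which is basically what
was happening to the qcow2 image above.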

IMHO this should probably be made the default above a certain filesystem size.

The tradeoff is that inode table initialization then happens in kernelspace
after the filesystem is mounted, with efforts made to do it in the background
and not impact other I/O too much.
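
If you want to watch or throttle that, a quick sketch (the mount point is just
an example; init_itable=n is the ext4 mount option that scales how long the
background thread waits between block groups, default 10):

    mount -o init_itable=20 /dev/blah /mnt/test
    ps ax | grep '[e]xt4lazyinit'    # the background zeroing thread

A bigger n makes the background init gentler on other I/O, at the cost of
taking longer to finish.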

-Eric

> ric
> 
