Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Daniel P. Berrange
On Wed, Nov 11, 2009 at 09:05:20PM +, Richard W.M. Jones wrote:
 On Wed, Nov 11, 2009 at 01:24:20PM -0600, Eric Sandeen wrote:
  Anybody got actual numbers?  I don't disagree that mkfs.ext4 is slow in  
  the default config, but I don't think it should be slower than mkfs.ext3  
  for the same sized disks.
 
 Easy with guestfish:
 
   $ guestfish --version
   guestfish 1.0.78
   $ for fs in ext2 ext3 ext4 xfs jfs ; do guestfish sparse /tmp/test.img 10G 
 : run : echo $fs : sfdiskM /dev/sda , : time mkfs $fs /dev/sda1 ; done
   ext2
   elapsed time: 5.21 seconds
   ext3
   elapsed time: 7.87 seconds
   ext4
   elapsed time: 6.10 seconds
   xfs
   elapsed time: 0.45 seconds
   jfs
   elapsed time: 0.78 seconds
 
 Note that because this is using a sparsely allocated disk each write
 to the virtual disk is very slow.  Change 'sparse' to 'alloc' to test
 this with a non-sparse file-backed disk.

You really want to avoid using sparse files at all when doing any kind of
benchmark / performance tests in VMs. The combo of a sparse file store on
a journalling filesystem in the host, w/ virt, can cause pathologically
bad I/O performance until the file has all its extents fully allocated on
the host FS. So the use of a sparse file may well be exaggerating the real
difference in elapsed time between these different mkfs calls in the
guest.

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

-- 
fedora-devel-list mailing list
fedora-devel-list@redhat.com
https://www.redhat.com/mailman/listinfo/fedora-devel-list


Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Richard W.M. Jones
On Thu, Nov 12, 2009 at 10:18:15AM +, Richard W.M. Jones wrote:
 [...] done
 ext2
 elapsed time: 3.48 seconds

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://et.redhat.com/~rjones/libguestfs/
See what it can do: http://et.redhat.com/~rjones/libguestfs/recipes.html



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Dennis J.

On 11/12/2009 04:03 PM, Eric Sandeen wrote:

Richard W.M. Jones wrote:

On Thu, Nov 12, 2009 at 09:54:12AM +, Daniel P. Berrange wrote:

On Wed, Nov 11, 2009 at 09:05:20PM +, Richard W.M. Jones wrote:

On Wed, Nov 11, 2009 at 01:24:20PM -0600, Eric Sandeen wrote:

Anybody got actual numbers? I don't disagree that mkfs.ext4 is slow
in the default config, but I don't think it should be slower than
mkfs.ext3 for the same sized disks.

Easy with guestfish:

$ guestfish --version
guestfish 1.0.78
$ for fs in ext2 ext3 ext4 xfs jfs ; do guestfish sparse
/tmp/test.img 10G : run : echo $fs : sfdiskM /dev/sda , : time mkfs
$fs /dev/sda1 ; done
ext2
elapsed time: 5.21 seconds
ext3
elapsed time: 7.87 seconds
ext4
elapsed time: 6.10 seconds
xfs
elapsed time: 0.45 seconds
jfs
elapsed time: 0.78 seconds

Note that because this is using a sparsely allocated disk each write
to the virtual disk is very slow. Change 'sparse' to 'alloc' to test
this with a non-sparse file-backed disk.

You really want to avoid using sparse files at all when doing any kind of
benchmark / performance tests in VMs. The combo of a sparse file store on
a journalling filesystem in the host, w/ virt, can cause pathologically
bad I/O performance until the file has all its extents fully allocated on
the host FS. So the use of a sparse file may well be exaggerating the real
difference in elapsed time between these different mkfs calls in the
guest.


Again, this time backed by a 10 GB logical volume in the host, so this
should remove pretty much all host effects:

$ for fs in ext2 ext3 ext4 xfs jfs reiserfs nilfs2 ntfs msdos btrfs
hfs hfsplus gfs gfs2 ; do guestfish add /dev/mapper/vg_trick-Temp :
run : zero /dev/sda : echo $fs : sfdiskM /dev/sda , : time mkfs $fs
/dev/sda1 ; done




ext2
elapsed time: 3.48 seconds



ext3
elapsed time: 5.45 seconds
ext4
elapsed time: 5.19 seconds


so here we have ext4 slightly faster, which was the original question... ;)

(dropping caches in between might be best, too...)


xfs
elapsed time: 0.35 seconds
jfs
elapsed time: 0.66 seconds
reiserfs
elapsed time: 0.73 seconds
nilfs2
elapsed time: 0.19 seconds
ntfs
elapsed time: 2.33 seconds
msdos
elapsed time: 0.29 seconds
btrfs
elapsed time: 0.16 seconds
hfs
elapsed time: 0.44 seconds
hfsplus
elapsed time: 0.46 seconds
gfs
elapsed time: 1.60 seconds
gfs2
elapsed time: 3.98 seconds

I'd like to repeat my proviso: I think this test is meaningless for
most users.


Until users have 8TB raids at home, which is not really that far off ...


Let's hope btrfs is production ready before then, because extX doesn't look 
like a fitting filesystem for such big drives due to their lack of online fsck.


Regards,
  Dennis



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Richard W.M. Jones
On Thu, Nov 12, 2009 at 09:03:02AM -0600, Eric Sandeen wrote:
 so here we have ext4 slightly faster, which was the original question... ;)

 (dropping caches in between might be best, too...)

It starts a whole new VM between each test.

 Until users have 8TB raids at home, which is not really that far off ...

:-)

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into Xen guests.
http://et.redhat.com/~rjones/virt-p2v



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Eric Sandeen

Dennis J. wrote:

On 11/12/2009 04:03 PM, Eric Sandeen wrote:

Richard W.M. Jones wrote:


...


I'd like to repeat my proviso: I think this test is meaningless for
most users.


Until users have 8TB raids at home, which is not really that far off ...


Let's hope btrfs is production ready before then, because extX doesn't 
look like a fitting filesystem for such big drives due to their lack of 
online fsck.


ext4's fsck is much faster than ext3's, and xfs's repair tool is also 
pretty speedy.


Both are offline, but so far online fsck for btrfs is just a goal, no 
(released, anyway) code yet AFAIK.


-Eric


Regards,
  Dennis





Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Dennis J.

On 11/12/2009 05:59 PM, Eric Sandeen wrote:

Dennis J. wrote:

On 11/12/2009 04:03 PM, Eric Sandeen wrote:

Richard W.M. Jones wrote:


...


I'd like to repeat my proviso: I think this test is meaningless for
most users.


Until users have 8TB raids at home, which is not really that far off ...


Let's hope btrfs is production ready before then, because extX doesn't
look like a fitting filesystem for such big drives due to their lack of
online fsck.


ext4's fsck is much faster than ext3's, and xfs's repair tool is also
pretty speedy.

Both are offline, but so far online fsck for btrfs is just a goal, no
(released, anyway) code yet AFAIK.


Isn't the speed improvement of ext4's fsck achieved by not dealing with empty 
extents/blocks? If so, that wouldn't help you much if those 8TB are really 
used. But even a speedy fsck is going to take longer and longer as 
filesystem size grows, which is why I believe we will soon reach a point 
where offline fsck simply isn't a viable option anymore.
I have a 30TB storage system that I chopped into ten individual volumes 
because current filesystems don't really make creating a single 30TB fs a 
wise choice, even though I'd like to be able to do that.


Regards,
  Dennis



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Ric Wheeler

On 11/12/2009 01:30 PM, Dennis J. wrote:

On 11/12/2009 05:59 PM, Eric Sandeen wrote:

Dennis J. wrote:

On 11/12/2009 04:03 PM, Eric Sandeen wrote:

Richard W.M. Jones wrote:


...


I'd like to repeat my proviso: I think this test is meaningless for
most users.


Until users have 8TB raids at home, which is not really that far off
...


Let's hope btrfs is production ready before then, because extX doesn't
look like a fitting filesystem for such big drives due to their lack of
online fsck.


ext4's fsck is much faster than ext3's, and xfs's repair tool is also
pretty speedy.

Both are offline, but so far online fsck for btrfs is just a goal, no
(released, anyway) code yet AFAIK.


Isn't the speed improvement of ext4's fsck achieved by not dealing with empty
extents/blocks? If so, that wouldn't help you much if those 8TB are
really used. But even a speedy fsck is going to take longer and longer
as filesystem size grows, which is why I believe we will soon reach a
point where offline fsck simply isn't a viable option anymore.
I have a 30TB storage system that I chopped into ten individual volumes
because current filesystems don't really make creating a single 30TB fs
a wise choice, even though I'd like to be able to do that.

Regards,
Dennis



In our testing with f12, I built a 60TB ext4 file system with 1 billion small 
files. A forced fsck of ext4 finished in 2.5 hours give or take a bit :-) The 
fill was artificial and the file system was not aged, so real world results will 
probably be slower.


fsck time scales mostly with the number of allocated files in my experience. 
Allocated blocks (fewer very large files) are quite quick.


ric



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Roberto Ragusa
Ric Wheeler wrote:
 In our testing with f12, I built a 60TB ext4 file system with 1 billion
 small files. A forced fsck of ext4 finished in 2.5 hours give or take a
 bit :-) The fill was artificial and the file system was not aged, so
 real world results will probably be slower.
 
 fsck time scales mostly with the number of allocated files in my
 experience. Allocated blocks (fewer very large files) are quite quick.
 

What kind of machine did you use?

With 60TB a simple allocation bitmap for 4k-blocks takes almost 2GB;
and this is just to detect free space or double allocation of blocks.
Wow.
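Roberto's "almost 2GB" figure is easy to sanity-check, assuming 4 KiB blocks
and one bit per block in the allocation bitmap (the classic ext layout):

```shell
# 60 TiB of 4 KiB blocks, one bit per block in the allocation bitmap:
tib=1099511627776                    # 2^40 bytes
fs_bytes=$((60 * tib))               # 60 TiB
blocks=$((fs_bytes / 4096))          # 16,106,127,360 blocks
bitmap_bytes=$((blocks / 8))         # one bit per block
echo "$bitmap_bytes bytes"           # 2013265920 bytes, just under 2 GiB
```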

-- 
   Roberto Ragusa    mail at robertoragusa.it



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Eric Sandeen
Roberto Ragusa wrote:
 Ric Wheeler wrote:
 In our testing with f12, I built a 60TB ext4 file system with 1 billion
 small files. A forced fsck of ext4 finished in 2.5 hours give or take a
 bit :-) The fill was artificial and the file system was not aged, so
 real world results will probably be slower.

 fsck time scales mostly with the number of allocated files in my
 experience. Allocated blocks (fewer very large files) are quite quick.

 
 What kind of machine did you use?
 
 With 60TB a simple allocation bitmap for 4k-blocks takes almost 2GB;
 and this is just to detect free space or double allocation of blocks.
 Wow.
 

The box did have a lot of memory, it's true :)

But ext4 also uses the uninit_bg feature:

uninit_bg
  Create  a filesystem without initializing all of the
  block groups.  This feature also  enables  checksums
  and  highest-inode-used  statistics  in  each block-
  group.  This feature can speed  up  filesystem  cre-
  ation   time   noticeably  (if  lazy_itable_init  is
  enabled), and can also reduce e2fsck  time  dramati-
  cally.   It is only supported by the ext4 filesystem
  in recent Linux kernels.

-Eric



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-12 Thread Ric Wheeler

On 11/12/2009 03:27 PM, Eric Sandeen wrote:

Roberto Ragusa wrote:

Ric Wheeler wrote:

In our testing with f12, I built a 60TB ext4 file system with 1 billion
small files. A forced fsck of ext4 finished in 2.5 hours give or take a
bit :-) The fill was artificial and the file system was not aged, so
real world results will probably be slower.

fsck time scales mostly with the number of allocated files in my
experience. Allocated blocks (fewer very large files) are quite quick.



What kind of machine did you use?

With 60TB a simple allocation bitmap for 4k-blocks takes almost 2GB;
and this is just to detect free space or double allocation of blocks.
Wow.



The box did have a lot of memory, it's true :)

But ext4 also uses the uninit_bg feature:

uninit_bg
   Create  a filesystem without initializing all of the
   block groups.  This feature also  enables  checksums
   and  highest-inode-used  statistics  in  each block-
   group.  This feature can speed  up  filesystem  cre-
   ation   time   noticeably  (if  lazy_itable_init  is
   enabled), and can also reduce e2fsck  time  dramati-
   cally.   It is only supported by the ext4 filesystem
   in recent Linux kernels.

-Eric



A lot in this case was 40GB of DRAM - fsck (IIRC) consumed about 13GB of 
virtual address space during the run.


Ric



cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-11 Thread Richard W.M. Jones
Create a 128 MB input file:

  cd /tmp
  dd if=/dev/zero of=input bs=1024k count=128

and then create a cpio file from that to various target filesystems:

  echo input | time cpio --quiet -o -H newc > /path/to/fs/output

I created ext2, ext3, ext4, xfs and tmpfs filesystems and mounted them
(all default options).  All timings on baremetal, quiet machine, with
a hot cache, and then averaged over three runs:

  tmpfs  0.77 s   x 1.0
  ext2   1.12 s   x 1.5
  xfs    1.66 s   x 2.1
  ext3   2.58 s   x 3.4
  ext4   5.59 s   x 7.3

You can see that ext4 seems to do significantly worse than the others.
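A minimal sketch of the "averaged over three runs" measurement above (the
/mnt/ext4 mount point and the /tmp/times file are hypothetical, and GNU
time is assumed for the -f/-o/-a flags):

```shell
# Time the same cpio run three times, then average the wall-clock results.
rm -f /tmp/times
for i in 1 2 3; do
    /usr/bin/time -f '%e' -o /tmp/times -a \
        sh -c 'echo input | cpio --quiet -o -H newc > /mnt/ext4/output'
done
awk '{ sum += $1 } END { printf "avg %.2f s\n", sum / NR }' /tmp/times
```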

I looked at the strace of cpio and it does 512 byte writes.  I'm going
to try to fix that so it does larger writes, but I'm not sure if that
matters (shouldn't the kernel combine these writes?).  The reason I'm
concentrating on cpio (instead of cp) is that it was while creating a
cpio format archive that I noticed that ext4 was performing very
poorly.
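As a rough illustration of why the write size might matter (this is not the
original benchmark; the 16 MB size and /tmp paths are made up), dd can replay
the same copy with both write sizes:

```shell
# Copy the same file with 512-byte vs 64 KiB writes; the data written is
# identical, but the small-block run issues 128x as many write() syscalls.
dd if=/dev/zero of=/tmp/input bs=1024k count=16 2>/dev/null
time dd if=/tmp/input of=/tmp/out-small bs=512   2>/dev/null
time dd if=/tmp/input of=/tmp/out-big   bs=65536 2>/dev/null
cmp /tmp/out-small /tmp/out-big && echo "outputs identical"
```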

Rich.

kernel 2.6.31.1-56.fc12.x86_64
cpio-2.10-3.fc12.x86_64

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://et.redhat.com/~rjones/virt-top



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-11 Thread Richard W.M. Jones
On Wed, Nov 11, 2009 at 10:14:21AM +, Richard W.M. Jones wrote:
   echo input | time cpio --quiet -o -H newc > /path/to/fs/output

Update: I found the -C option that lets me specify the blocksize, and
raising it to something sensible (65536) shows major improvements in
performance for all filesystems.  

  echo input | time cpio -C 65536 --quiet -o -H newc > /path/to/fs/output

   tmpfs  0.77 s   x 1.0
   ext2   1.12 s   x 1.5
   xfs    1.66 s   x 2.1
   ext3   2.58 s   x 3.4
   ext4   5.59 s   x 7.3

The new times are:

  tmpfs 0.20 s   x 1.0
  ext2  0.30 s   x 1.5
  xfs   0.41 s   x 2.1
  ext3  0.57 s   x 2.9
  ext4  0.44 s   x 2.2

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
New in Fedora 11: Fedora Windows cross-compiler. Compile Windows
programs, test, and build Windows installers. Over 70 libraries supprt'd
http://fedoraproject.org/wiki/MinGW http://www.annexia.org/fedora_mingw



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-11 Thread Farkas Levente
On 11/11/2009 11:53 AM, Richard W.M. Jones wrote:
 On Wed, Nov 11, 2009 at 10:14:21AM +, Richard W.M. Jones wrote:
   echo input | time cpio --quiet -o -H newc > /path/to/fs/output
 
 Update: I found the -C option that lets me specify the blocksize, and
 raising it to something sensible (65536) shows major improvements in
 performance for all filesystems.  
 
   echo input | time cpio -C 65536 --quiet -o -H newc > /path/to/fs/output
 
   tmpfs  0.77 s   x 1.0
   ext2   1.12 s   x 1.5
   xfs    1.66 s   x 2.1
   ext3   2.58 s   x 3.4
   ext4   5.59 s   x 7.3
 
 The new times are:
 
   tmpfs 0.20 s   x 1.0
   ext2  0.30 s   x 1.5
   xfs   0.41 s   x 2.1
   ext3  0.57 s   x 2.9
   ext4  0.44 s   x 2.2

IMHO it's still a bug. Couldn't we somehow raise the default block size, or
make the writes buffered, or ... ?  The current situation is not correct.

-- 
  Levente   Si vis pacem para bellum!



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-11 Thread Gene Czarcinski
On Wednesday 11 November 2009 06:41:58 Farkas Levente wrote:
 On 11/11/2009 11:53 AM, Richard W.M. Jones wrote:
  On Wed, Nov 11, 2009 at 10:14:21AM +, Richard W.M. Jones wrote:
    echo input | time cpio --quiet -o -H newc > /path/to/fs/output
 
  Update: I found the -C option that lets me specify the blocksize, and
  raising it to something sensible (65536) shows major improvements in
  performance for all filesystems.
 
    echo input | time cpio -C 65536 --quiet -o -H newc > /path/to/fs/output
 
    tmpfs  0.77 s   x 1.0
    ext2   1.12 s   x 1.5
    xfs    1.66 s   x 2.1
    ext3   2.58 s   x 3.4
    ext4   5.59 s   x 7.3
 
  The new times are:
 
    tmpfs 0.20 s   x 1.0
    ext2  0.30 s   x 1.5
    xfs   0.41 s   x 2.1
    ext3  0.57 s   x 2.9
    ext4  0.44 s   x 2.2
 
 IMHO it's still a bug. Couldn't we somehow raise the default block size, or
 make the writes buffered, or ... ?  The current situation is not correct.
 
I am not sure if this is related or not ...

During the F12 development cycle, I have done a number of installs on both 
bare hardware and qemu-kvm guests.

In all cases, I have formatted the root (/) partition as ext4.  I have 
noticed that formatting the partition as ext4 seems to take considerably more 
wall-clock time than my previous experience with ext3 partitions.

I do not know if this is because ext4 formatting needs to do a lot more work 
than ext3 or if there is a performance issue.

Gene



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-11 Thread Eric Sandeen

Gene Czarcinski wrote:

...


I am not sure if this is related or not ...

During the F12 development cycle, I have done a number of installs on both 
bare hardware and qemu-kvm guests.


In all cases, I have formatted the root (/) partition as ext4.  I have 
noticed that formatting the partition as ext4 seems to take considerably more 
wall-clock time than my previous experience with ext3 partitions.


I do not know if this is because ext4 formatting needs to do a lot more work 
than ext3 or if there is a performance issue.


Gene



There shouldn't be a big difference, but if you want to do some tests, 
find a difference, and report back with some times, I'd be interested.


-Eric



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-11 Thread Frank Ch. Eigler
Gene Czarcinski g...@czarc.net writes:

 [...]  In all cases, I have formatted the root (/) partition as
 ext4.  I have noticed that formatting the partition for ext4 seems
 to take considerably more wall-clock time for ext4 partitions than
 my previous experience with ext3 partitions. [...]

I have seen the same thing; this sort of thing appeared to help:

  mkfs.ext4 -O uninit_bg -E lazy_itable_init=1

- FChE



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-11 Thread Eric Sandeen

Frank Ch. Eigler wrote:

Gene Czarcinski g...@czarc.net writes:


[...]  In all cases, I have formatted the root (/) partition as
ext4.  I have noticed that formatting the partition for ext4 seems
to take considerably more wall-clock time for ext4 partitions than
my previous experience with ext3 partitions. [...]


I have seen the same thing; this sort of thing appeared to help:

  mkfs.ext4 -O uninit_bg -E lazy_itable_init=1

- FChE



lazy_itable_init isn't yet safe, unfortunately; we still need kernel 
background zeroing to make it so ...


Anybody got actual numbers?  I don't disagree that mkfs.ext4 is slow in 
the default config, but I don't think it should be slower than mkfs.ext3 
for the same sized disks.


You sure your disks didn't just get bigger since the F9 days? :)

-Eric



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-11 Thread Richard W.M. Jones
On Wed, Nov 11, 2009 at 09:05:20PM +, Richard W.M. Jones wrote:
   ext2
   elapsed time: 5.21 seconds
   ext3
   elapsed time: 7.87 seconds
   ext4
   elapsed time: 6.10 seconds
   xfs
   elapsed time: 0.45 seconds
   jfs
   elapsed time: 0.78 seconds

Sod it, let's do all the others too ...

$ for fs in reiserfs nilfs2 ntfs msdos btrfs hfs hfsplus gfs gfs2 ; do 
guestfish sparse /tmp/test.img 10G : run : echo $fs : sfdiskM /dev/sda , : time 
mkfs $fs /dev/sda1 ; done
reiserfs
elapsed time: 1.15 seconds
nilfs2
elapsed time: 0.12 seconds
ntfs
elapsed time: 3.09 seconds
msdos
elapsed time: 0.38 seconds
btrfs
elapsed time: 0.07 seconds
hfs
elapsed time: 0.42 seconds
hfsplus
elapsed time: 0.49 seconds
gfs
elapsed time: 5.37 seconds
gfs2
elapsed time: 4.93 seconds

(By the way I really don't think that mkfs time matters that much :-)

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://et.redhat.com/~rjones/virt-df/



Re: cpio to ext4 seems much slower than to ext2, ext3 or xfs

2009-11-11 Thread Richard W.M. Jones
On Wed, Nov 11, 2009 at 01:24:20PM -0600, Eric Sandeen wrote:
 Anybody got actual numbers?  I don't disagree that mkfs.ext4 is slow in  
 the default config, but I don't think it should be slower than mkfs.ext3  
 for the same sized disks.

Easy with guestfish:

  $ guestfish --version
  guestfish 1.0.78
  $ for fs in ext2 ext3 ext4 xfs jfs ; do guestfish sparse /tmp/test.img 10G : 
run : echo $fs : sfdiskM /dev/sda , : time mkfs $fs /dev/sda1 ; done
  ext2
  elapsed time: 5.21 seconds
  ext3
  elapsed time: 7.87 seconds
  ext4
  elapsed time: 6.10 seconds
  xfs
  elapsed time: 0.45 seconds
  jfs
  elapsed time: 0.78 seconds

Note that because this is using a sparsely allocated disk each write
to the virtual disk is very slow.  Change 'sparse' to 'alloc' to test
this with a non-sparse file-backed disk.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into Xen guests.
http://et.redhat.com/~rjones/virt-p2v
