On Mon, 2007-04-30 at 16:36 +0530, Aneesh Kumar wrote:
On 4/24/07, Avantika Mathur [EMAIL PROTECTED] wrote:
Ext4 Developer Interlock Call: 04/23/2007 Meeting Minutes
TESTING
- extents testing
  - Discussed methods for testing extents on highly fragmented filesystems.
  - Jose will look into possible tests, including perhaps using the 'aged' option in FFSB
  - Ted suggested creating a mount option that creates a bad block allocator
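A thought on the 'aged' idea above: I don't know the exact FFSB profile syntax for its aging option, so what follows is only a hand-rolled approximation. The mount point, file count and sizes are made up; the point is just to leave small free holes scattered across the disk before running the extent tests.

    # Sketch: crudely "age" a scratch filesystem so later allocations fragment.
    cd /mnt/test                      # illustrative mount point
    i=0
    while [ $i -lt 50000 ]; do
        dd if=/dev/zero of=f$i bs=4096 count=4 2>/dev/null
        i=`expr $i + 1`
    done
    i=0                               # now free every other file
    while [ $i -lt 50000 ]; do
        rm -f f$i
        i=`expr $i + 2`
    done
    sync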
Aneesh Kumar wrote:
What I am doing for creating a large number of extents is:

    # write 10 blocks, then keep writing 10-block chunks every 20 blocks,
    # leaving holes so the file ends up as many separate extents
    dd if=/dev/zero of=myfile count=10
    seek=20
    while [ 1 ]; do
        dd if=/dev/zero of=myfile count=10 seek=$seek
        seek=`expr $seek + 20`
    done

With AGGRESSIVE_TEST defined in include/linux/ext4_fs_extents.h you may get much
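For context, as I understand it AGGRESSIVE_TEST shrinks the number of extent entries that fit in the inode and in each index/leaf block, so the extent tree starts splitting and deepening after only a handful of writes. Once the loop above has run for a while, a quick sanity check of how fragmented the file actually is (names below are just the ones from the example) could be:

    # Sketch: see how many extents/block runs the test file ended up with.
    filefrag myfile          # e2fsprogs; prints something like "myfile: N extents found"
    filefrag -v myfile       # per-extent detail, if the installed filefrag supports -v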
Alex Tomas wrote:
Avantika Mathur wrote:
- Large file deletion
  - Valerie had recently tested large file deletion on ext3/4, but did not see the expected performance gain with ext4 due to compact metadata when using extents.

any details?
Ok, I found my mistake. There was a typo in my test script and the
Valerie Clement wrote:
Here are the results I obtain with a 2.6.17-rc7 kernel to delete a 100GB file:
ext3 : real 2m35.048s  user 0m0.000s  sys 0m6.424s
ext4 : real 0m11.160s  user 0m0.000s  sys 0m5.532s
xfs  : real 0m0.377s   user 0m0.004s  sys 0m0.004s
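For anyone wanting to reproduce this kind of number, the test boils down to timing the unlink of one very large file on a freshly mounted filesystem. A rough sketch; the device, mount point and sizes are illustrative, and the filesystem should be mounted however extents are enabled on the kernel under test (e.g. the 'extents' mount option on early ext4):

    # Sketch: time deletion of a single ~100GB file.
    mount /dev/sdb1 /mnt/test                                # illustrative device/mount point
    dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=102400  # write ~100GB of data
    sync
    umount /mnt/test && mount /dev/sdb1 /mnt/test            # remount so nothing is cached
    time rm /mnt/test/bigfile                                # the real/user/sys numbers above
    sync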
Avantika Mathur wrote:
- large filesystem
  - We would like to perform more testing on large (16TB) filesystems - currently hardware limitations are preventing this testing. We have tested 10TB raid disks, and 16TB loopback devices. Avantika will look into creating very large sparse
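A sketch of the sparse-file/loopback approach, in case it is useful; the paths, loop device and mkfs invocation are illustrative only, and the filesystem holding the backing file has to support files this large (e.g. XFS):

    # Sketch: build a ~16TB filesystem on a sparse backing file via loopback.
    dd if=/dev/zero of=/work/ext4-16t.img bs=1M count=0 seek=16777215   # just under 16TiB, sparse
    losetup /dev/loop0 /work/ext4-16t.img
    mke2fs -F /dev/loop0             # plus whatever ext4/extent options are under test
    mount /dev/loop0 /mnt/big
    df -h /mnt/big                   # confirm the size the kernel sees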