Andreas Dilger wrote:
> The test shows ext4 finishing marginally faster in the write case, and
> marginally slower in the read case. What happens if you have 4 parallel
> readers?
http://people.redhat.com/esandeen/seekwatcher/ext4_4_thread_par_read.png
http://people.redhat.com/esandeen/seekwatch
Shapor Naghibzadeh wrote:
> On Wed, Nov 07, 2007 at 04:42:59PM -0600, Eric Sandeen wrote:
>> Again this was on a decent HW raid so seek penalties are probably not
>> too bad.
>
> You may want to verify that by doing a benchmark on the raw device. I
> recently did some benchmarks doing random I/O
On Wed, Nov 07, 2007 at 04:42:59PM -0600, Eric Sandeen wrote:
> Again this was on a decent HW raid so seek penalties are probably not
> too bad.
You may want to verify that by doing a benchmark on the raw device. I
recently did some benchmarks doing random I/O on a Dell 2850 w/ a PERC
(megaraid)
On Wed, Nov 07, 2007 at 03:36:05PM +0100, Jan Kara wrote:
> > What if more than one application wants to use this facility?
>
> That should be fine - let's see: Each application keeps, somewhere, the
> time when it started a scan of a subtree (or it can actually remember the
> time when it set the f
Hi,
could you try a larger preallocation, like 512/1024/2048 blocks, please?
thanks, Alex
Eric Sandeen wrote:
I tried ext4 vs. xfs doing 4 parallel 2G IO writes in 1M units to 4
different subdirectories of the root of the filesystem:
http://people.redhat.com/esandeen/seekwatcher/ext4_4_threa
Andreas Dilger wrote:
> The question is what the "best" result is for this kind of workload?
> In HPC applications the common case is that you will also have the data
> files read back in parallel instead of serially.
Agreed, I'm not trying to argue what's better or worse, I'm just seeing
what it
On Nov 07, 2007 16:42 -0600, Eric Sandeen wrote:
> I tried ext4 vs. xfs doing 4 parallel 2G IO writes in 1M units to 4
> different subdirectories of the root of the filesystem:
>
> http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads.png
> http://people.redhat.com/esandeen/seekwatcher/xfs
I tried ext4 vs. xfs doing 4 parallel 2G IO writes in 1M units to 4
different subdirectories of the root of the filesystem:
http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads.png
http://people.redhat.com/esandeen/seekwatcher/xfs_4_threads.png
http://people.redhat.com/esandeen/seekwatcher
Eric Sandeen wrote:
> When mounting an ext4 filesystem with corrupted s_first_data_block, things
> can go very wrong and oops.
>
> Because blocks_count in ext4_fill_super is a u64, and we must use do_div,
> the calculation of db_count is done differently than on ext4.
Urgh... "than on ext3"
-E
When mounting an ext4 filesystem with corrupted s_first_data_block, things
can go very wrong and oops.
Because blocks_count in ext4_fill_super is a u64, and we must use do_div,
the calculation of db_count is done differently than on ext4. If
first_data_block is corrupted such that it is larger
Andreas Dilger wrote:
> On Nov 06, 2007 13:54 -0600, Eric Sandeen wrote:
>> Hmm bad news is when I add uninit_groups into the mix, it goes a little
>> south again, with some out-of-order extents. Not the end of the world,
>> but a little unexpected?
> I think part of the issue is that by default
On Nov 06, 2007 13:54 -0600, Eric Sandeen wrote:
> Hmm bad news is when I add uninit_groups into the mix, it goes a little
> south again, with some out-of-order extents. Not the end of the world,
> but a little unexpected?
>
>
> Discontinuity: Block 1430784 is at 24183810 (was 24181761)
> D
On Nov 06, 2007 13:51 -0500, Theodore Tso wrote:
> On Tue, Nov 06, 2007 at 09:12:55AM +0800, Andreas Dilger wrote:
> > What is needed is an ext2prepare-like step that involves resize2fs code
> > to move the file/dir blocks and then the move inode table, as if the
> > filesystem were going to be re
On Tue 06-11-07 18:01:00, Al Viro wrote:
> On Tue, Nov 06, 2007 at 06:19:45PM +0100, Jan Kara wrote:
> > Implement recursive mtime (rtime) feature for ext3. The feature works as
> > follows: In each directory we keep a flag EXT3_RTIME_FL (modifiable by a
> > user)
> > whether rtime should be updat
On Tue 06-11-07 14:40:12, Theodore Tso wrote:
> On Tue, Nov 06, 2007 at 06:19:45PM +0100, Jan Kara wrote:
> > Intended use case is that application which wants to watch any
> > modification in a subtree scans the subtree and sets flags for all
> > inodes there. Next time, it just needs to recurse i
Hello,
sorry for replying to myself, but I've just found out that the patch I sent
was an old version which had some problems. Attached is a new version.
On Tue 06-11-07 12:31:42, Jan Kara wrote:
> it seems attached patch still did not get your attention. It makes
> e2fsprog
On Tue 06-11-07 10:04:47, H. Peter Anvin wrote:
> Arjan van de Ven wrote:
> >On Tue, 6 Nov 2007 18:19:45 +0100
> >Jan Kara <[EMAIL PROTECTED]> wrote:
> >
> >>Implement recursive mtime (rtime) feature for ext3. The feature works
> >>as follows: In each directory we keep a flag EXT3_RTIME_FL
> >>(mod