On Thu, Feb 10, 2005 at 12:30:23PM -0800, Bryan Henderson wrote:
> >It's possible that by doing larger
> >IOs we save CPU and use that CPU to push more data?
> 
> This is absolutely right; my mistake -- the relevant number is CPU seconds 
> per megabyte moved, not CPU seconds per elapsed second.
> But I don't think we're close enough to 100% CPU utilization that this 
> explains much.
> 
> In fact, the curious thing here is that neither the disk nor the CPU seems 
> to be a bottleneck in the slow case.  Maybe there's some serialization I'm 
> not seeing that makes less parallelism between I/O and execution.  Is this 
> a single thread doing writes and syncs to a single file?

From what I've seen, without writepages the application thread itself
tends to do the writing by falling into balance_dirty_pages() during
its write() call, while in the writepages case a pdflush thread seems
to do more of the writeback.  This also depends somewhat on processor
speed (and count) and the amount of RAM.

To try to isolate this further, I've limited the RAM (1 GB) and the
number of CPUs (1) on my test setup.

So yes, there could be better parallelism in the writepages case.  But
again, this behavior could be a symptom rather than a cause, and I'm
not sure how to determine which.  Any suggestions?

Sonny
-
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html