I don't buy this; the atime updates should be subject to caching,
and not get written to the disk any more often than the update daemon
(kflushd or whatever) forces them out.
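
For what it's worth, here's a quick sketch (mine, not anything out of
the kernel) of what I mean: stat(2) shows the atime bumped right after
a plain read, because that's the in-core inode; when the dirtied inode
actually reaches the disk is up to the flush daemon.  The fallback path
below is just an example; pass any readable file as the argument.

/*
 * Minimal illustration: an ordinary read() bumps the file's atime, as
 * seen through stat(2).  The updated atime lives in the in-core inode;
 * how often it gets written back is up to the kernel's periodic flushing.
 * On a filesystem mounted noatime it shouldn't change at all.
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <time.h>

int main(int argc, char **argv)
{
    const char *path;
    struct stat before, after;
    char buf[64];
    int fd;

    path = (argc > 1) ? argv[1] : "/etc/hostname";  /* any readable file */

    if (stat(path, &before) != 0) { perror("stat"); return 1; }

    fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    sleep(2);                     /* make the new atime visibly different */
    if (read(fd, buf, sizeof buf) < 0) { perror("read"); return 1; }
    close(fd);

    if (stat(path, &after) != 0) { perror("stat"); return 1; }

    printf("atime before read: %s", ctime(&before.st_atime));
    printf("atime after  read: %s", ctime(&after.st_atime));
    return 0;
}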
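
And if you do go the noatime route Tim suggests below, the equivalent of
'mount -o remount,noatime' can also be done programmatically through
mount(2) with MS_REMOUNT | MS_NOATIME; the device and mount point here
are made-up examples, it needs root, and in practice you'd just put
noatime in /etc/fstab:

/*
 * Sketch only: remount a hypothetical /data (backed by /dev/md0) with
 * noatime so that pure reads stop dirtying inodes.  Equivalent to
 * "mount -o remount,noatime /data"; must be run as root.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    if (mount("/dev/md0", "/data", "ext2",
              MS_REMOUNT | MS_NOATIME, NULL) != 0) {
        perror("mount");
        return 1;
    }
    puts("remounted /data with noatime");
    return 0;
}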

Jan Edler
NEC Research Institute

On Thu, Jul 29, 1999 at 09:20:15AM -0500, Tim Walberg wrote:
> For pure reads, there should be no significant difference. However, it
> is extremely difficult (with default mount options, anyway) to generate
> pure read access to a normal file system (discounting raw devices used
> for databases and such here), since every file access updates at least
> the inode atime of the file that's being read. So, a pure read
> pattern (from the user perspective) still generates quite a few writes,
> which can easily lead to a perceived degradation in performance.
> 
> If atime is not important for this particular file system, you might
> want to consider turning it off (a la 'mount -o noatime').
> 
> 
>                               tw
> 
> 
> On 07/29/1999 01:10 -0400, Jan Edler wrote:
> >>    Can anyone explain why a software raid5 array of N disks has
> >>    significantly lower read performance than a raid0 array of N-1 disks?
> >>    I'm only considering the case where there are no drive failures.
