On Tue, Dec 22, 2009 at 2:45 PM, Mike Walter <mike.wal...@hewitt.com> wrote:

> The problem with bigger files on SFS is that they are on SFS.
>
> Try writing, say, a big honking (historically for CMS, but NOTHING to
> Linux) 12G file into SFS.  No problem - writes are a little slower than to
> a minidisk, but not enough to complain about (unless, perhaps, many others
> are trying to do the same thing using the same SFS server!).
>
> But when you erase that 12G file, SFS takes almost "forever".  Unlike an
> ECKD CMS filesystem minidisk where the FST is cleared for that file, SFS
> has to go through each of the 4K blocks that constitute the file, turning
> off its allocation bit.  For a large file that can take 15+ minutes (in
> the case of one 12G file we were working with).  And CPU utilization due
> to the server pretty much peaks out during that time, too.  The exact
> details may be slightly off in that example, but the observable effect is
> pretty accurate.
>
> Even one of the original SFS authors, Scott Nettleship, said repeatedly of
> SFS: "If you want minidisk performance, use minidisks".
>

Not understanding something: writing the file has to write each of the 4K
blocks too. Are you saying that the ERASE is slower than the WRITE? Or just
that while it feels reasonable for the original WRITE to take a while, the
fact that the ERASE is so slow is anomalous?
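For what it's worth, the rough arithmetic behind the quoted observation (my back-of-envelope numbers, not SFS internals) makes the asymmetry plausible:

```python
# Back-of-envelope sketch: erasing on SFS means visiting every 4K
# block's allocation bit, so the cost grows with file size.
file_size = 12 * 1024**3        # the 12G file from the example
block_size = 4 * 1024           # 4K SFS blocks

blocks = file_size // block_size
print(f"{blocks:,} allocation bits to clear")   # 3,145,728

# A minidisk ERASE just clears the file's FST entry: O(1) regardless
# of file size.  An SFS ERASE walks all `blocks` bits: O(n) in file
# size, which is why a multi-gigabyte erase can run for minutes while
# a minidisk erase of the same file is effectively instant.
```

The WRITE also touches every block, of course, but it is amortized over the time spent transferring 12G of data; the ERASE does all 3 million bit-flips with no data transfer to hide behind, which is why it feels anomalous.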
