Excerpts from Bron Gondwana's message of 2010-11-16 07:54:45 -0500:
> Just posting this again more neatly formatted and just the
> 'meat':
> 
> a) program creates piles of small temporary files, hard
>    links them out to different directories, unlinks the
>    originals.
> 
> b) filesystem size: ~ 300Gb (backed by hardware RAID5)
> 
> c) as the filesystem grows (currently about 30% full) 
>    the unlink performance becomes horrible.  Watching
>    iostat, there's a lot of reading going on as well.
> 
> Is this expected?  Is there anything we can do about it?
> (short of rewriting Cyrus replication)

Hi,

It sounds like the unlink speed is limited by the reading, and the reads
are coming from one of two places.  We're either reading to cache cold
block groups or we're reading to find the directory entries.

Could you sysrq-w while the performance is bad?  That would narrow it
down.
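For reference, sysrq-w dumps backtraces of tasks blocked in uninterruptible (D-state) sleep into the kernel log, which is what would show whether unlink is stuck in block-group reads or directory lookups. A minimal way to trigger and capture it (requires root and magic sysrq support; the guard here is just so the snippet is safe to run as-is):

```shell
# Dump blocked-task (D-state) backtraces via magic sysrq and show them.
# Needs root, CONFIG_MAGIC_SYSRQ, and sysrq enabled (kernel.sysrq sysctl).
if [ "$(id -u)" -eq 0 ] && [ -w /proc/sysrq-trigger ]; then
    echo w > /proc/sysrq-trigger     # same as alt-sysrq-w on the console
    dmesg | tail -n 50               # the backtraces land in the kernel log
    msg="sysrq-w triggered"
else
    msg="need root (and sysrq enabled) to trigger sysrq-w"
    echo "$msg"
fi
```

Running it while an unlink is visibly slow catches the offending task mid-wait, which is the point.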

Josef has the reads for caching block groups fixed, but we'll have to
look hard at the reads for the rest of unlink.
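For anyone who wants a quick reproducer, the pattern described in (a) — create a small file in a temporary directory, hard link it into the destination, unlink the original — boils down to something like this (directory names invented for illustration, not Cyrus's real layout):

```shell
# Sketch of the create / hard-link / unlink pattern from (a).
spool=$(mktemp -d)
mkdir -p "$spool/tmp" "$spool/user"

for i in $(seq 1 100); do
    f="$spool/tmp/msg.$i"
    echo "message $i" > "$f"         # small temporary file
    ln "$f" "$spool/user/msg.$i"     # hard link into the destination dir
    rm "$f"                          # unlink the original
done

ls "$spool/user" | wc -l             # all 100 files remain via the links
```

Scaled up to millions of files on a large filesystem, the `rm` step is where the slowdown shows up.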

-chris
