Excerpts from Bron Gondwana's message of 2010-11-16 23:11:48 -0500:
> On Tue, Nov 16, 2010 at 08:38:13AM -0500, Chris Mason wrote:
> > Excerpts from Bron Gondwana's message of 2010-11-16 07:54:45 -0500:
> > > Just posting this again more neatly formatted and just the 'meat':
> > >
> > > a) program creates piles of small temporary files, hard links them
> > >    out to different directories, unlinks the originals.
> > >
> > > b) filesystem size: ~ 300Gb (backed by hardware RAID5)
> > >
> > > c) as the filesystem grows (currently about 30% full) the unlink
> > >    performance becomes horrible.  Watching iostat, there's a lot
> > >    of reading going on as well.
> > >
> > > Is this expected?  Is there anything we can do about it?
> > > (short of rewriting Cyrus replication)
> >
> > Hi,
> >
> > It sounds like the unlink speed is limited by the reading, and the
> > reads are coming from one of two places.  We're either reading to
> > cache cold block groups or we're reading to find the directory
> > entries.
>
> All the unlinks for a single process will be happening in the same
> directory (though the hard linked copies will be all over)
>
> > Could you sysrq-w while the performance is bad?  That would narrow
> > it down.
>
> Here's one:
>
> http://pastebin.com/Tg7agv42
Ok, we're mixing unlinks and fsyncs.  Is it fsyncing directories too?

-chris
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html