On 06/17/2014 08:46 PM, Filipe Brandenburger wrote:
> On Mon, Jun 16, 2014 at 6:13 PM, cwillu <cwi...@cwillu.com> wrote:
>> For the case of sequential writes (via write or mmap), padding writes
>> to page boundaries would help, if the wasted space isn't an issue.
>> Another approach, again assuming all other writes are appends, would
>> be to periodically (but frequently enough that the pages are still in
>> cache) read a chunk of the file and write it back in-place, with or
>> without an fsync. On the other hand, if you can afford to lose some
>> logs on a crash, not fsyncing/msyncing after each write will also
>> eliminate the fragmentation.
> 
> I was wondering if something could be done in btrfs to improve
> performance under this workload... Something like a "defrag on demand"
> for a case where mostly appends are happening.

Instead of inventing a strategy smarter than the (already smart) filesystem, 
wouldn't it be simpler to do an explicit defrag?

In any case such a "smart strategy" would be filesystem-specific, so an explicit 
defrag would be both simpler and less error-prone.

I tried this strategy with systemd-journald and got good results (issuing a 
BTRFS_IOC_DEFRAG ioctl when the journal file is opened).
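
For reference, a minimal sketch of what the call looks like (this is an 
illustration, not the actual systemd-journald patch; file name and error 
handling are mine):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <journal-file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* With a NULL argument, BTRFS_IOC_DEFRAG defragments the file
	 * the descriptor refers to. On non-btrfs filesystems the ioctl
	 * fails with ENOTTY, which a caller can safely ignore. */
	if (ioctl(fd, BTRFS_IOC_DEFRAG, NULL) < 0)
		perror("ioctl(BTRFS_IOC_DEFRAG)");

	return 0;
}

Since the journal is append-mostly, doing this once at open time is cheap and 
coalesces the fragments accumulated during the previous rotation.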

BR
G.Baroncelli

-- 
gpg @keyserver.linux.it: Goffredo Baroncelli (kreijackATinwind.it)
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5