On Sat, Mar 23, 2013 at 06:11:26PM -0700, Roger Binns wrote:
> On 23/03/13 15:40, Eric Sandeen wrote:
> > I imagine it depends on the details of the workload & storage as well.
> 
> If the people who write btrfs can't come up with some measure of
> appropriateness, then how can administrators, who have even less
> information? :-)
> 
> I suspect file size has nothing to do with it, and it is entirely about
> the volume of random writes.  (But as a correlation, smaller files are
> unlikely to get many random writes because they contain less useful
> information than larger files.)

I'd say it's not the volume but the write pattern. A single 4k write may
read and rewrite the surrounding 64k when autodefrag is on. So, several
rewrites close to each other may actually benefit from autodefrag, while
the same number of completely random 4k rewrites may end up rewriting
16x more data.

(http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg18926.html)
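
To make the arithmetic concrete, here is a rough back-of-the-envelope
sketch in Python (mine, not from the thread). It assumes autodefrag
simply rewrites a fixed 64 KiB window around each dirtied 4 KiB block,
which is a deliberate simplification of the real extent handling:

    # Simplifying assumption: each dirtied 4 KiB block drags in a
    # rewrite of its surrounding 64 KiB window; overlapping windows
    # are merged into one.
    DEFRAG_WINDOW = 64 * 1024
    BLOCK = 4 * 1024

    def rewritten_bytes(write_offsets):
        """Bytes rewritten for a given set of 4k write offsets."""
        windows = {off // DEFRAG_WINDOW for off in write_offsets}
        return len(windows) * DEFRAG_WINDOW

    # 16 rewrites packed into one 64k region: one window rewritten.
    clustered = [i * BLOCK for i in range(16)]
    # 16 rewrites scattered across the file: 16 separate windows.
    scattered = [i * DEFRAG_WINDOW for i in range(16)]

    print(rewritten_bytes(clustered))   # 65536   (64 KiB)
    print(rewritten_bytes(scattered))   # 1048576 (1 MiB)

Under that assumption, sixteen clustered rewrites cost a single 64 KiB
rewrite, while sixteen scattered ones cost 1 MiB, i.e. the 16x factor
mentioned above.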

david