Chris Murphy posted on Tue, 25 Feb 2014 11:33:34 -0700 as excerpted:

> I've had a qcow2 image with more than 30,000 extents and didn't notice a
> performance drop. So I don't know that number of extents is the problem.
> Maybe it's how they're arranged on disk and what's causing the problem
> is excessive seeking on HDD? Or does this problem sometimes also still
> happen on SSD?

SSDs are interesting beasts.

1) Seek-times aren't an issue, so that disappears, BUT...

2) SSDs still have IOPS ratings/limits.  Typically these are in the 
(high) tens of thousands per second range, so a single file at 30K 
extents isn't likely to be terribly significant.  However, 300K extents 
could be, and so could several VMs at 30K extents each, and/or that load 
mixed in with other traffic that's itself likely fragmented, simply 
because of the number of VM-image fragments scattered through the 
filesystem.  (I put some rough numbers on this after point 3, below.)

3) There's also the read vs. write vs. erase-block size thing to think 
about, and how that can hurt writes even when reads are unaffected.  As 
long as there's sufficient overprovisioning it shouldn't be a real 
problem, but fill the SSD more than about 2/3 full (as a significant 
number of non-professionally managed systems likely will) and all those 
extra fragments being written are going to trigger erase-block garbage-
collection cycles more frequently, and that could massively increase the 
latency jitter on a reasonably frequent basis.
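
To put very rough numbers on #2 and #3 -- and to be clear, every figure 
here is an assumption picked for illustration, not a measurement from 
any particular device -- a quick Python sketch:

# Back-of-the-envelope only; every number below is an assumption,
# plug in your own.
ssd_iops_budget   = 40_000   # assumed random-I/O ops/sec for the SSD
vms               = 5        # assumed number of VM images in active use
extents_per_image = 30_000   # the figure Chris quoted for one qcow2

# Worst case, every extent costs at least one request when the image
# gets read back in (VM boot, backup pass, etc.):
requests = vms * extents_per_image
seconds_busy = requests / ssd_iops_budget
print(f"{requests} requests, ~{seconds_busy:.1f}s of the full IOPS budget")

# Point 3, cruder still: the fuller the drive, the more valid data the
# firmware has to copy out of a victim erase block before erasing it.
fill_fraction    = 0.70      # drive roughly 2/3+ full
avg_fragment_kib = 64        # assumed average fragment size written
copied_kib = avg_fragment_kib * fill_fraction / (1 - fill_fraction)
print(f"~{copied_kib:.0f} KiB of GC copying per {avg_fragment_kib} KiB written")

With those (made-up) numbers that's 150K requests just to touch every 
extent once, and roughly 2.3 KiB of background copying for every KiB 
actually written once the spare area runs thin -- which is exactly where 
the latency jitter comes from.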

#2 and #3 are a big part of the reason I still enable the autodefrag 
mount option here, even tho I'm on SSD and my use-case doesn't have a 
lot of huge internal-write files to worry about.  Plus I'm only about 50 
percent partitioned on the SSDs, so they have LOTS of room to do their 
wear-leveling, etc. =:^)
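
And if you want to see where your own images actually sit relative to 
that 30K figure before deciding whether autodefrag is worth it, filefrag 
(from e2fsprogs) reports the extent count; here's a minimal Python 
wrapper around it.  It assumes filefrag is in the PATH, and the paths 
you feed it are whatever images you care about:

#!/usr/bin/env python3
# Minimal sketch: report extent counts via filefrag (e2fsprogs).
import re
import subprocess
import sys

def extent_count(path):
    # "filefrag FILE" prints e.g. "FILE: 31250 extents found"
    out = subprocess.run(["filefrag", path], capture_output=True,
                         text=True, check=True).stdout
    match = re.search(r"(\d+) extents? found", out)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    for image in sys.argv[1:]:
        print(f"{image}: {extent_count(image)} extents")

(filefrag's idea of an extent doesn't always line up exactly with what 
btrfs considers one, compression in particular inflates the count, but 
it's plenty good enough for a ballpark.)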

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
