On 2014-09-02 14:31, G. Richard Bellamy wrote:
> I thought I'd follow-up and give everyone an update, in case anyone
> had further interest.
> 
> I've rebuilt the RAID10 volume in question with a Samsung 840 Pro as
> the bcache cache device.
> 
> It's 5x 600GB 15k RPM SAS drives in RAID10, with the 512GB SSD as the
> bcache cache.
> 
> 2014-09-02 11:23:16
> root@eanna i /var/lib/libvirt/images # lsblk
> NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda         8:0    0 558.9G  0 disk
> └─bcache3 254:3    0 558.9G  0 disk /var/lib/btrfs/data
> sdb         8:16   0 558.9G  0 disk
> └─bcache2 254:2    0 558.9G  0 disk
> sdc         8:32   0 558.9G  0 disk
> └─bcache1 254:1    0 558.9G  0 disk
> sdd         8:48   0 558.9G  0 disk
> └─bcache0 254:0    0 558.9G  0 disk
> sde         8:64   0 558.9G  0 disk
> └─bcache4 254:4    0 558.9G  0 disk
> sdf         8:80   0   1.8T  0 disk
> └─sdf1      8:81   0   1.8T  0 part
> sdg         8:96   0   477G  0 disk /var/lib/btrfs/system
> sdh         8:112  0   477G  0 disk
> sdi         8:128  0   477G  0 disk
> ├─bcache0 254:0    0 558.9G  0 disk
> ├─bcache1 254:1    0 558.9G  0 disk
> ├─bcache2 254:2    0 558.9G  0 disk
> ├─bcache3 254:3    0 558.9G  0 disk /var/lib/btrfs/data
> └─bcache4 254:4    0 558.9G  0 disk
> sr0        11:0    1  1024M  0 rom
> 
> I further split the system and data drives of the Win7 guest VM. It's
> very interesting to see how much fragmentation I'm getting, even with
> the help of the ordered writes offered by bcache - in other words,
> while bcache seems to be giving the guest stability and better
> behavior, the underlying filesystem is still seeing a level of
> fragmentation that has me scratching my head.
> 
> That being said, I don't know what normal fragmentation for a Win7
> guest's system drive would look like, so it could be that I'm just
> operating in my zone of ignorance again.
> 
> 2014-09-01 14:41:19
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 7 extents found
> atlas-system.qcow2: 154 extents found
> 2014-09-01 18:12:27
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 564 extents found
> atlas-system.qcow2: 28171 extents found
> 2014-09-02 08:22:00
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 564 extents found
> atlas-system.qcow2: 35281 extents found
> 2014-09-02 08:44:43
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 564 extents found
> atlas-system.qcow2: 37203 extents found
> 2014-09-02 10:14:32
> root@eanna i /var/lib/libvirt/images # filefrag atlas-*
> atlas-data.qcow2: 564 extents found
> atlas-system.qcow2: 40903 extents found
> 
This may sound odd, but are you exposing the disk to the Win7 guest as a
non-rotational device? Win7 and later tend to have different write
behavior when they think they are on an SSD (or something else where
seek latency is effectively zero).  Most VMMs (at least, most that I've
seen) will use fallocate to punch holes for ranges that get TRIM'ed in
the guest, so if Windows is sending TRIM commands, that may also be part
of the issue.  Also, you might try reducing the amount of logging in the
guest.
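
For reference, the relevant knobs live in the libvirt domain XML. This
is just a sketch, assuming a qcow2-backed SCSI disk and a libvirt/QEMU
stack new enough to understand discard= and rotation_rate= (check your
versions; rotation_rate in particular is a newer attribute):

  <disk type='file' device='disk'>
    <!-- discard='unmap' forwards guest TRIM, which lets the VMM punch
         holes in the qcow2 file; removing it is one way to rule
         hole-punching out as a source of fragmentation -->
    <driver name='qemu' type='qcow2' discard='unmap'/>
    <source file='/var/lib/libvirt/images/atlas-system.qcow2'/>
    <!-- rotation_rate='1' advertises the disk to the guest as an SSD;
         leave it out (or set a realistic RPM) if you want Win7 to treat
         it as a rotational disk -->
    <target dev='sda' bus='scsi' rotation_rate='1'/>
  </disk>

You can also watch from the host whether holes are actually being
punched by comparing allocated size against apparent size over time,
e.g. "du -h atlas-system.qcow2" vs.
"du -h --apparent-size atlas-system.qcow2", alongside the filefrag runs
you're already doing.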
