I created two pools, one XFS and one Btrfs, with default formatting and mount options. I then created a qcow2 file on each using virt-manager, also with default options, and default caching (whatever that is; I think it's writethrough, but don't hold me to it).
I then installed Windows 7 (not
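For reference, creating a qcow2 image with all-default options outside of virt-manager would look roughly like this (the path and size here are placeholders, not values from the thread):

```shell
# Roughly what virt-manager does under the hood when creating a qcow2
# image with default options (path and size are assumed for illustration):
qemu-img create -f qcow2 /var/lib/libvirt/images/win7.qcow2 40G

# Inspect the result; with defaults the image is sparse and uses
# a 64 KiB cluster size:
qemu-img info /var/lib/libvirt/images/win7.qcow2
```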
On Sep 3, 2014, at 12:01 AM, Chris Murphy li...@colorremedies.com wrote:
It is interesting that for me the number of extents before and after bcache is essentially the same.
The lesson for me here is that the fragmentation of a btrfs nodatacow file is not mitigated by bcache. There seems to be nothing I can do to prevent that fragmentation, and may in fact be
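A before/after comparison like the one described can be made with filefrag, which the thread already uses; the image path is the one from later in the thread:

```shell
# Count extents before a change (e.g. before attaching bcache):
filefrag /var/lib/libvirt/images/atlas.qcow2

# For a bare number suitable for scripted comparison, the extent
# count is the second field of filefrag's summary line:
filefrag /var/lib/libvirt/images/atlas.qcow2 | awk '{print $2}'
```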
Hi Richard,
I thought I'd follow up and give everyone an update, in case anyone had further interest.
I've rebuilt the RAID10 volume in question with a Samsung 840 Pro as the bcache front device.
It's 5x600GB SAS 15k RPM drives in RAID10, with the 512GB SSD as bcache.
2014-09-02 11:23:16
root@eanna i
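A rebuild like the one described would be set up with bcache-tools along these lines (device names are placeholders, not taken from the thread; these commands are destructive and are only a sketch):

```shell
# Register the backing device (the RAID10 array) and the cache
# device (the SSD). Device names here are assumptions:
make-bcache -B /dev/md0
make-bcache -C /dev/sda

# Find the cache set UUID and attach it to the new bcache device:
bcache-super-show /dev/sda | grep cset.uuid
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
```

After attaching, the filesystem is created on /dev/bcache0 rather than on the raw array.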
On Sep 2, 2014, at 12:31 PM, G. Richard Bellamy rbell...@pteradigm.com wrote:
On 2014-09-02 14:31, G. Richard Bellamy wrote:
Thanks @chris @austin. You both bring up interesting questions and points.
@chris: atlas-data.qcow2 isn't running any software or logging at this time; I isolated my D:\ drive onto that file via Clonezilla and virt-resize.
Microsoft DiskPart version 6.1.7601
Copyright (C) 1999-2008 Microsoft
On Wed, Aug 13, 2014 at 9:23 PM, Chris Murphy li...@colorremedies.com wrote:
lsattr /var/lib/libvirt/images/atlas.qcow2
Is the xattr actually in place on that file?
2014-08-14 07:07:36
$ filefrag /var/lib/libvirt/images/atlas.qcow2
/var/lib/libvirt/images/atlas.qcow2: 46378 extents found
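Chris's question about the attribute can be answered directly with lsattr; on btrfs, the No_COW flag shows up as a 'C' in the attribute column:

```shell
# Check whether the No_COW flag is actually set on the image file
# (path from the thread). A 'C' in the output means No_COW is set:
lsattr /var/lib/libvirt/images/atlas.qcow2

# Caveat: setting +C on an existing, non-empty file does not reliably
# take effect on btrfs; the flag must be set while the file is empty
# (or inherited from a +C directory at creation time):
chattr +C /var/lib/libvirt/images/atlas.qcow2
```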
On 2014-08-14 10:30, G. Richard Bellamy wrote:
On Thu, Aug 14, 2014 at 8:05 AM, Austin S Hemmelgarn ahferro...@gmail.com wrote:
The fact that it is Windows using NTFS is probably part of the problem. Here are some things you can do to decrease its background disk utilization (these also improve performance on real hardware):
1. Disable
On Aug 14, 2014, at 8:30 AM, G. Richard Bellamy rbell...@pteradigm.com wrote:
This is a p2v target, if that matters. Workload has been minimal since
virtualizing because I have yet to get usable performance with this
configuration. The filesystem in the guest is Win7 NTFS. I have seen
On Thu, Aug 14, 2014 at 11:40 AM, Chris Murphy li...@colorremedies.com wrote:
and there may be a fit for bcache here because you actually would get these
random writes committed to stable media much faster in that case, and a lot
of work has been done to make this more reliable than battery
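The behavior Chris describes, random writes being committed to stable media quickly, corresponds to bcache's writeback cache mode, which is selectable at runtime through sysfs (the bcache0 device name is an assumption):

```shell
# Switch the cache mode to writeback so small random writes are
# acknowledged once they hit the SSD (device name is a placeholder):
echo writeback > /sys/block/bcache0/bcache/cache_mode

# Verify; the active mode is shown in square brackets, e.g.
# "writethrough [writeback] writearound none":
cat /sys/block/bcache0/bcache/cache_mode
</imports>
```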
On Aug 14, 2014, at 5:16 PM, G. Richard Bellamy rbell...@pteradigm.com wrote:
On Mon, Aug 11, 2014 at 11:36 AM, G. Richard Bellamy rbell...@pteradigm.com wrote:
That being said, how would I determine what the root issue is? Specifically, the qcow2 file in question seems to have increasing fragmentation, even with the No_COW attr.
[1]
$ mkfs.btrfs -m raid10 -d raid10
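The recipe in [1] is cut off in the archive; a complete invocation of that shape would look like the following (the member device names are assumptions, chosen to match the five-drive array described elsewhere in the thread):

```shell
# Hypothetical complete form of the truncated recipe: raid10 for both
# metadata (-m) and data (-d). Btrfs raid10 requires at least four
# devices; five (as in this thread's array) is also valid:
mkfs.btrfs -m raid10 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

# After mounting, the chosen profiles can be confirmed with:
btrfs filesystem df /mnt
```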
On Aug 13, 2014, at 9:57 PM, G. Richard Bellamy rbell...@pteradigm.com wrote:
I've been playing with btrfs as a backing store for my KVM images. I've used 'chattr +C' on the directory where those images are stored. You can see my recipe below [1]. I've read the gotchas found here [2].
I'm having continuing performance issues inside the guest VM that is created inside the
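The 'chattr +C' approach described above works by inheritance: files created in a +C directory get the flag at creation time, but files that already existed (or were copied in while non-empty) do not retroactively become No_COW. A minimal check:

```shell
# Set No_COW on the images directory so newly created files inherit it:
chattr +C /var/lib/libvirt/images

# A file created in the directory afterwards carries the 'C' flag:
touch /var/lib/libvirt/images/new.img
lsattr /var/lib/libvirt/images/new.img
```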
On Mon, 11 Aug 2014 11:36:46 -0700, G. Richard Bellamy rbell...@pteradigm.com wrote:
On Mon, Aug 11, 2014 at 12:14 PM, Roman Mamedov r...@romanrm.net wrote:
First of all, why do you require a COW filesystem in the first place... if all you do is just use it in NoCOW mode?
Second, why qcow2? It can also have internal fragmentation, which is unlikely to do anything good for performance.
On Aug 11, 2014, at 1:14 PM, Roman Mamedov r...@romanrm.net wrote:
Second, why qcow2? It can also have internal fragmentation, which is unlikely to do anything good for performance.
It really depends on what version of libvirt and qemu-img you've got. I did some testing during Fedora
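The internal fragmentation Roman mentions can be reduced at image-creation time with qcow2's preallocation modes, available in reasonably recent qemu-img versions (the file names and size below are placeholders):

```shell
# 'metadata' preallocates only the qcow2 metadata, keeping the image
# sparse while avoiding metadata-allocation churn during writes:
qemu-img create -f qcow2 -o preallocation=metadata disk.qcow2 40G

# 'falloc' (and 'full') also preallocate the data area, trading disk
# space for a more contiguous on-disk layout:
qemu-img create -f qcow2 -o preallocation=falloc disk2.qcow2 40G
```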