On Tue, Oct 29, 2013 at 8:51 AM, Anand Avati wrote:
> Looks like what is happening is that qemu performs ioctls() on the backend
> to query logical_block_size (for direct IO alignment). That works on XFS,
> but fails on FUSE (hence qemu ends up performing IO with default 512
> alignment rather than 4k).
>
> Jacob, are you using xfs on top of HDD or are you using some kind of RAID?
>
> We have disks with 4K sectors and we are using those in RAID-6 setup with
> LSI Megaraid controller. We haven't run into these issues and I wasn't able
> to reproduce it. I did only very quick tests though, so it may be [...]
On 6.11.2013 14:33, Jacob Yundt wrote: [...]
My below mail didn't make it to the list, hence resending...
On Tue, Nov 5, 2013 at 8:04 PM, Bharata B Rao wrote:
> On Wed, Oct 30, 2013 at 11:26:48PM +0530, Bharata B Rao wrote:
> > On Tue, Oct 29, 2013 at 1:21 PM, Anand Avati wrote:
> > > Looks like what is happening is that qemu performs ioctls() on the
> > > backend to query logical_block_size (for direct IO alignment). [...]
>
> Jacob - In the first mail you sent on this subject, you mention that you
> don't see any issues when gluster volume is backed by ext4. Does this still
> hold true?
>
Correct, everything works "as expected" when using gluster bricks
backed by ext4 filesystems (on top of LVM).
Let me know if you [...]
Looks like what is happening is that qemu performs ioctls() on the backend
to query logical_block_size (for direct IO alignment). That works on XFS,
but fails on FUSE (hence qemu ends up performing IO with default 512
alignment rather than 4k).
Looks like this might be something we can enhance gluster [...]
What happens when you try to use KVM on an image directly on XFS, without
involving gluster?
Avati
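
For context, the block sizes qemu is trying to discover can be checked from
the shell on the brick server. A rough sketch (the device name is an example,
not taken from this thread):

# Logical and physical sector size of the disk behind the brick
blockdev --getss /dev/sdb     # logical sector size (e.g. 512 or 4096)
blockdev --getpbsz /dev/sdb   # physical sector size
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdb

These are block-device ioctls; run against a regular file on a FUSE mount
there is nothing equivalent to answer with, which matches the description
above of qemu falling back to 512-byte alignment.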
I think I finally made some progress on this bug!
I noticed that all disks in my gluster server(s) have 4K sectors.
Using an older disk with 512 byte sectors, I did _not_ get any errors
on my gluster client / KVM server. I switched back to using my newer
4K drives and manually set the XFS sector size [...]
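
For reference, the sector size XFS was created with can be checked on an
existing brick, and forced at mkfs time. A sketch with made-up brick/device
paths (mkfs destroys data, so only on a scratch device):

# Show the sector size of the brick filesystem
xfs_info /bricks/xfs1 | grep sectsz

# Recreate the filesystem with an explicit 4K sector size
mkfs.xfs -f -s size=4096 /dev/sdb1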
> No I've never used raw, I've used lvm (local block device) and qcow2.
> I think you should use the libvirt tools to run VM's and not directly
> use qemu-kvm.
Sorry, I should have provided some more clarification: that was the
error I got from virt-manager. I'm doing all of my work from
virt-manager.
No I've never used raw, I've used lvm (local block device) and qcow2.
I think you should use the libvirt tools to run VM's and not directly
use qemu-kvm.
Are you creating the qcow2 file with qemu-img first? example:
qemu-img create -f qcow2 /var/lib/libvirt/images/xfs/kvm2.img 200G
[root@ ~]# vir
> That error is at pread()
>
>
> Anand Avati wrote:
>>
>> It would be good if you can make sure a dd write with oflag=direct works
>> on the mount point (without involving KVM/qemu).
>>
>> Avati
>>
Avati-
The DD command with oflag=direct worked fine. Specifically, I used:
dd if=/dev/zero of=./t
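
The exact command is cut off above; purely as an illustration of the kind of
direct-I/O test being discussed (mount point, file name and sizes are made up,
not Jacob's actual invocation):

# 512-byte-aligned direct I/O against the fuse mount; on a brick sitting on
# 4K-sector disks this is the case that can come back as "Invalid argument"
dd if=/dev/zero of=/mnt/gluster/ddtest bs=512 count=8 oflag=direct

# 4K-aligned direct I/O for comparison
dd if=/dev/zero of=/mnt/gluster/ddtest bs=4096 count=1 oflag=direct

# Read path, which is where the brick logs the posix_readv errors
dd if=/mnt/gluster/ddtest of=/dev/null bs=512 count=8 iflag=direct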
> Try setting "performance.read-ahead off"
No dice, I still get similar I/O errors on the client with
"performance.read-ahead off". Snippet from the glusterfsd brick log:
[2013-07-16 11:15:03.610652] E [posix.c:1940:posix_readv] 0-xfs-posix:
read failed on fd=0x22a72fc: Invalid argument
[...]
That error is at pread()

Anand Avati wrote:
> It would be good if you can make sure a dd write with oflag=direct
> works on the mount point (without involving KVM/qemu).
>
> Avati
It would be good if you can make sure a dd write with oflag=direct works on
the mount point (without involving KVM/qemu).
Avati
On Mon, Jul 15, 2013 at 7:37 PM, Jacob Yundt wrote:
> Unfortunately I'm hitting the same problem with 3.4.0 GA. In case it
> helps, I increased both the client and server brick logs to TRACE. [...]
I'm using gluster 3.3.0 and 3.3.1 with xfs bricks and kvm based VM's
using qcow2 files on gluster volume fuse mounts. CentOS6.2 through 6.4
w/CloudStack 3.0.2 - 4.1.0.
I've not had any problems. Here is 1 host in a small 3 host cluster
(using the cloudstack terminology). About 30 VM's are running [...]
Try setting "performance.read-ahead off"
On 07/15/2013 07:37 PM, Jacob Yundt wrote:
Unfortunately I'm hitting the same problem with 3.4.0 GA. In case it
helps, I increased both the client and server brick logs to TRACE.
I've updated the BZ[1] and attached both logs + an strace.
Anyone else using XFS backed bricks for hosting KVM images? If so,
what xfs mkfs/mount options are you using?
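
For anyone wanting to capture a similar strace on the brick side, a rough
sketch (the brick PID placeholder and output path are illustrative; syscall
names can differ slightly by platform):

# <brick-pid> is the glusterfsd process for the brick (see `gluster volume status`)
strace -f -tt -e trace=open,openat,pread64,preadv -p <brick-pid> -o /tmp/brick.strace

# The failing pread shows the offset/length that was actually requested
grep EINVAL /tmp/brick.strace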
I was curious about that myself. However, the version (3.4.0beta4)
seems to be correctly identified at the top of the glusterfsd log:
[2013-07-04 19:07:55.146042] I [glusterfsd.c:1878:main]
0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version
3.4.0beta4 (/usr/sbin/glusterfsd -s 1j
On Jul 6, 2013 1:46 PM, "Jacob Yundt" wrote:
>
> I'm still getting the same errors with 3.4.0beta4. I increased the
> brick log level to TRACE and repeated my same tests. I keep getting
> these errors:
>
> 0-rpc-service: submitted reply for rpc-message (XID: 0x411x, Program:
> GlusterFS 3.3, Pr
I'm still getting the same errors with 3.4.0beta4. I increased the
brick log level to TRACE and repeated my same tests. I keep getting
these errors:
[2013-07-04 19:10:13.905877] E [posix.c:1940:posix_readv] 0-xfs-posix:
read failed on fd=0x1e2e44c: Invalid argument
[2013-07-04 19:10:13.905970] I [...]
Still having this problem on 3.4.0beta1. I updated the RH BZ with
more information (logs, straces, etc).
-Jacob
As an FYI, I opened a RH BZ for this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=958781
-Jacob
On 04/29/2013 09:55 PM, Jacob Yundt wrote:
Can you provide details of your volume configuration? If you have posix-aio
enabled on the volume, it might be worth trying after disabling that option.
No, I do not have posix-aio (storage.linux-aio ?) enabled.
Yes, I was referring to storage.linux-aio.
> Can you provide details of your volume configuration? If you have posix-aio
> enabled on the volume, it might be worth trying after disabling that option.
No, I do not have posix-aio (storage.linux-aio ?) enabled. Below are
the settings for my xfs backed gluster volume:
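The settings table is cut off in the archive. For completeness, this is
roughly how the options discussed in this thread can be inspected and toggled;
the volume name "xfs" is inferred from the log prefix (0-xfs-posix) and may
differ:

gluster volume info xfs

gluster volume set xfs performance.read-ahead off
gluster volume set xfs storage.linux-aio off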
On 04/29/2013 08:40 PM, Jacob Yundt wrote:
Using 3.4.0alpha3 I get a mix of the following errors:
[2013-04-29 12:17:06.345816] W
[client-rpc-fops.c:2697:client3_3_readv_cbk] 0-xfs-client-0: remote
operation failed: Invalid argument
[2013-04-29 12:17:06.345858] W [fuse-bridge.c:2049:fuse_readv_cbk]
0-glusterfs-fuse: 17200: READ [...]
Does anyone have experience using gluster storage pools (backed by xfs
filesystems) with KVM? I'm able to add a VirtIO disk without issue,
however, when I try to create (or mount) an ext4 filesystem from the
KVM guest, I get errors. If I use a gluster volume that is _not_
backed by xfs (e.g. ext4), everything works as expected.
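
For anyone trying to reproduce this, a rough outline of the sequence described
above (device names are examples; the brick log path is the usual default and
may differ):

# Inside the KVM guest: format the VirtIO disk that is backed by the qcow2
# image on the gluster/XFS mount
mkfs.ext4 /dev/vdb
dmesg | tail                     # guest-side I/O errors show up here

# On the gluster brick server: the matching errors in the brick log
grep posix_readv /var/log/glusterfs/bricks/*.log | tail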