Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-03 Thread David Gossage
On Wed, Aug 3, 2016 at 7:57 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> On 3/08/2016 10:45 PM, Lindsay Mathieson wrote:
>
> On 3/08/2016 2:26 PM, Krutika Dhananjay wrote:
>
> Once I deleted old content from test volume it mounted to oVirt via
> storage add when previously it would error out.  I am now creating a test
> VM with default disk caching settings (pretty sure oVirt is defaulting to
> none rather than writeback/through).  So far all shards are being created
> properly.
>
>
> I can confirm that it works with ProxMox VM's in direct (no cache mode) as
> well.
>
>
> Also Gluster 3.8.1 is good to
>

Ugh, almost done updating to 3.7.14 and already feeling the urge to start
testing and updating to the 3.8 branch.

>
> --
> Lindsay Mathieson
>
>

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-03 Thread Lindsay Mathieson

On 3/08/2016 10:45 PM, Lindsay Mathieson wrote:

On 3/08/2016 2:26 PM, Krutika Dhananjay wrote:
Once I deleted old content from test volume it mounted to oVirt via 
storage add when previously it would error out.  I am now creating a 
test VM with default disk caching settings (pretty sure oVirt is 
defaulting to none rather than writeback/through).  So far all shards 
are being created properly.


I can confirm that it works with ProxMox VM's in direct (no cache 
mode) as well. 


Also Gluster 3.8.1 is good to

--
Lindsay Mathieson


Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-03 Thread Lindsay Mathieson

On 3/08/2016 2:26 PM, Krutika Dhananjay wrote:
Once I deleted old content from test volume it mounted to oVirt via 
storage add when previously it would error out.  I am now creating a 
test VM with default disk caching settings (pretty sure oVirt is 
defaulting to none rather than writeback/through).  So far all shards 
are being created properly.


I can confirm that it works with ProxMox VM's in direct (no cache mode) 
as well.




Load is sky rocketing but I have all 3 gluster bricks running off 1 
hard drive on test box so I would expect horrible io/load issues with 
that.



Ha! Same config for my test Host :)


--
Lindsay Mathieson



Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-02 Thread Krutika Dhananjay
Glad the fixes worked for you. Thanks for that update!

-Krutika

On Tue, Aug 2, 2016 at 7:31 PM, David Gossage 
wrote:

> So far both dd commands that failed previously worked fine on 3.7.14
>
> Once I deleted old content from test volume it mounted to oVirt via
> storage add when previously it would error out.  I am now creating a test
> VM with default disk caching settings (pretty sure oVirt is defaulting to
> none rather than writeback/through).  So far all shards are being created
> properly.
>
> Load is sky rocketing but I have all 3 gluster bricks running off 1 hard
> drive on test box so I would expect horrible io/load issues with that.
>
> Very promising so far.  Thank you developers for your help in working
> through this.
>
> Once I have the VM installed and running will test for a few days and make
> sure it doesn't have any freeze or locking issues then will roll this out
> to working cluster.
>
>
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
> On Wed, Jul 27, 2016 at 8:37 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay 
>> wrote:
>>
>>> Yes please, could you file a bug against glusterfs for this issue?
>>>
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1360785
>>
>>
>>>
>>>
>>> -Krutika
>>>
>>> On Wed, Jul 27, 2016 at 1:39 AM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 Has a bug report been filed for this issue or should l I create one
 with the logs and results provided so far?

 *David Gossage*
 *Carousel Checks Inc. | System Administrator*
 *Office* 708.613.2284

 On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <
 dgoss...@carouselchecks.com> wrote:

>
>
>
> On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur 
> wrote:
>
>> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen <
>> samp...@neutraali.net> wrote:
>> > Here is a quick way how to test this:
>> > GlusterFS 3.7.13 volume with default settings with brick on ZFS
>> dataset. gluster-test1 is server and gluster-test2 is client mounting 
>> with
>> FUSE.
>> >
>> > Writing file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > dd: failed to open ‘file’: Invalid argument
>> >
>> > Enable network.remote-dio on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> network.remote-dio enable
>> > volume set: success
>> >
>> > Writing small file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > 1+0 records in
>> > 1+0 records out
>> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
>> >
>> > Writing bigger file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3
>> oflag=direct count=100 bs=1M
>> > 100+0 records in
>> > 100+0 records out
>> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
>> >
>> > Enable Sharding on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> features.shard enable
>> > volume set: success
>> >
>> > Writing small file  with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3
>> oflag=direct count=1 bs=1M
>> > 1+0 records in
>> > 1+0 records out
>> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
>> >
>> > Writing bigger file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3
>> oflag=direct count=100 bs=1M
>> > dd: error writing ‘file3’: Operation not permitted
>> > dd: closing output file ‘file3’: Operation not permitted
>> >
>>
>>
>> Thank you for these tests! would it be possible to share the brick and
>> client logs?
>>
>
> Not sure if his tests are same as my setup but here is what I end up
> with
>
> Volume Name: glustershard
> Type: Replicate
> Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.71.10:/gluster1/shard1/1
> Brick2: 192.168.71.11:/gluster1/shard2/1
> Brick3: 192.168.71.12:/gluster1/shard3/1
> Options Reconfigured:
> features.shard-block-size: 64MB
> features.shard: on
> server.allow-insecure: on
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.quick-read: off
> cluster.self-heal-window-size: 1024
> 

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-08-02 Thread David Gossage
So far both dd commands that failed previously worked fine on 3.7.14

Once I deleted old content from the test volume, it mounted to oVirt via
storage add, when previously it would error out.  I am now creating a test VM
with default disk caching settings (pretty sure oVirt is defaulting to none
rather than writeback/through).  So far all shards are being created
properly.

Load is skyrocketing, but I have all 3 gluster bricks running off 1 hard
drive on the test box, so I would expect horrible io/load issues with that.

Very promising so far.  Thank you developers for your help in working
through this.

Once I have the VM installed and running, I will test for a few days and
make sure it doesn't have any freeze or locking issues, then roll this out
to the working cluster.



*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284

On Wed, Jul 27, 2016 at 8:37 AM, David Gossage 
wrote:

> On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay 
> wrote:
>
>> Yes please, could you file a bug against glusterfs for this issue?
>>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1360785
>
>
>>
>>
>> -Krutika
>>
>> On Wed, Jul 27, 2016 at 1:39 AM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> Has a bug report been filed for this issue or should l I create one with
>>> the logs and results provided so far?
>>>
>>> *David Gossage*
>>> *Carousel Checks Inc. | System Administrator*
>>> *Office* 708.613.2284
>>>
>>> On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>



 On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur 
 wrote:

> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen <
> samp...@neutraali.net> wrote:
> > Here is a quick way how to test this:
> > GlusterFS 3.7.13 volume with default settings with brick on ZFS
> dataset. gluster-test1 is server and gluster-test2 is client mounting with
> FUSE.
> >
> > Writing file with oflag=direct is not ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
> count=1 bs=1024000
> > dd: failed to open ‘file’: Invalid argument
> >
> > Enable network.remote-dio on Gluster Volume:
> > [root@gluster-test1 gluster]# gluster volume set gluster
> network.remote-dio enable
> > volume set: success
> >
> > Writing small file with oflag=direct is ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
> count=1 bs=1024000
> > 1+0 records in
> > 1+0 records out
> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
> >
> > Writing bigger file with oflag=direct is ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
> count=100 bs=1M
> > 100+0 records in
> > 100+0 records out
> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
> >
> > Enable Sharding on Gluster Volume:
> > [root@gluster-test1 gluster]# gluster volume set gluster
> features.shard enable
> > volume set: success
> >
> > Writing small file  with oflag=direct is ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
> count=1 bs=1M
> > 1+0 records in
> > 1+0 records out
> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
> >
> > Writing bigger file with oflag=direct is not ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
> count=100 bs=1M
> > dd: error writing ‘file3’: Operation not permitted
> > dd: closing output file ‘file3’: Operation not permitted
> >
>
>
> Thank you for these tests! would it be possible to share the brick and
> client logs?
>

 Not sure if his tests are same as my setup but here is what I end up
 with

 Volume Name: glustershard
 Type: Replicate
 Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
 Status: Started
 Number of Bricks: 1 x 3 = 3
 Transport-type: tcp
 Bricks:
 Brick1: 192.168.71.10:/gluster1/shard1/1
 Brick2: 192.168.71.11:/gluster1/shard2/1
 Brick3: 192.168.71.12:/gluster1/shard3/1
 Options Reconfigured:
 features.shard-block-size: 64MB
 features.shard: on
 server.allow-insecure: on
 storage.owner-uid: 36
 storage.owner-gid: 36
 cluster.server-quorum-type: server
 cluster.quorum-type: auto
 network.remote-dio: enable
 cluster.eager-lock: enable
 performance.stat-prefetch: off
 performance.io-cache: off
 performance.quick-read: off
 cluster.self-heal-window-size: 1024
 cluster.background-self-heal-count: 16
 nfs.enable-ino32: off
 nfs.addr-namelookup: off
 nfs.disable: on
 performance.read-ahead: off
 performance.readdir-ahead: on



  dd if=/dev/zero 
 of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/
 oflag=direct count=100 

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-27 Thread David Gossage
On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay 
wrote:

> Yes please, could you file a bug against glusterfs for this issue?
>

https://bugzilla.redhat.com/show_bug.cgi?id=1360785


>
>
> -Krutika
>
> On Wed, Jul 27, 2016 at 1:39 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> Has a bug report been filed for this issue or should l I create one with
>> the logs and results provided so far?
>>
>> *David Gossage*
>> *Carousel Checks Inc. | System Administrator*
>> *Office* 708.613.2284
>>
>> On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>>
>>>
>>>
>>> On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur 
>>> wrote:
>>>
 On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen <
 samp...@neutraali.net> wrote:
 > Here is a quick way how to test this:
 > GlusterFS 3.7.13 volume with default settings with brick on ZFS
 dataset. gluster-test1 is server and gluster-test2 is client mounting with
 FUSE.
 >
 > Writing file with oflag=direct is not ok:
 > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
 count=1 bs=1024000
 > dd: failed to open ‘file’: Invalid argument
 >
 > Enable network.remote-dio on Gluster Volume:
 > [root@gluster-test1 gluster]# gluster volume set gluster
 network.remote-dio enable
 > volume set: success
 >
 > Writing small file with oflag=direct is ok:
 > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
 count=1 bs=1024000
 > 1+0 records in
 > 1+0 records out
 > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
 >
 > Writing bigger file with oflag=direct is ok:
 > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
 count=100 bs=1M
 > 100+0 records in
 > 100+0 records out
 > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
 >
 > Enable Sharding on Gluster Volume:
 > [root@gluster-test1 gluster]# gluster volume set gluster
 features.shard enable
 > volume set: success
 >
 > Writing small file  with oflag=direct is ok:
 > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
 count=1 bs=1M
 > 1+0 records in
 > 1+0 records out
 > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
 >
 > Writing bigger file with oflag=direct is not ok:
 > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
 count=100 bs=1M
 > dd: error writing ‘file3’: Operation not permitted
 > dd: closing output file ‘file3’: Operation not permitted
 >


 Thank you for these tests! would it be possible to share the brick and
 client logs?

>>>
>>> Not sure if his tests are same as my setup but here is what I end up with
>>>
>>> Volume Name: glustershard
>>> Type: Replicate
>>> Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
>>> Status: Started
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 192.168.71.10:/gluster1/shard1/1
>>> Brick2: 192.168.71.11:/gluster1/shard2/1
>>> Brick3: 192.168.71.12:/gluster1/shard3/1
>>> Options Reconfigured:
>>> features.shard-block-size: 64MB
>>> features.shard: on
>>> server.allow-insecure: on
>>> storage.owner-uid: 36
>>> storage.owner-gid: 36
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> network.remote-dio: enable
>>> cluster.eager-lock: enable
>>> performance.stat-prefetch: off
>>> performance.io-cache: off
>>> performance.quick-read: off
>>> cluster.self-heal-window-size: 1024
>>> cluster.background-self-heal-count: 16
>>> nfs.enable-ino32: off
>>> nfs.addr-namelookup: off
>>> nfs.disable: on
>>> performance.read-ahead: off
>>> performance.readdir-ahead: on
>>>
>>>
>>>
>>>  dd if=/dev/zero 
>>> of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/
>>> oflag=direct count=100 bs=1M
>>> 81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/ __DIRECT_IO_TEST__
>>>.trashcan/
>>> [root@ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/
>>> 192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
>>> oflag=direct count=100 bs=1M
>>> dd: error writing 
>>> ‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’:
>>> Operation not permitted
>>>
>>> creates the 64M file in expected location then the shard is 0
>>>
>>> # file:
>>> gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
>>>
>>> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
>>> trusted.afr.dirty=0x
>>> trusted.bit-rot.version=0x0200579231f3000e16e7
>>> trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
>>> trusted.glusterfs.shard.block-size=0x0400
>>>
>>> trusted.glusterfs.shard.file-size=0x0401
>>>
>>>
>>> # file: 

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-26 Thread Krutika Dhananjay
Yes please, could you file a bug against glusterfs for this issue?


-Krutika

On Wed, Jul 27, 2016 at 1:39 AM, David Gossage 
wrote:

> Has a bug report been filed for this issue or should l I create one with
> the logs and results provided so far?
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
> On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>>
>>
>> On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur  wrote:
>>
>>> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen 
>>> wrote:
>>> > Here is a quick way how to test this:
>>> > GlusterFS 3.7.13 volume with default settings with brick on ZFS
>>> dataset. gluster-test1 is server and gluster-test2 is client mounting with
>>> FUSE.
>>> >
>>> > Writing file with oflag=direct is not ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>>> count=1 bs=1024000
>>> > dd: failed to open ‘file’: Invalid argument
>>> >
>>> > Enable network.remote-dio on Gluster Volume:
>>> > [root@gluster-test1 gluster]# gluster volume set gluster
>>> network.remote-dio enable
>>> > volume set: success
>>> >
>>> > Writing small file with oflag=direct is ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>>> count=1 bs=1024000
>>> > 1+0 records in
>>> > 1+0 records out
>>> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
>>> >
>>> > Writing bigger file with oflag=direct is ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>>> count=100 bs=1M
>>> > 100+0 records in
>>> > 100+0 records out
>>> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
>>> >
>>> > Enable Sharding on Gluster Volume:
>>> > [root@gluster-test1 gluster]# gluster volume set gluster
>>> features.shard enable
>>> > volume set: success
>>> >
>>> > Writing small file  with oflag=direct is ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>>> count=1 bs=1M
>>> > 1+0 records in
>>> > 1+0 records out
>>> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
>>> >
>>> > Writing bigger file with oflag=direct is not ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>>> count=100 bs=1M
>>> > dd: error writing ‘file3’: Operation not permitted
>>> > dd: closing output file ‘file3’: Operation not permitted
>>> >
>>>
>>>
>>> Thank you for these tests! would it be possible to share the brick and
>>> client logs?
>>>
>>
>> Not sure if his tests are same as my setup but here is what I end up with
>>
>> Volume Name: glustershard
>> Type: Replicate
>> Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
>> Status: Started
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.71.10:/gluster1/shard1/1
>> Brick2: 192.168.71.11:/gluster1/shard2/1
>> Brick3: 192.168.71.12:/gluster1/shard3/1
>> Options Reconfigured:
>> features.shard-block-size: 64MB
>> features.shard: on
>> server.allow-insecure: on
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> performance.stat-prefetch: off
>> performance.io-cache: off
>> performance.quick-read: off
>> cluster.self-heal-window-size: 1024
>> cluster.background-self-heal-count: 16
>> nfs.enable-ino32: off
>> nfs.addr-namelookup: off
>> nfs.disable: on
>> performance.read-ahead: off
>> performance.readdir-ahead: on
>>
>>
>>
>>  dd if=/dev/zero 
>> of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/
>> oflag=direct count=100 bs=1M
>> 81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/ __DIRECT_IO_TEST__
>>.trashcan/
>> [root@ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/
>> 192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
>> oflag=direct count=100 bs=1M
>> dd: error writing 
>> ‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’:
>> Operation not permitted
>>
>> creates the 64M file in expected location then the shard is 0
>>
>> # file: gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
>>
>> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x
>> trusted.bit-rot.version=0x0200579231f3000e16e7
>> trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
>> trusted.glusterfs.shard.block-size=0x0400
>>
>> trusted.glusterfs.shard.file-size=0x0401
>>
>>
>> # file: gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
>>
>> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x
>> trusted.gfid=0x2bfd3cc8a727489b9a0474241548fe80
>>
>>
>>> Regards,
>>> Vijay
>>> 

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-26 Thread David Gossage
Has a bug report been filed for this issue or should I create one with
the logs and results provided so far?

*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284

On Fri, Jul 22, 2016 at 12:53 PM, David Gossage  wrote:

>
>
>
> On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur  wrote:
>
>> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen 
>> wrote:
>> > Here is a quick way how to test this:
>> > GlusterFS 3.7.13 volume with default settings with brick on ZFS
>> dataset. gluster-test1 is server and gluster-test2 is client mounting with
>> FUSE.
>> >
>> > Writing file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > dd: failed to open ‘file’: Invalid argument
>> >
>> > Enable network.remote-dio on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> network.remote-dio enable
>> > volume set: success
>> >
>> > Writing small file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > 1+0 records in
>> > 1+0 records out
>> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
>> >
>> > Writing bigger file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>> count=100 bs=1M
>> > 100+0 records in
>> > 100+0 records out
>> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
>> >
>> > Enable Sharding on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> features.shard enable
>> > volume set: success
>> >
>> > Writing small file  with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>> count=1 bs=1M
>> > 1+0 records in
>> > 1+0 records out
>> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
>> >
>> > Writing bigger file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>> count=100 bs=1M
>> > dd: error writing ‘file3’: Operation not permitted
>> > dd: closing output file ‘file3’: Operation not permitted
>> >
>>
>>
>> Thank you for these tests! would it be possible to share the brick and
>> client logs?
>>
>
> Not sure if his tests are same as my setup but here is what I end up with
>
> Volume Name: glustershard
> Type: Replicate
> Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.71.10:/gluster1/shard1/1
> Brick2: 192.168.71.11:/gluster1/shard2/1
> Brick3: 192.168.71.12:/gluster1/shard3/1
> Options Reconfigured:
> features.shard-block-size: 64MB
> features.shard: on
> server.allow-insecure: on
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.quick-read: off
> cluster.self-heal-window-size: 1024
> cluster.background-self-heal-count: 16
> nfs.enable-ino32: off
> nfs.addr-namelookup: off
> nfs.disable: on
> performance.read-ahead: off
> performance.readdir-ahead: on
>
>
>
>  dd if=/dev/zero 
> of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/
> oflag=direct count=100 bs=1M
> 81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/ __DIRECT_IO_TEST__
>  .trashcan/
> [root@ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/
> 192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
> oflag=direct count=100 bs=1M
> dd: error writing 
> ‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’:
> Operation not permitted
>
> creates the 64M file in expected location then the shard is 0
>
> # file: gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
>
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.bit-rot.version=0x0200579231f3000e16e7
> trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
> trusted.glusterfs.shard.block-size=0x0400
>
> trusted.glusterfs.shard.file-size=0x0401
>
>
> # file: gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
>
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.gfid=0x2bfd3cc8a727489b9a0474241548fe80
>
>
>> Regards,
>> Vijay
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-25 Thread Krutika Dhananjay
FYI, there's been some progress on this issue and the same has been updated
on ovirt-users ML:

http://lists.ovirt.org/pipermail/users/2016-July/041413.html

-Krutika

On Fri, Jul 22, 2016 at 11:23 PM, David Gossage  wrote:

>
>
>
> On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur  wrote:
>
>> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen 
>> wrote:
>> > Here is a quick way how to test this:
>> > GlusterFS 3.7.13 volume with default settings with brick on ZFS
>> dataset. gluster-test1 is server and gluster-test2 is client mounting with
>> FUSE.
>> >
>> > Writing file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > dd: failed to open ‘file’: Invalid argument
>> >
>> > Enable network.remote-dio on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> network.remote-dio enable
>> > volume set: success
>> >
>> > Writing small file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > 1+0 records in
>> > 1+0 records out
>> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
>> >
>> > Writing bigger file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>> count=100 bs=1M
>> > 100+0 records in
>> > 100+0 records out
>> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
>> >
>> > Enable Sharding on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> features.shard enable
>> > volume set: success
>> >
>> > Writing small file  with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>> count=1 bs=1M
>> > 1+0 records in
>> > 1+0 records out
>> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
>> >
>> > Writing bigger file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>> count=100 bs=1M
>> > dd: error writing ‘file3’: Operation not permitted
>> > dd: closing output file ‘file3’: Operation not permitted
>> >
>>
>>
>> Thank you for these tests! would it be possible to share the brick and
>> client logs?
>>
>
> Not sure if his tests are same as my setup but here is what I end up with
>
> Volume Name: glustershard
> Type: Replicate
> Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.71.10:/gluster1/shard1/1
> Brick2: 192.168.71.11:/gluster1/shard2/1
> Brick3: 192.168.71.12:/gluster1/shard3/1
> Options Reconfigured:
> features.shard-block-size: 64MB
> features.shard: on
> server.allow-insecure: on
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.quick-read: off
> cluster.self-heal-window-size: 1024
> cluster.background-self-heal-count: 16
> nfs.enable-ino32: off
> nfs.addr-namelookup: off
> nfs.disable: on
> performance.read-ahead: off
> performance.readdir-ahead: on
>
>
>
>  dd if=/dev/zero 
> of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/
> oflag=direct count=100 bs=1M
> 81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/ __DIRECT_IO_TEST__
>  .trashcan/
> [root@ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/
> 192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
> oflag=direct count=100 bs=1M
> dd: error writing 
> ‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’:
> Operation not permitted
>
> creates the 64M file in expected location then the shard is 0
>
> # file: gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
>
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.bit-rot.version=0x0200579231f3000e16e7
> trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
> trusted.glusterfs.shard.block-size=0x0400
>
> trusted.glusterfs.shard.file-size=0x0401
>
>
> # file: gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
>
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.gfid=0x2bfd3cc8a727489b9a0474241548fe80
>
>
>> Regards,
>> Vijay
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-22 Thread David Gossage
On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur  wrote:

> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen 
> wrote:
> > Here is a quick way how to test this:
> > GlusterFS 3.7.13 volume with default settings with brick on ZFS dataset.
> gluster-test1 is server and gluster-test2 is client mounting with FUSE.
> >
> > Writing file with oflag=direct is not ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
> count=1 bs=1024000
> > dd: failed to open ‘file’: Invalid argument
> >
> > Enable network.remote-dio on Gluster Volume:
> > [root@gluster-test1 gluster]# gluster volume set gluster
> network.remote-dio enable
> > volume set: success
> >
> > Writing small file with oflag=direct is ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
> count=1 bs=1024000
> > 1+0 records in
> > 1+0 records out
> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
> >
> > Writing bigger file with oflag=direct is ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
> count=100 bs=1M
> > 100+0 records in
> > 100+0 records out
> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
> >
> > Enable Sharding on Gluster Volume:
> > [root@gluster-test1 gluster]# gluster volume set gluster features.shard
> enable
> > volume set: success
> >
> > Writing small file  with oflag=direct is ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
> count=1 bs=1M
> > 1+0 records in
> > 1+0 records out
> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
> >
> > Writing bigger file with oflag=direct is not ok:
> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
> count=100 bs=1M
> > dd: error writing ‘file3’: Operation not permitted
> > dd: closing output file ‘file3’: Operation not permitted
> >
>
>
> Thank you for these tests! would it be possible to share the brick and
> client logs?
>

Not sure if his tests are the same as my setup, but here is what I end up with:

Volume Name: glustershard
Type: Replicate
Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.71.10:/gluster1/shard1/1
Brick2: 192.168.71.11:/gluster1/shard2/1
Brick3: 192.168.71.12:/gluster1/shard3/1
Options Reconfigured:
features.shard-block-size: 64MB
features.shard: on
server.allow-insecure: on
storage.owner-uid: 36
storage.owner-gid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.quick-read: off
cluster.self-heal-window-size: 1024
cluster.background-self-heal-count: 16
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
performance.read-ahead: off
performance.readdir-ahead: on
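
For reference, a test volume like the one above can be stood up with roughly
the following (a sketch reconstructed from the volume info listing, not the
exact command history):

# create and start the replica-3 test volume across the three bricks
gluster volume create glustershard replica 3 \
    192.168.71.10:/gluster1/shard1/1 \
    192.168.71.11:/gluster1/shard2/1 \
    192.168.71.12:/gluster1/shard3/1
gluster volume start glustershard
# the options most relevant to this thread
gluster volume set glustershard features.shard on
gluster volume set glustershard features.shard-block-size 64MB
gluster volume set glustershard network.remote-dio enable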



 dd if=/dev/zero
of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/
oflag=direct count=100 bs=1M
81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/ __DIRECT_IO_TEST__
 .trashcan/
[root@ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/
192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
oflag=direct count=100 bs=1M
dd: error writing
‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’:
Operation not permitted

creates the 64M file in expected location then the shard is 0

# file: gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x
trusted.bit-rot.version=0x0200579231f3000e16e7
trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x0401


# file: gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x
trusted.gfid=0x2bfd3cc8a727489b9a0474241548fe80
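
Listings like the two above can be reproduced on a brick node with something
along these lines (the getfattr flags and brick paths are an assumption based
on the output format):

# dump all extended attributes, hex-encoded, for the base file and its first shard
getfattr -d -m . -e hex \
    /gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
getfattr -d -m . -e hex \
    /gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1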


> Regards,
> Vijay
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
[2016-07-22 16:26:44.645166] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.13 (args: /usr/sbin/glusterfs --volfile-server=192.168.71.11 --volfile-server=192.168.71.10 --volfile-server=192.168.71.12 --volfile-id=/glustershard /rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard)
[2016-07-22 16:26:44.655674] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-07-22 16:26:44.661967] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2016-07-22 16:26:44.662718] I [MSGID: 114020] 

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-22 Thread Vijay Bellur
On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen  wrote:
> Here is a quick way how to test this:
> GlusterFS 3.7.13 volume with default settings with brick on ZFS dataset. 
> gluster-test1 is server and gluster-test2 is client mounting with FUSE.
>
> Writing file with oflag=direct is not ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 
> bs=1024000
> dd: failed to open ‘file’: Invalid argument
>
> Enable network.remote-dio on Gluster Volume:
> [root@gluster-test1 gluster]# gluster volume set gluster network.remote-dio 
> enable
> volume set: success
>
> Writing small file with oflag=direct is ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 
> bs=1024000
> 1+0 records in
> 1+0 records out
> 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
>
> Writing bigger file with oflag=direct is ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 
> bs=1M
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
>
> Enable Sharding on Gluster Volume:
> [root@gluster-test1 gluster]# gluster volume set gluster features.shard enable
> volume set: success
>
> Writing small file  with oflag=direct is ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=1 
> bs=1M
> 1+0 records in
> 1+0 records out
> 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
>
> Writing bigger file with oflag=direct is not ok:
> [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 
> bs=1M
> dd: error writing ‘file3’: Operation not permitted
> dd: closing output file ‘file3’: Operation not permitted
>


Thank you for these tests! would it be possible to share the brick and
client logs?
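
On a default install those usually live under /var/log/glusterfs (an
assumption; adjust for your packaging):

# on each server, one log per brick process
ls /var/log/glusterfs/bricks/
# on the client, the FUSE mount log, named after the mount point
ls /var/log/glusterfs/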

Regards,
Vijay

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-22 Thread Samuli Heinonen
Here is a quick way how to test this:
GlusterFS 3.7.13 volume with default settings with brick on ZFS dataset. 
gluster-test1 is server and gluster-test2 is client mounting with FUSE.

Writing file with oflag=direct is not ok:
[root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 
bs=1024000
dd: failed to open ‘file’: Invalid argument

Enable network.remote-dio on Gluster Volume:
[root@gluster-test1 gluster]# gluster volume set gluster network.remote-dio 
enable
volume set: success

Writing small file with oflag=direct is ok:
[root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 
bs=1024000
1+0 records in
1+0 records out
1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s

Writing bigger file with oflag=direct is ok:
[root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 
bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s

Enable Sharding on Gluster Volume:
[root@gluster-test1 gluster]# gluster volume set gluster features.shard enable
volume set: success

Writing small file  with oflag=direct is ok:
[root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=1 
bs=1M
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s

Writing bigger file with oflag=direct is not ok:
[root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 
bs=1M
dd: error writing ‘file3’: Operation not permitted
dd: closing output file ‘file3’: Operation not permitted

-samuli


> On 22 Jul 2016, at 16:12, Vijay Bellur  wrote:
> 
> 2016-07-22 1:54 GMT-04:00 Frank Rothenstein 
> :
>> The point is that even if all other backend storage filesystems do correctly
>> untill 3.7.11 there was no error on ZFS. Something happened nobody ever
>> could explain in the release of 3.7.12 that makes FUSE-mount _in ovirt_ (it
>> partly uses dd with iflag=direct  , using iflag=direct yourself gives also
>> errors on the FUSE-mounts ) unusable.
>> 
>> So 3.7.11 is the last usable version when using ZFS on bricks, afaik.
>> 
> 
> Can you please share the exact dd command that causes this problem?
> 
> Thanks,
> Vijay
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-22 Thread David Gossage
On Fri, Jul 22, 2016 at 8:12 AM, Vijay Bellur  wrote:

> 2016-07-22 1:54 GMT-04:00 Frank Rothenstein <
> f.rothenst...@bodden-kliniken.de>:
> > The point is that even if all other backend storage filesystems do
> correctly
> > untill 3.7.11 there was no error on ZFS. Something happened nobody ever
> > could explain in the release of 3.7.12 that makes FUSE-mount _in ovirt_
> (it
> > partly uses dd with iflag=direct  , using iflag=direct yourself gives
> also
> > errors on the FUSE-mounts ) unusable.
> >
> > So 3.7.11 is the last usable version when using ZFS on bricks, afaik.
> >
>
> Can you please share the exact dd command that causes this problem?
>

I want to say it was this one, though my logs for vdsm going back that far
have rolled off:

 /usr/bin/dd
if=/rhev/data-center/mnt/glusterSD/ccgl1.gl.local:GLUSTER1/7c73a8dd-a72e-4556-ac88-7f6813131e64/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1

>
> Thanks,
> Vijay
>

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-22 Thread David Gossage
On Fri, Jul 22, 2016 at 8:23 AM, Samuli Heinonen 
wrote:

>
> > On 21 Jul 2016, at 20:48, David Gossage 
> wrote:
> >
> > Wonder if this may be related at all
> >
> > * #1347553: O_DIRECT support for sharding
> > https://bugzilla.redhat.com/show_bug.cgi?id=1347553
> >
> > Is it possible to downgrade from 3.8 back to 3.7.x
> >
> > Building test box right now anyway but wondering.
> >
>
> Have you been able to do any testing yet?
>

I had time to get the box built up to the point of getting the ZFS pool made
before I left work yesterday.  I hope to have working gluster volumes on
various fs backends by end of day.

I figure 1 volume on xfs with sharding, 1 volume on zfs sharded, 1 on zfs with
no shards, and maybe I'll make 1 zvol with xfs on top of it also to see what
happens.


> "O_DIRECT support for sharding" has been also included in 3.7.12. Is this
> problem occurring only when sharding is enabled? Is it possible that it
> requires direct I/O all the way to the bricks with sharding even when
> network.remote-dio is enabled?
>

It's possible, though at the time of the update, when I had issues getting it
to stay connected to oVirt, I had just turned sharding on and as yet had no
sharded images.  I also turned the feature off during tests with no benefit
in stability.

>
> Best regards,
> Samuli Heinonen
>
>

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-22 Thread Samuli Heinonen

> On 21 Jul 2016, at 20:48, David Gossage  wrote:
> 
> Wonder if this may be related at all
> 
> * #1347553: O_DIRECT support for sharding
> https://bugzilla.redhat.com/show_bug.cgi?id=1347553
> 
> Is it possible to downgrade from 3.8 back to 3.7.x 
> 
> Building test box right now anyway but wondering.
> 

Have you been able to do any testing yet?

"O_DIRECT support for sharding" has been also included in 3.7.12. Is this 
problem occurring only when sharding is enabled? Is it possible that it 
requires direct I/O all the way to the bricks with sharding even when 
network.remote-dio is enabled?
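
One way to narrow that down would be something along these lines (a sketch;
the brick path is assumed from the volume listing earlier in the thread):

# does the brick-backing ZFS dataset itself accept O_DIRECT writes?
# (run on a brick node, outside the brick subdirectory)
dd if=/dev/zero of=/gluster1/shard1/direct-test oflag=direct count=100 bs=1M
# what the volume is actually configured with
gluster volume info glustershard | grep -E 'remote-dio|shard'
# then repeat the same dd through the FUSE mount with features.shard on and off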

Best regards,
Samuli Heinonen



Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-22 Thread David Gossage
2016-07-22 2:32 GMT-05:00 Frank Rothenstein <
f.rothenst...@bodden-kliniken.de>:

> I can't tell myself, I'm using the ovirt-4.0-centos-gluster37 repo
> (from ovirt-release40). I have a second gluster-cluster as storage, I
> didn't dare to upgrade, as it simply works...not as an ovirt/vm storage.
>
> On Friday, 22 Jul 2016 at 08:28 +0200, Gandalf Corvotempesta wrote:
>
> On 22 Jul 2016 at 07:54, "Frank Rothenstein" <
> f.rothenst...@bodden-kliniken.de> wrote:
> >
> > So 3.7.11 is the last usable version when using ZFS on bricks, afaik.
>
> Even with 3.8 this issue is present?
>
>
I believe Lindsay may have tested 3.8 and found it is still there as well.




Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-22 Thread Frank Rothenstein
I can't tell myself; I'm using the ovirt-4.0-centos-gluster37 repo
(from ovirt-release40). I have a second gluster cluster as storage that I
didn't dare to upgrade, as it simply works... though not as an oVirt/VM
storage.
On Friday, 22 Jul 2016 at 08:28 +0200, Gandalf Corvotempesta wrote:
> On 22 Jul 2016 at 07:54, "Frank Rothenstein" wrote:
> > So 3.7.11 is the last usable version when using ZFS on bricks, afaik.
>
> Even with 3.8 this issue is present?



 


Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-22 Thread Gandalf Corvotempesta
On 22 Jul 2016 at 07:54, "Frank Rothenstein"
wrote:
>
> So 3.7.11 is the last usable version when using ZFS on bricks, afaik.

Even with 3.8 this issue is present?

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread Frank Rothenstein
The point is that even if all other backend storage filesystems behave
correctly, until 3.7.11 there was no error on ZFS. Something happened in the
release of 3.7.12 that nobody has ever been able to explain, which makes the
FUSE mount _in oVirt_ unusable (it partly uses dd with iflag=direct; using
iflag=direct yourself also gives errors on the FUSE mounts).

So 3.7.11 is the last usable version when using ZFS on bricks, afaik.
On Friday, 22 Jul 2016 at 07:52 +1000, Lindsay Mathieson wrote:
> On 22/07/2016 6:14 AM, David Gossage wrote:
>
> >   https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4
> >
> >   * New asynchronous I/O (AIO) support.
>
> Only for ZVOLS I think, not datasets.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users



 


Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread Lindsay Mathieson

On 22/07/2016 4:00 AM, David Gossage wrote:
May be anecdotal with small sample size but the few people who have 
had issue all seemed to have zfs backed gluster volumes.


Good point - all my volumes are backed by ZFS, and when using it directly
for virt storage I have to enable caching due to lack of O_DIRECT support.
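
For example, with cache=none qemu opens the image with O_DIRECT, which a ZFS
dataset rejects; something like the following works instead (image path is a
placeholder):

# writeback cache: buffered I/O instead of O_DIRECT
qemu-system-x86_64 -m 1024 -enable-kvm \
  -drive file=/tank/images/test.qcow2,if=virtio,cache=writeback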



Note: AIO support was just a theory on my part


--
Lindsay Mathieson



Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread Lindsay Mathieson

On 22/07/2016 6:14 AM, David Gossage wrote:

https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4

  * New asynchronous I/O (AIO) support.



Only for  ZVOLS I think, not datasets.

--
Lindsay Mathieson


Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread David Gossage
On Thu, Jul 21, 2016 at 2:48 PM, Kaleb KEITHLEY  wrote:

> On 07/21/2016 02:38 PM, Samuli Heinonen wrote:
> > Hi all,
> >
> > I’m running oVirt 3.6 and Gluster 3.7 with ZFS backend.
> > ...
> > Afaik ZFS on Linux doesn’t support aio. Has there been some changes to
> GlusterFS regarding aio?
> >
I was under the impression it did:

https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4


   - New asynchronous I/O (AIO) support.
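
To check which ZFS release a brick node is actually running (the sysfs path is
an assumption for ZFS on Linux):

# version of the loaded zfs kernel module
cat /sys/module/zfs/version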



> Boy, if that isn't a smoking gun, I don't know what is.
>
> --
>
> Kaleb
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread Kaleb KEITHLEY
On 07/21/2016 02:38 PM, Samuli Heinonen wrote:
> Hi all,
> 
> I’m running oVirt 3.6 and Gluster 3.7 with ZFS backend. 
> ...
> Afaik ZFS on Linux doesn’t support aio. Has there been some changes to 
> GlusterFS regarding aio?
> 

Boy, if that isn't a smoking gun, I don't know what is.

--

Kaleb


Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread Samuli Heinonen
Hi all,

I'm running oVirt 3.6 and Gluster 3.7 with a ZFS backend. All hypervisor and
storage nodes have CentOS 7. I was planning to upgrade to 3.7.13 during the
weekend but I'll probably wait for more information on this issue.

AFAIK ZFS on Linux doesn't support AIO. Have there been some changes to
GlusterFS regarding AIO?

Best regards,
Samuli Heinonen

> On 21 Jul 2016, at 21:00, David Gossage  wrote:
> 
> On Thu, Jul 21, 2016 at 12:48 PM, David Gossage  
> wrote:
> On Thu, Jul 21, 2016 at 9:58 AM, David Gossage  
> wrote:
> On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos  wrote:
> On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> > Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!
> >
> >
> > However I do have to enable write-back or write-through caching in qemu
> > before the vm's will start, I believe this is to do with aio support. Not a
> > problem for me.
> >
> > I see there are settings for storage.linux-aio and storage.bd-aio - not sure
> > as to whether they are relevant or which ones to play with.
> 
> Both storage.*-aio options are used by the brick processes. Depending on
> what type of brick you have (linux = filesystem, bd = LVM Volume Group)
> you could enable the one or the other.
> 
> We do have a strong suggestion to set these "gluster volume group .."
> options:
>   https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
> 
> From those options, network.remote-dio seems most related to your aio
> theory. It was introduced with http://review.gluster.org/4460 that
> contains some more details.
> 
> 
> Wonder if this may be related at all
> 
> * #1347553: O_DIRECT support for sharding
> https://bugzilla.redhat.com/show_bug.cgi?id=1347553
> 
> Is it possible to downgrade from 3.8 back to 3.7.x 
> 
> Building test box right now anyway but wondering.
> 
> May be anecdotal with small sample size but the few people who have had issue 
> all seemed to have zfs backed gluster volumes.
> 
> Now that i recall back to the day I updated.  The gluster volume on xfs I use 
> for my hosted engine never had issues.
>  
> 
>  
> 
> Thanks with the exception of stat-prefetch I have those enabled 
> I could try turning that back off though at the time of update to 3.7.13 it 
> was off.  I didnt turn it back on till later in next week after downgrading 
> back to 3.7.11.  
> 
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
> Brick2: ccgl2.gl.local:/gluster1/BRICK1/1
> Brick3: ccgl4.gl.local:/gluster1/BRICK1/1
> Options Reconfigured:
> diagnostics.brick-log-level: WARNING
> features.shard-block-size: 64MB
> features.shard: on
> performance.readdir-ahead: on
> storage.owner-uid: 36
> storage.owner-gid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: on
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> server.allow-insecure: on
> cluster.self-heal-window-size: 1024
> cluster.background-self-heal-count: 16
> performance.strict-write-ordering: off
> nfs.disable: on
> nfs.addr-namelookup: off
> nfs.enable-ino32: off
> 
> 
> HTH,
> Niels
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread David Gossage
On Thu, Jul 21, 2016 at 12:48 PM, David Gossage  wrote:

> On Thu, Jul 21, 2016 at 9:58 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos  wrote:
>>
>>> On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
>>> > Did a quick test this morning - 3.7.13 is now working with libgfapi -
>>> yay!
>>> >
>>> >
>>> > However I do have to enable write-back or write-through caching in qemu
>>> > before the vm's will start, I believe this is to do with aio support.
>>> Not a
>>> > problem for me.
>>> >
>>> > I see there are settings for storage.linux-aio and storage.bd-aio -
>>> not sure
>>> > as to whether they are relevant or which ones to play with.
>>>
>>> Both storage.*-aio options are used by the brick processes. Depending on
>>> what type of brick you have (linux = filesystem, bd = LVM Volume Group)
>>> you could enable the one or the other.
>>>
>>> We do have a strong suggestion to set these "gluster volume group .."
>>> options:
>>>
>>> https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
>>>
>>> From those options, network.remote-dio seems most related to your aio
>>> theory. It was introduced with http://review.gluster.org/4460 that
>>> contains some more details.
>>>
>>
>
> Wonder if this may be related at all
>
> * #1347553: O_DIRECT support for sharding
> https://bugzilla.redhat.com/show_bug.cgi?id=1347553
>
> Is it possible to downgrade from 3.8 back to 3.7.x?
>
> Building test box right now anyway but wondering.
>

May be anecdotal given the small sample size, but the few people who have had
issues all seemed to have ZFS-backed Gluster volumes.

Now that I think back to the day I updated, the Gluster volume on XFS I
use for my hosted engine never had issues.
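
A quick way to narrow that down on a test box is to probe whether the brick
filesystem itself accepts O_DIRECT opens at all. A rough sketch below; the
brick path is the one from the volume info in this thread, and the idea that
the ZFS-on-Linux releases of that era reject O_DIRECT while XFS accepts it is
an assumption worth verifying locally rather than an established fact.

   # run on a brick host, against the brick path (not the FUSE mount);
   # on a filesystem without O_DIRECT support dd should fail with
   # "Invalid argument" (EINVAL)
   dd if=/dev/zero of=/gluster1/BRICK1/1/o_direct_probe bs=1M count=1 oflag=direct
   rm -f /gluster1/BRICK1/1/o_direct_probe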


>
>
>
>>
>> Thanks. With the exception of stat-prefetch I have those enabled.
>> I could try turning that back off, though at the time of the update to 3.7.13
>> it was off. I didn't turn it back on until later the next week, after
>> downgrading back to 3.7.11.
>>
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
>> Brick2: ccgl2.gl.local:/gluster1/BRICK1/1
>> Brick3: ccgl4.gl.local:/gluster1/BRICK1/1
>> Options Reconfigured:
>> diagnostics.brick-log-level: WARNING
>> features.shard-block-size: 64MB
>> features.shard: on
>> performance.readdir-ahead: on
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: on
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> server.allow-insecure: on
>> cluster.self-heal-window-size: 1024
>> cluster.background-self-heal-count: 16
>> performance.strict-write-ordering: off
>> nfs.disable: on
>> nfs.addr-namelookup: off
>> nfs.enable-ino32: off
>>
>>
>>> HTH,
>>> Niels
>>>
>>>
>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread David Gossage
On Thu, Jul 21, 2016 at 9:58 AM, David Gossage 
wrote:

> On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos  wrote:
>
>> On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
>> > Did a quick test this morning - 3.7.13 is now working with libgfapi -
>> yay!
>> >
>> >
>> > However I do have to enable write-back or write-through caching in qemu
>> > before the vm's will start, I believe this is to do with aio support.
>> Not a
>> > problem for me.
>> >
>> > I see there are settings for storage.linux-aio and storage.bd-aio - not
>> sure
>> > as to whether they are relevant or which ones to play with.
>>
>> Both storage.*-aio options are used by the brick processes. Depending on
>> what type of brick you have (linux = filesystem, bd = LVM Volume Group)
>> you could enable the one or the other.
>>
>> We do have a strong suggestion to set these "gluster volume group .."
>> options:
>>
>> https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
>>
>> From those options, network.remote-dio seems most related to your aio
>> theory. It was introduced with http://review.gluster.org/4460 that
>> contains some more details.
>>
>

Wonder if this may be related at all

* #1347553: O_DIRECT support for sharding
https://bugzilla.redhat.com/show_bug.cgi?id=1347553

Is it possible to downgrade from 3.8 back to 3.7.x?

Building test box right now anyway but wondering.
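
Once the test box is up, a rough way to exercise O_DIRECT through the FUSE
mount of a sharded volume is something like the sketch below; the mount point
and file name are illustrative, and 128MB is simply chosen to be larger than
the 64MB shard size so the writes cross shard boundaries.

   # write and read back through the FUSE mount with O_DIRECT
   dd if=/dev/zero of=/mnt/testvol/direct-test.img bs=1M count=128 oflag=direct
   dd if=/mnt/testvol/direct-test.img of=/dev/null bs=1M iflag=direct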



>
> Thanks. With the exception of stat-prefetch I have those enabled.
> I could try turning that back off, though at the time of the update to 3.7.13
> it was off. I didn't turn it back on until later the next week, after
> downgrading back to 3.7.11.
>
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
> Brick2: ccgl2.gl.local:/gluster1/BRICK1/1
> Brick3: ccgl4.gl.local:/gluster1/BRICK1/1
> Options Reconfigured:
> diagnostics.brick-log-level: WARNING
> features.shard-block-size: 64MB
> features.shard: on
> performance.readdir-ahead: on
> storage.owner-uid: 36
> storage.owner-gid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: on
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> server.allow-insecure: on
> cluster.self-heal-window-size: 1024
> cluster.background-self-heal-count: 16
> performance.strict-write-ordering: off
> nfs.disable: on
> nfs.addr-namelookup: off
> nfs.enable-ino32: off
>
>
>> HTH,
>> Niels
>>
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread Niels de Vos
On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!
> 
> 
> However I do have to enable write-back or write-through caching in qemu
> before the vm's will start, I believe this is to do with aio support. Not a
> problem for me.
> 
> I see there are settings for storage.linux-aio and storage.bd-aio - not sure
> as to whether they are relevant or which ones to play with.

Both storage.*-aio options are used by the brick processes. Depending on
what type of brick you have (linux = filesystem, bd = LVM Volume Group)
you could enable the one or the other.

We do have a strong suggestion to set these "gluster volume group .."
options:
  https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example

From those options, network.remote-dio seems most related to your aio
theory. It was introduced with http://review.gluster.org/4460 that
contains some more details.
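
As a rough sketch (the volume name is illustrative, and the path of the group
file can vary by distribution, typically /var/lib/glusterd/groups/virt), the
whole group can be applied in one command, or the options most relevant to
this thread set individually:

   # apply the full virt group of options
   gluster volume set VOLNAME group virt

   # or set just the two options discussed here
   gluster volume set VOLNAME network.remote-dio enable
   gluster volume set VOLNAME performance.stat-prefetch off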

HTH,
Niels


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread David Gossage
On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos  wrote:

> On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> > Did a quick test this morning - 3.7.13 is now working with libgfapi -
> yay!
> >
> >
> > However I do have to enable write-back or write-through caching in qemu
> > before the vm's will start, I believe this is to do with aio support.
> Not a
> > problem for me.
> >
> > I see there are settings for storage.linux-aio and storage.bd-aio - not
> sure
> > as to whether they are relevant or which ones to play with.
>
> Both storage.*-aio options are used by the brick processes. Depending on
> what type of brick you have (linux = filesystem, bd = LVM Volume Group)
> you could enable the one or the other.
>
> We do have a strong suggestion to set these "gluster volume group .."
> options:
>
> https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
>
> From those options, network.remote-dio seems most related to your aio
> theory. It was introduced with http://review.gluster.org/4460 that
> contains some more details.
>

Thanks. With the exception of stat-prefetch I have those enabled.
I could try turning that back off (see the sketch after the option list below),
though at the time of the update to 3.7.13 it was off; I didn't turn it back on
until later the next week, after downgrading back to 3.7.11.

Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
Brick2: ccgl2.gl.local:/gluster1/BRICK1/1
Brick3: ccgl4.gl.local:/gluster1/BRICK1/1
Options Reconfigured:
diagnostics.brick-log-level: WARNING
features.shard-block-size: 64MB
features.shard: on
performance.readdir-ahead: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: on
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
server.allow-insecure: on
cluster.self-heal-window-size: 1024
cluster.background-self-heal-count: 16
performance.strict-write-ordering: off
nfs.disable: on
nfs.addr-namelookup: off
nfs.enable-ino32: off
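
A minimal sketch of toggling that single option and confirming the change (the
volume name is illustrative):

   gluster volume set VOLNAME performance.stat-prefetch off
   gluster volume info VOLNAME | grep stat-prefetch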


> HTH,
> Niels
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread David Gossage
On Thu, Jul 21, 2016 at 9:33 AM, Kaleb KEITHLEY  wrote:

> On 07/21/2016 10:19 AM, David Gossage wrote:
> > Has there been any release note or bug report indicating that the removal
> > of AIO support was intentional?
>
> Build logs of 3.7.13 on Fedora and Ubuntu PPA (Launchpad) show that when
> `configure` ran during the build it reported that Linux AIO was enabled.
>
> What packages are you using? On which Linux distribution?
>

 glusterfs-epel.repo
http://download.gluster.org/pub/gluster/glusterfs/3.7/
glusterfs-client-xlators-3.7.11-1.el7.x86_64
glusterfs-cli-3.7.11-1.el7.x86_64
glusterfs-libs-3.7.11-1.el7.x86_64
glusterfs-3.7.11-1.el7.x86_64
glusterfs-fuse-3.7.11-1.el7.x86_64
glusterfs-server-3.7.11-1.el7.x86_64
glusterfs-api-3.7.11-1.el7.x86_64


Had to downgrade from 3.7.12/13 as I couldn't keep the storage connection stable.
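
For reference, a rough sketch of how such a downgrade might be done on CentOS 7,
one node at a time with that node's VMs stopped or migrated first; the package
names match the list above, but the exact set depends on what is installed, and
the 3.7.11 packages need to still be available in the configured repo:

   systemctl stop glusterd
   # brick and self-heal daemon processes may also need to be stopped
   yum downgrade glusterfs-3.7.11-1.el7 glusterfs-server-3.7.11-1.el7 \
       glusterfs-fuse-3.7.11-1.el7 glusterfs-api-3.7.11-1.el7 \
       glusterfs-cli-3.7.11-1.el7 glusterfs-libs-3.7.11-1.el7 \
       glusterfs-client-xlators-3.7.11-1.el7
   systemctl start glusterd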



Maybe the issue is different, then.  I just know that after updating to 3.7.12
and 13, a few of us on the oVirt and Gluster mailing lists have been having
issues that only resolve by changing how images are accessed over FUSE, or by
moving to NFS and away from FUSE altogether.
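
For completeness, a sketch of the qemu cache-mode workaround mentioned earlier
in the thread, run against an image on the FUSE mount; the image path and
memory size are illustrative:

   qemu-system-x86_64 -m 2048 \
       -drive file=/mnt/glusterfs/images/test-vm.qcow2,format=qcow2,if=virtio,cache=writeback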

I'll see if I can gather enough useful information to file a bug report.


> You might like to file a bug report at
> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
>
>
> --
>
> Kaleb
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread Kaleb KEITHLEY
On 07/21/2016 10:19 AM, David Gossage wrote:
> Has there been any release note or bug report indicating that the removal of
> AIO support was intentional?

Build logs of 3.7.13 on Fedora and Ubuntu PPA (Launchpad) show that when
`configure` ran during the build it reported that Linux AIO was enabled.
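
A quick way to double-check that from a source tree is roughly the following
(this assumes a glusterfs source checkout with the build dependencies
installed; the grep just surfaces the AIO-related checks and summary lines):

   ./autogen.sh
   ./configure 2>&1 | grep -i aio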

What packages are you using? On which Linux distribution?

You might like to file a bug report at
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS


--

Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users