Re: [Gluster-users] Issue when upgrading from 3.6 to 3.7

2016-07-26 Thread Manikandan Selvaganesh
Hi Ram,

Apologies. I was stuck on something else. I will update you by EOD.

On Wed, Jul 27, 2016 at 10:11 AM, B.K.Raghuram  wrote:

> Hi Manikandan,
>
> Did you have a chance to look at the glusterd config files? We've tried a
> couple of times to upgrade from 3.6.1 and the vol info files never seem to
> get a quota-version flag in them. One of our installations is stuck at the
> old version because of potential upgrade issues to 3.7.13.
>
> Thanks,
> -Ram
>
> On Mon, Jul 25, 2016 at 6:40 PM, Manikandan Selvaganesh <
> mselv...@redhat.com> wrote:
>
>> Hi,
>>
>> It would work fine with a fresh install, as opposed to the upgraded setup.
>> And yes, if quota-version is not present it can cause malfunctions such as
>> checksum issues and peer rejection, and quota would not work properly. This
>> quota-version was introduced recently; it adds a suffix to the quota-related
>> extended attributes.
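For reference, a quick way to check whether an upgraded node actually picked up the flag (VOLNAME and the brick path below are placeholders):

    # look for the quota-version key in the volume info file on each node
    grep quota-version /var/lib/glusterd/vols/VOLNAME/info

    # and inspect the quota-related extended attributes on a brick directory
    getfattr -d -m . -e hex /path/to/brick | grep quota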
>>
>> On Jul 25, 2016 6:36 PM, "B.K.Raghuram"  wrote:
>>
>>> Manikandan,
>>>
>>> We just overwrote the setup with a fresh install and there I see the
>>> quota-version in the volume info file. For the upgraded setup, I only have
>>> the /var/lib/glusterd, which I'm attaching. Once we recreate this, I'll
>>> send you the rest of the info.
>>>
>>> However, is there an issue if the quota-version is not present in the info
>>> file? Will it cause the quota functionality to malfunction?
>>>
>>> On Mon, Jul 25, 2016 at 5:41 PM, Manikandan Selvaganesh <
>>> mselv...@redhat.com> wrote:
>>>
 Hi,

 Could you please attach the vol files, log files and the output of
 gluster v info?

 On Mon, Jul 25, 2016 at 5:35 PM, Atin Mukherjee 
 wrote:

>
>
> On Mon, Jul 25, 2016 at 4:37 PM, B.K.Raghuram 
> wrote:
>
>> Atin,
>>
>> Couple of quick questions about the upgrade and in general about the
>> meaning of some of the parameters in the glusterd dir..
>>
>> - I don't see the quota-version in the volume info file post upgrade,
>> so did the upgrade not go through properly?
>>
>
> If you are seeing a checksum issue you'd need to copy the same volume
> info file to the node where the checksum went wrong and then restart
> the glusterd service.
> And yes, this looks like a bug in quota. @Mani - time to chip in :)
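Roughly, that recovery would look something like this (VOLNAME and the node name are placeholders):

    # from a node whose info file is correct, copy it to the rejected node
    scp /var/lib/glusterd/vols/VOLNAME/info rejected-node:/var/lib/glusterd/vols/VOLNAME/
    # then restart glusterd on the rejected node
    systemctl restart glusterd    # or: service glusterd restart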
>
> - What does the op-version in the volume info file mean? Does this
>> have any correlation with the cluster op-version? Does it change with an
>> upgrade?
>>
>
> The volume's op-version is different. It is basically used to check
> client compatibility and, AFAIK and as far as I remember from the code,
> it shouldn't change with an upgrade.
>
>
>> - A more basic question - should all peer probes always be done from
>> the same node, or can they be done from any node that is already in the
>> cluster? The reason I ask is that when I tried to do what was said in
>> http://gluster-documentations.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/
>> the initial cluster was initiated from node A with 5 other peers. Then post
>> upgrade, node B which was in the cluster got peer rejected. So I deleted
>> all the files except glusterd.info and then did a peer probe of A
>> from B. Then when I ran a peer status on A, it only showed one node, B.
>> Should I have probed B from A instead?
>>
>
> Peer probe can be done from any node in the trusted storage pool, so
> that's really not the issue. Ensure you keep the peer file contents
> (/var/lib/glusterd/peers) the same across all nodes, with only the node's own
> UUID differing, and then restarting the glusterd service should solve the problem.
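So, on the rejected node, the recovery is roughly (a sketch, not an exact procedure):

    systemctl stop glusterd
    # keep glusterd.info; make /var/lib/glusterd/peers match the healthy nodes
    # (each node lists every *other* peer, never its own UUID)
    systemctl start glusterd
    gluster peer status    # should no longer report "Peer Rejected"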
>
>>
>> On Sat, Jul 23, 2016 at 10:48 AM, Atin Mukherjee > > wrote:
>>
>>> I suspect it to be the new quota-version introduced in the volume
>>> info file, which may have resulted in a checksum mismatch and hence
>>> peer rejection. But we can confirm it from the log files and the respective
>>> info file contents.
>>>
>>>
>>> On Saturday 23 July 2016, B.K.Raghuram  wrote:
>>>
 Unfortunately, the setup is at a customer's place which is not
 remotely accessible. Will try and get it by early next week. But could it
 just be a mismatch of the /var/lib/glusterd files?

 On Fri, Jul 22, 2016 at 8:07 PM, Atin Mukherjee <
 amukh...@redhat.com> wrote:

> Glusterd logs from all the nodes please?
>
>
> On Friday 22 July 2016, B.K.Raghuram  wrote:
>
>> When we upgrade some nodes from 3.6.1 to 3.7.13, some of the
>> nodes give a peer status of "peer rejected" while some don't. Is there a
>> reason for this discrepancy and will the steps 

Re: [Gluster-users] Help needed in debugging a glusted-vol.log file..

2016-07-26 Thread Atin Mukherjee
On Wed, Jul 27, 2016 at 10:06 AM, B.K.Raghuram  wrote:

> A request for a quick clarification, Atin. The bind-insecure and
> allow-insecure options seem to be turned on by default from 3.7.3, so if I
> install 3.7.13, can I safely use samba/gfapi-vfs without modifying any
> parameters?
>

That's right!


>
> On Mon, Jul 25, 2016 at 9:53 AM, Atin Mukherjee 
> wrote:
>
>> This doesn't look abnormal given you are running gluster version 3.6.1.
>> In 3.6 the "allow-insecure" option is turned off by default, which
>> means glusterd will only entertain requests received on privileged ports,
>> and this node had exhausted all the privileged ports by that time. If you
>> are willing to turn on the bind-insecure option, then I do see this problem
>> going away.
>>
>> P.S.: This option is turned on by default in 3.7.
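For anyone stuck on 3.6, enabling this by hand is roughly (VOLNAME is a placeholder):

    gluster volume set VOLNAME server.allow-insecure on
    # and in /etc/glusterfs/glusterd.vol on every node, add:
    #   option rpc-auth-allow-insecure on
    # then restart glusterd on those nodes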
>>
>> ~Atin
>>
>> On Sun, Jul 24, 2016 at 8:51 PM, Atin Mukherjee 
>> wrote:
>>
>>> Will have a look at the logs tomorrow.
>>>
>>>
>>> On Sunday 24 July 2016, B.K.Raghuram  wrote:
>>>
 Some issues seem to have cropped up at a remote location and I'm trying
 to make sense of what they are. Could someone help throw some
 light on the potential issues here? I notice that something happened at
 2016-07-20 06:33:09.621556 after which a host of issues are being reported
 (around line 29655). After that there seem to be a host of communication
 issues.

 Also, does the line "0-rpc-service: Request received from non-privileged
 port. Failing request" mean that the samba access through the
 gfapi vfs module potentially failed?

>>>
>>>
>>> --
>>> --Atin
>>>
>>
>>
>>
>> --
>>
>> --Atin
>>
>
>


-- 

--Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?

2016-07-26 Thread Mohammed Rafi K C
Thanks for your feedback.

In fact, the meta xlator is loaded only on the FUSE mount. Is there any
particular reason not to use the meta-autoload xlator for the NFS server and
libgfapi?

Regards

Rafi KC

On 07/26/2016 04:05 PM, Niels de Vos wrote:
> On Tue, Jul 26, 2016 at 12:43:56PM +0530, Kaushal M wrote:
>> On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai  wrote:
>>> +1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches
>>>
>>>  -Prashanth Pai
>>>
>>> - Original Message -
 From: "Mohammed Rafi K C" 
 To: "gluster-users" , "Gluster Devel" 
 
 Sent: Tuesday, 26 July, 2016 10:44:15 AM
 Subject: [Gluster-devel] Need a way to display and flush gluster cache ?

 Hi,

 The Gluster stack has its own caching mechanism, mostly on the client side.
 But there is no concrete method to see how much memory is being consumed by
 gluster for caching, and if needed there is no way to flush the cache
 memory.

 So my first question is: do we need to implement these two features
 for the gluster cache?


 If so, I would like to discuss some of our thoughts towards it.

 (If you are not interested in the implementation discussion, you can skip
 this part :)

 1) Implement a virtual xattr on the root: on setxattr, flush all
 the cache, and on getxattr print the aggregated cache size.

 2) The gluster native client currently supports a .meta virtual directory to
 get metadata information, analogous to proc. We can implement a
 virtual file inside the .meta directory to read the cache size. We can also
 flush the cache using a special write into that file (similar to
 echoing into a proc file). This approach may be difficult to implement in
 other clients.
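As a rough sketch of what either interface could look like from a mount point (every name below is hypothetical, since neither interface exists yet):

    # option (1): virtual xattrs on the mount root
    getfattr -n glusterfs.cache.size /mnt/glustervol         # print aggregated cache size
    setfattr -n glusterfs.cache.flush -v 1 /mnt/glustervol   # flush the client-side caches

    # option (2): a virtual file under the existing .meta directory
    cat /mnt/glustervol/.meta/cache_size                     # read the cache size
    echo 1 > /mnt/glustervol/.meta/cache_size                # flush, like /proc/sys/vm/drop_caches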
>> +1 for making use of the meta-xlator. We should be making more use of it.
> Indeed, this would be nice. Maybe this can also expose the memory
> allocations like /proc/slabinfo.
>
> The io-stats xlator can dump some statistics to
> /var/log/glusterfs/samples/ and /var/lib/glusterd/stats/ . That seems to
> be acceptable too, and allows getting statistics from server-side
> processes without involving any clients.
>
> HTH,
> Niels
>
>
 3) A CLI command to display and flush the data with IP and port as
 arguments. GlusterD would need to send the op to the client from the connected
 client list. But this approach would be difficult to implement for
 libgfapi-based clients. To me, it doesn't seem to be a good option.

 Your suggestions and comments are most welcome.

 Thanks to Talur and Poornima for their suggestions.

 Regards

 Rafi KC

 ___
 Gluster-devel mailing list
 gluster-de...@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel

>>> ___
>>> Gluster-devel mailing list
>>> gluster-de...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Issue when upgrading from 3.6 to 3.7

2016-07-26 Thread B.K.Raghuram
Hi Manikandan,

Did you have a chance to look at the glusterd config files? We've tried a
couple of times to upgrade from 3.6.1 and the vol info files never seem to
get a quota-version flag in them. One of our installations is stuck at the
old version because of potential upgrade issues to 3.7.13.

Thanks,
-Ram

On Mon, Jul 25, 2016 at 6:40 PM, Manikandan Selvaganesh  wrote:

> Hi,
>
> It would work fine with a fresh install, as opposed to the upgraded setup. And
> yes, if quota-version is not present it can cause malfunctions such as checksum
> issues and peer rejection, and quota would not work properly. This quota-version
> was introduced recently; it adds a suffix to the quota-related extended
> attributes.
>
> On Jul 25, 2016 6:36 PM, "B.K.Raghuram"  wrote:
>
>> Manikandan,
>>
>> We just overwrote the setup with a fresh install and there I see the
>> quota-version in the volume info file. For the upgraded setup, I only have
>> the /var/lib/glusterd, which I'm attaching. Once we recreate this, I'll
>> send you the rest of the info.
>>
>> However, is there an issue if the quota-version is not present in the info
>> file? Will it cause the quota functionality to malfunction?
>>
>> On Mon, Jul 25, 2016 at 5:41 PM, Manikandan Selvaganesh <
>> mselv...@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> Could you please attach the vol files, log files and the output of
>>> gluster v info?
>>>
>>> On Mon, Jul 25, 2016 at 5:35 PM, Atin Mukherjee 
>>> wrote:
>>>


 On Mon, Jul 25, 2016 at 4:37 PM, B.K.Raghuram  wrote:

> Atin,
>
> Couple of quick questions about the upgrade and in general about the
> meaning of some of the parameters in the glusterd dir..
>
> - I don't see the quota-version in the volume info file post upgrade,
> so did the upgrade not go through properly?
>

 If you are seeing a checksum issue you'd need to copy the same volume
 info file to the node where the checksum went wrong and then restart
 the glusterd service.
 And yes, this looks like a bug in quota. @Mani - time to chip in :)

 - What does the op-version in the volume info file mean? Does this have
> any correlation with the cluster op-version? Does it change with an
> upgrade?
>

 The volume's op-version is different. It is basically used to check
 client compatibility and, AFAIK and as far as I remember from the code,
 it shouldn't change with an upgrade.


> - A more basic question - should all peer probes always be done from
> the same node, or can they be done from any node that is already in the
> cluster? The reason I ask is that when I tried to do what was said in
> http://gluster-documentations.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/
> the initial cluster was initiated from node A with 5 other peers. Then post
> upgrade, node B which was in the cluster got peer rejected. So I deleted
> all the files except glusterd.info and then did a peer probe of A
> from B. Then when I ran a peer status on A, it only showed one node, B.
> Should I have probed B from A instead?
>

 Peer probe can be done from any node in the trusted storage pool, so
 that's really not the issue. Ensure you keep the peer file contents
 (/var/lib/glusterd/peers) the same across all nodes, with only the node's own
 UUID differing, and then restarting the glusterd service should solve the problem.

>
> On Sat, Jul 23, 2016 at 10:48 AM, Atin Mukherjee 
> wrote:
>
>> I suspect it to be the new quota-version introduced in the volume
>> info file, which may have resulted in a checksum mismatch and hence
>> peer rejection. But we can confirm it from the log files and the respective
>> info file contents.
>>
>>
>> On Saturday 23 July 2016, B.K.Raghuram  wrote:
>>
>>> Unfortunately, the setup is at a customer's place which is not
>>> remotely accessible. Will try and get it by early next week. But could it
>>> just be a mismatch of the /var/lib/glusterd files?
>>>
>>> On Fri, Jul 22, 2016 at 8:07 PM, Atin Mukherjee >> > wrote:
>>>
 Glusterd logs from all the nodes please?


 On Friday 22 July 2016, B.K.Raghuram  wrote:

> When we upgrade some nodes from 3.6.1 to 3.7.13, some of the nodes
> give a peer status of "peer rejected" while some don't. Is there a reason
> for this discrepancy and will the steps mentioned in
> http://gluster-documentations.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/
> work for this as well?
>
> Just out of curiosity, why the line "Try the whole procedure a
> couple more times if it doesn't work right 

Re: [Gluster-users] Help needed in debugging a glusted-vol.log file..

2016-07-26 Thread B.K.Raghuram
A request for a quick clarification, Atin. The bind-insecure and
allow-insecure options seem to be turned on by default from 3.7.3, so if I
install 3.7.13, can I safely use samba/gfapi-vfs without modifying any
parameters?

On Mon, Jul 25, 2016 at 9:53 AM, Atin Mukherjee  wrote:

> This doesn't look abnormal given you are running gluster version 3.6.1.
> In 3.6 the "allow-insecure" option is turned off by default, which
> means glusterd will only entertain requests received on privileged ports,
> and this node had exhausted all the privileged ports by that time. If you
> are willing to turn on the bind-insecure option, then I do see this problem
> going away.
>
> P.S.: This option is turned on by default in 3.7.
>
> ~Atin
>
> On Sun, Jul 24, 2016 at 8:51 PM, Atin Mukherjee 
> wrote:
>
>> Will have a look at the logs tomorrow.
>>
>>
>> On Sunday 24 July 2016, B.K.Raghuram  wrote:
>>
>>> Some issues seem to have cropped up at a remote location and I'm trying
>>> to make sense of what they are. Could someone help throw some
>>> light on the potential issues here? I notice that something happened at
>>> 2016-07-20 06:33:09.621556 after which a host of issues are being reported
>>> (around line 29655). After that there seem to be a host of communication
>>> issues.
>>>
>>> Also, does the line "0-rpc-service: Request received from non-privileged
>>> port. Failing request" mean that the samba access through the
>>> gfapi vfs module potentially failed?
>>>
>>
>>
>> --
>> --Atin
>>
>
>
>
> --
>
> --Atin
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-26 Thread Krutika Dhananjay
Yes please, could you file a bug against glusterfs for this issue?


-Krutika

On Wed, Jul 27, 2016 at 1:39 AM, David Gossage 
wrote:

> Has a bug report been filed for this issue or should I create one with
> the logs and results provided so far?
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
> On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>>
>>
>> On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur  wrote:
>>
>>> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen 
>>> wrote:
>>> > Here is a quick way to test this:
>>> > GlusterFS 3.7.13 volume with default settings, with a brick on a ZFS
>>> dataset. gluster-test1 is the server and gluster-test2 is the client mounting
>>> with FUSE.
>>> >
>>> > Writing file with oflag=direct is not ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>>> count=1 bs=1024000
>>> > dd: failed to open ‘file’: Invalid argument
>>> >
>>> > Enable network.remote-dio on Gluster Volume:
>>> > [root@gluster-test1 gluster]# gluster volume set gluster
>>> network.remote-dio enable
>>> > volume set: success
>>> >
>>> > Writing small file with oflag=direct is ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>>> count=1 bs=1024000
>>> > 1+0 records in
>>> > 1+0 records out
>>> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
>>> >
>>> > Writing bigger file with oflag=direct is ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>>> count=100 bs=1M
>>> > 100+0 records in
>>> > 100+0 records out
>>> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
>>> >
>>> > Enable Sharding on Gluster Volume:
>>> > [root@gluster-test1 gluster]# gluster volume set gluster
>>> features.shard enable
>>> > volume set: success
>>> >
>>> > Writing small file  with oflag=direct is ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>>> count=1 bs=1M
>>> > 1+0 records in
>>> > 1+0 records out
>>> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
>>> >
>>> > Writing bigger file with oflag=direct is not ok:
>>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>>> count=100 bs=1M
>>> > dd: error writing ‘file3’: Operation not permitted
>>> > dd: closing output file ‘file3’: Operation not permitted
>>> >
>>>
>>>
>>> Thank you for these tests! Would it be possible to share the brick and
>>> client logs?
>>>
>>
>> Not sure if his tests are the same as my setup, but here is what I end up with:
>>
>> Volume Name: glustershard
>> Type: Replicate
>> Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
>> Status: Started
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.71.10:/gluster1/shard1/1
>> Brick2: 192.168.71.11:/gluster1/shard2/1
>> Brick3: 192.168.71.12:/gluster1/shard3/1
>> Options Reconfigured:
>> features.shard-block-size: 64MB
>> features.shard: on
>> server.allow-insecure: on
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> performance.stat-prefetch: off
>> performance.io-cache: off
>> performance.quick-read: off
>> cluster.self-heal-window-size: 1024
>> cluster.background-self-heal-count: 16
>> nfs.enable-ino32: off
>> nfs.addr-namelookup: off
>> nfs.disable: on
>> performance.read-ahead: off
>> performance.readdir-ahead: on
>>
>>
>>
>>  dd if=/dev/zero 
>> of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/
>> oflag=direct count=100 bs=1M
>> 81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/ __DIRECT_IO_TEST__
>>.trashcan/
>> [root@ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/
>> 192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
>> oflag=direct count=100 bs=1M
>> dd: error writing 
>> ‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’:
>> Operation not permitted
>>
>> creates the 64M file in the expected location, then the shard is 0
>>
>> # file: gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
>>
>> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x
>> trusted.bit-rot.version=0x0200579231f3000e16e7
>> trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
>> trusted.glusterfs.shard.block-size=0x0400
>>
>> trusted.glusterfs.shard.file-size=0x0401
>>
>>
>> # file: gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
>>
>> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
>> trusted.afr.dirty=0x
>> trusted.gfid=0x2bfd3cc8a727489b9a0474241548fe80
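For completeness, output like the above is typically collected with getfattr, and the brick and client logs live under /var/log/glusterfs/ by default:

    getfattr -d -m . -e hex /gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
    ls /var/log/glusterfs/bricks/    # brick logs, on each server
    ls /var/log/glusterfs/*.log      # client/mount logs, on the client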
>>
>>
>>> Regards,
>>> Vijay
>>> 

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-26 Thread David Gossage
Has a bug report been filed for this issue or should I create one with
the logs and results provided so far?

*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284

On Fri, Jul 22, 2016 at 12:53 PM, David Gossage  wrote:

>
>
>
> On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur  wrote:
>
>> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen 
>> wrote:
>> > Here is a quick way to test this:
>> > GlusterFS 3.7.13 volume with default settings, with a brick on a ZFS
>> dataset. gluster-test1 is the server and gluster-test2 is the client mounting
>> with FUSE.
>> >
>> > Writing file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > dd: failed to open ‘file’: Invalid argument
>> >
>> > Enable network.remote-dio on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> network.remote-dio enable
>> > volume set: success
>> >
>> > Writing small file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct
>> count=1 bs=1024000
>> > 1+0 records in
>> > 1+0 records out
>> > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
>> >
>> > Writing bigger file with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>> count=100 bs=1M
>> > 100+0 records in
>> > 100+0 records out
>> > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
>> >
>> > Enable Sharding on Gluster Volume:
>> > [root@gluster-test1 gluster]# gluster volume set gluster
>> features.shard enable
>> > volume set: success
>> >
>> > Writing small file  with oflag=direct is ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>> count=1 bs=1M
>> > 1+0 records in
>> > 1+0 records out
>> > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
>> >
>> > Writing bigger file with oflag=direct is not ok:
>> > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct
>> count=100 bs=1M
>> > dd: error writing ‘file3’: Operation not permitted
>> > dd: closing output file ‘file3’: Operation not permitted
>> >
>>
>>
>> Thank you for these tests! Would it be possible to share the brick and
>> client logs?
>>
>
> Not sure if his tests are the same as my setup, but here is what I end up with:
>
> Volume Name: glustershard
> Type: Replicate
> Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.71.10:/gluster1/shard1/1
> Brick2: 192.168.71.11:/gluster1/shard2/1
> Brick3: 192.168.71.12:/gluster1/shard3/1
> Options Reconfigured:
> features.shard-block-size: 64MB
> features.shard: on
> server.allow-insecure: on
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.quick-read: off
> cluster.self-heal-window-size: 1024
> cluster.background-self-heal-count: 16
> nfs.enable-ino32: off
> nfs.addr-namelookup: off
> nfs.disable: on
> performance.read-ahead: off
> performance.readdir-ahead: on
>
>
>
>  dd if=/dev/zero 
> of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/
> oflag=direct count=100 bs=1M
> 81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/ __DIRECT_IO_TEST__
>  .trashcan/
> [root@ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/
> 192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
> oflag=direct count=100 bs=1M
> dd: error writing 
> ‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’:
> Operation not permitted
>
> creates the 64M file in the expected location, then the shard is 0
>
> # file: gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
>
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.bit-rot.version=0x0200579231f3000e16e7
> trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
> trusted.glusterfs.shard.block-size=0x0400
>
> trusted.glusterfs.shard.file-size=0x0401
>
>
> # file: gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
>
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.gfid=0x2bfd3cc8a727489b9a0474241548fe80
>
>
>> Regards,
>> Vijay
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes of the today's Gluster Community Bug Triage Meeting

2016-07-26 Thread Muthu Vigneshwaran
Hi all,

Thank you all for the participation and here are the minutes of today's
Gluster Community Bug Triage Meeting:

Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-26/gluster_community_bug_triage_meeting.2016-07-26-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-26/gluster_community_bug_triage_meeting.2016-07-26-12.00.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-26/gluster_community_bug_triage_meeting.2016-07-26-12.00.log.html

Meeting summary
---
* Roll Call  (Muthu_, 12:01:00)

* ndevos need to decide on how to provide/use debug builds  (Muthu_,
  12:04:02)
  * ACTION: ndevos need to decide on how to provide/use debug builds
(Muthu_, 12:04:41)

* Manikandan and gem to wait until Nigel gives access to test the
  scripts  (Muthu_, 12:04:56)
  * ACTION: Nigel to take care of bug automation in the developer
workflow  (Muthu_, 12:06:41)

* jiffin will try to add an error for bug ownership to check-bugs.py
  (Muthu_, 12:06:57)
  * ACTION: jiffin will try to add an error for bug ownership to
check-bugs.py  (Muthu_, 12:08:05)

* Group Triage  (Muthu_, 12:08:32)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage
(Muthu_, 12:08:52)

* Open Floor  (Muthu_, 12:25:51)

Meeting ended at 12:31:27 UTC.

Action Items

* ndevos need to decide on how to provide/use debug builds
* Nigel to take care of bug automation in the developer workflow
* jiffin will try to add an error for bug ownership to check-bugs.py

Action Items, by person
---
  * ndevos need to decide on how to provide/use debug builds
  * Nigel to take care of bug automation in the developer workflow
  * jiffin will try to add an error for bug ownership to check-bugs.py

People Present (lines said)
---
* Muthu_ (34)
* ndevos (15)
* hgowtham (13)
* Manikandan (12)
* kkeithley (6)
* skoduri (3)
* zodbot (3)
* Saravanakmr (3)
* nigelb (1)


--
Regards,
Muthu Vigneshwaran.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting (Today)

2016-07-26 Thread Muthu Vigneshwaran
Hi all,

The weekly Gluster bug triage is about to take place in an hour

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

Appreciate your participation

Regards,
Muthu Vigneshwaran
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?

2016-07-26 Thread Niels de Vos
On Tue, Jul 26, 2016 at 12:43:56PM +0530, Kaushal M wrote:
> On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai  wrote:
> > +1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches
> >
> >  -Prashanth Pai
> >
> > - Original Message -
> >> From: "Mohammed Rafi K C" 
> >> To: "gluster-users" , "Gluster Devel" 
> >> 
> >> Sent: Tuesday, 26 July, 2016 10:44:15 AM
> >> Subject: [Gluster-devel] Need a way to display and flush gluster cache ?
> >>
> >> Hi,
> >>
> >> The Gluster stack has its own caching mechanism, mostly on the client side.
> >> But there is no concrete method to see how much memory is being consumed by
> >> gluster for caching, and if needed there is no way to flush the cache
> >> memory.
> >>
> >> So my first question is: do we need to implement these two features
> >> for the gluster cache?
> >>
> >>
> >> If so, I would like to discuss some of our thoughts towards it.
> >>
> >> (If you are not interested in the implementation discussion, you can skip
> >> this part :)
> >>
> >> 1) Implement a virtual xattr on the root: on setxattr, flush all
> >> the cache, and on getxattr print the aggregated cache size.
> >>
> >> 2) The gluster native client currently supports a .meta virtual directory to
> >> get metadata information, analogous to proc. We can implement a
> >> virtual file inside the .meta directory to read the cache size. We can also
> >> flush the cache using a special write into that file (similar to
> >> echoing into a proc file). This approach may be difficult to implement in
> >> other clients.
> 
> +1 for making use of the meta-xlator. We should be making more use of it.

Indeed, this would be nice. Maybe this can also expose the memory
allocations like /proc/slabinfo.

The io-stats xlator can dump some statistics to
/var/log/glusterfs/samples/ and /var/lib/glusterd/stats/ . That seems to
be acceptable too, and allows getting statistics from server-side
processes without involving any clients.
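As a sketch, the per-brick counters can already be pulled without touching any client via volume profiling (VOLNAME is a placeholder; the sample-dump interval options mentioned above vary by version):

    gluster volume profile VOLNAME start
    gluster volume profile VOLNAME info    # per-brick FOP counts and latencies
    gluster volume profile VOLNAME stop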

HTH,
Niels


> 
> >>
> >> 3) A CLI command to display and flush the data with IP and port as
> >> arguments. GlusterD would need to send the op to the client from the connected
> >> client list. But this approach would be difficult to implement for
> >> libgfapi-based clients. To me, it doesn't seem to be a good option.
> >>
> >> Your suggestions and comments are most welcome.
> >>
> >> Thanks to Talur and Poornima for their suggestions.
> >>
> >> Regards
> >>
> >> Rafi KC
> >>
> >> ___
> >> Gluster-devel mailing list
> >> gluster-de...@gluster.org
> >> http://www.gluster.org/mailman/listinfo/gluster-devel
> >>
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users


signature.asc
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?

2016-07-26 Thread Kaushal M
On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai  wrote:
> +1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches
>
>  -Prashanth Pai
>
> - Original Message -
>> From: "Mohammed Rafi K C" 
>> To: "gluster-users" , "Gluster Devel" 
>> 
>> Sent: Tuesday, 26 July, 2016 10:44:15 AM
>> Subject: [Gluster-devel] Need a way to display and flush gluster cache ?
>>
>> Hi,
>>
>> The Gluster stack has its own caching mechanism, mostly on the client side.
>> But there is no concrete method to see how much memory is being consumed by
>> gluster for caching, and if needed there is no way to flush the cache memory.
>>
>> So my first question is: do we need to implement these two features
>> for the gluster cache?
>>
>>
>> If so, I would like to discuss some of our thoughts towards it.
>>
>> (If you are not interested in the implementation discussion, you can skip
>> this part :)
>>
>> 1) Implement a virtual xattr on the root: on setxattr, flush all
>> the cache, and on getxattr print the aggregated cache size.
>>
>> 2) The gluster native client currently supports a .meta virtual directory to
>> get metadata information, analogous to proc. We can implement a
>> virtual file inside the .meta directory to read the cache size. We can also
>> flush the cache using a special write into that file (similar to
>> echoing into a proc file). This approach may be difficult to implement in
>> other clients.

+1 for making use of the meta-xlator. We should be making more use of it.

>>
>> 3) A CLI command to display and flush the data with IP and port as
>> arguments. GlusterD would need to send the op to the client from the connected
>> client list. But this approach would be difficult to implement for
>> libgfapi-based clients. To me, it doesn't seem to be a good option.
>>
>> Your suggestions and comments are most welcome.
>>
>> Thanks to Talur and Poornima for their suggestions.
>>
>> Regards
>>
>> Rafi KC
>>
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?

2016-07-26 Thread Prashanth Pai
+1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches
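That is, the same pattern as the kernel page-cache knob, where writing to a control file drops the caches:

    sync
    echo 3 > /proc/sys/vm/drop_caches    # 1=pagecache, 2=dentries+inodes, 3=both

A gluster equivalent under .meta could behave the same way.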

 -Prashanth Pai

- Original Message -
> From: "Mohammed Rafi K C" 
> To: "gluster-users" , "Gluster Devel" 
> 
> Sent: Tuesday, 26 July, 2016 10:44:15 AM
> Subject: [Gluster-devel] Need a way to display and flush gluster cache ?
> 
> Hi,
> 
> The Gluster stack has its own caching mechanism, mostly on the client side.
> But there is no concrete method to see how much memory is being consumed by
> gluster for caching, and if needed there is no way to flush the cache memory.
>
> So my first question is: do we need to implement these two features
> for the gluster cache?
>
>
> If so, I would like to discuss some of our thoughts towards it.
>
> (If you are not interested in the implementation discussion, you can skip
> this part :)
>
> 1) Implement a virtual xattr on the root: on setxattr, flush all
> the cache, and on getxattr print the aggregated cache size.
>
> 2) The gluster native client currently supports a .meta virtual directory to
> get metadata information, analogous to proc. We can implement a
> virtual file inside the .meta directory to read the cache size. We can also
> flush the cache using a special write into that file (similar to
> echoing into a proc file). This approach may be difficult to implement in
> other clients.
> 
> 3) A CLI command to display and flush the data with IP and port as
> arguments. GlusterD would need to send the op to the client from the connected
> client list. But this approach would be difficult to implement for
> libgfapi-based clients. To me, it doesn't seem to be a good option.
> 
> Your suggestions and comments are most welcome.
> 
> Thanks to Talur and Poornima for their suggestions.
> 
> Regards
> 
> Rafi KC
> 
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users