Re: [Gluster-users] Too many open files

2016-07-28 Thread Pranith Kumar Karampuri
On Fri, Jul 29, 2016 at 8:49 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Fri, Jul 29, 2016 at 8:43 AM, Gmail  wrote:
>
>> I’ve already done that in the previous e-mail.
>>
>
> Those were statedump files; the command failed and said to check the logs
> for details, so I was asking for the logfiles as well.
>

I went through the statedump files and don't see any open fds. Could you
check whether a lot of files are open in the output of 'ls -l
/proc//fd'? Please attach this output
as well. Are you still seeing this issue, or does it happen only when you
run the application?
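
For reference, a minimal way to gather that information on a brick node is
something like the following (the brick path and volume name are taken from
the logs above; /var/run/gluster is only the default statedump location, so
adjust if server.statedump-path has been changed):

# BRICK_PID=$(pgrep -f 'glusterfsd.*brick02' | head -n 1)
# ls /proc/$BRICK_PID/fd | wc -l
# ls -l /proc/$BRICK_PID/fd | awk '{print $NF}' | sort | uniq -c | sort -rn | head
# grep 'Max open files' /proc/$BRICK_PID/limits
# gluster volume statedump gvol001
# ls -l /var/run/gluster/*.dump.*

If the fd count is close to the 'Max open files' value, that would explain
the "Too many open files" errors in the brick log.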


>
>>
>> *— Bishoy*
>>
>> On Jul 28, 2016, at 3:20 PM, Gmail  wrote:
>>
>> 
>> 
>>
>> On Jul 27, 2016, at 4:54 PM, Pranith Kumar Karampuri 
>> wrote:
>>
>>
>>
>> On Wed, Jul 27, 2016 at 11:26 PM, Gmail  wrote:
>>
>>> Hello All,
>>>
>>> I keep getting the following errors accompanied by the bricks in the
>>> same replica group going offline when trying to sync files between two
>>> volumes located in two different geographic locations using Csync2.
>>>
>>> [2016-07-27 17:07:06.701575] E [MSGID: 113015]
>>> [posix.c:1011:posix_opendir] 0-gvol001-posix: opendir failed on
>>> /zfsvol/brick02/gvol001/.glusterfs/f2/28/f228f015-2bf4-43a3-bfa6-803f282f13af/man3
>>> [Too many open files]
>>> [2016-07-27 17:07:06.701643] E [MSGID: 115056]
>>> [server-rpc-fops.c:690:server_opendir_cbk] 0-gvol001-server: 46179928:
>>> OPENDIR
>>> /perl-5.18.1/cpan/Test-Harness/blib/man3
>>> (10bc989e-395f-44ec-911e-746335f567a0) ==> (Too many open files) [Too many
>>> open files]
>>> [2016-07-27 17:07:06.702574] E [MSGID: 113099]
>>> [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file
>>> /zfsvol/brick02/gvol001/.glusterfs/7f/af/7faf6eff-4036-4603-a974-7e95c8fdd568
>>> failed [Too many open files]
>>> [2016-07-27 17:07:06.709581] E [MSGID: 113099]
>>> [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file
>>> /zfsvol/brick02/gvol001/.glusterfs/dd/d2/ddd2c86e-c296-463c-9d03-2da596bb9f88
>>> failed [Too many open files]
>>>
>>
>> 1) Maybe there is a bug in the brick stack which forgets to close a file.
>> 2) Maybe Csync2 doesn't close the files after syncing.
>> Could you let us know the version of gluster and also statedump of the
>> bricks using
>> https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
>>
>> I’ve tried to collect the logs as per the documentation provided, but not
>> all of the logs were successfully collected.
>>
>> # gluster volume statedump gvol001
>> volume statedump: failed: Commit failed on . Please check log
>> file for details.
>>
>>
>>
>>>
>>> *—Bishoy*
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>>
>> --
>> Pranith
>>
>>
>>
>
>
> --
> Pranith
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Too many open files

2016-07-28 Thread Pranith Kumar Karampuri
On Fri, Jul 29, 2016 at 8:43 AM, Gmail  wrote:

> I’ve already done that in the previous e-mail.
>

Those were statedump files; the command failed and said to check the logs
for details, so I was asking for the logfiles as well.


>
> *— Bishoy*
>
> On Jul 28, 2016, at 3:20 PM, Gmail  wrote:
>
> 
> 
>
> On Jul 27, 2016, at 4:54 PM, Pranith Kumar Karampuri 
> wrote:
>
>
>
> On Wed, Jul 27, 2016 at 11:26 PM, Gmail  wrote:
>
>> Hello All,
>>
>> I keep getting the following errors accompanied by the bricks in the same
>> replica group going offline when trying to sync files between two volumes
>> located in two different geographic locations using Csync2.
>>
>> [2016-07-27 17:07:06.701575] E [MSGID: 113015]
>> [posix.c:1011:posix_opendir] 0-gvol001-posix: opendir failed on
>> /zfsvol/brick02/gvol001/.glusterfs/f2/28/f228f015-2bf4-43a3-bfa6-803f282f13af/man3
>> [Too many open files]
>> [2016-07-27 17:07:06.701643] E [MSGID: 115056]
>> [server-rpc-fops.c:690:server_opendir_cbk] 0-gvol001-server: 46179928:
>> OPENDIR
>> /perl-5.18.1/cpan/Test-Harness/blib/man3
>> (10bc989e-395f-44ec-911e-746335f567a0) ==> (Too many open files) [Too many
>> open files]
>> [2016-07-27 17:07:06.702574] E [MSGID: 113099]
>> [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file
>> /zfsvol/brick02/gvol001/.glusterfs/7f/af/7faf6eff-4036-4603-a974-7e95c8fdd568
>> failed [Too many open files]
>> [2016-07-27 17:07:06.709581] E [MSGID: 113099]
>> [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file
>> /zfsvol/brick02/gvol001/.glusterfs/dd/d2/ddd2c86e-c296-463c-9d03-2da596bb9f88
>> failed [Too many open files]
>>
>
> 1) Maybe there is a bug in the brick stack which forgets to close a file.
> 2) Maybe Csync2 doesn't close the files after syncing.
> Could you let us know the version of gluster and also statedump of the
> bricks using
> https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
>
> I’ve tried to collect the logs as per the documentation provided, but not
> all of the logs were successfully collected.
>
> # gluster volume statedump gvol001
> volume statedump: failed: Commit failed on . Please check log
> file for details.
>
>
>
>>
>> *—Bishoy*
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Pranith
>
>
>


-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Too many open files

2016-07-28 Thread Gmail
I’ve already done that in the previous e-mail.

— Bishoy

> On Jul 28, 2016, at 3:20 PM, Gmail  wrote:
> 
> 
>> On Jul 27, 2016, at 4:54 PM, Pranith Kumar Karampuri wrote:
>> 
>> 
>> 
>> On Wed, Jul 27, 2016 at 11:26 PM, Gmail wrote:
>> Hello All,
>> 
>> I keep getting the following errors accompanied by the bricks in the same 
>> replica group going offline when trying to sync files between two volumes 
>> located in two different geographic locations using Csync2.
>> 
>> [2016-07-27 17:07:06.701575] E [MSGID: 113015] [posix.c:1011:posix_opendir] 
>> 0-gvol001-posix: opendir failed on 
>> /zfsvol/brick02/gvol001/.glusterfs/f2/28/f228f015-2bf4-43a3-bfa6-803f282f13af/man3
>>  [Too many open files]
>> [2016-07-27 17:07:06.701643] E [MSGID: 115056] 
>> [server-rpc-fops.c:690:server_opendir_cbk] 0-gvol001-server: 46179928: 
>> OPENDIR 
>> /perl-5.18.1/cpan/Test-Harness/blib/man3
>>  (10bc989e-395f-44ec-911e-746335f567a0) ==> (Too many open files) [Too many 
>> open files]
>> [2016-07-27 17:07:06.702574] E [MSGID: 113099] 
>> [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file 
>> /zfsvol/brick02/gvol001/.glusterfs/7f/af/7faf6eff-4036-4603-a974-7e95c8fdd568
>>  failed [Too many open files]
>> [2016-07-27 17:07:06.709581] E [MSGID: 113099] 
>> [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file 
>> /zfsvol/brick02/gvol001/.glusterfs/dd/d2/ddd2c86e-c296-463c-9d03-2da596bb9f88
>>  failed [Too many open files]
>> 
>> 1) Maybe there is a bug in the brick stack which forgets to close a file.
>> 2) Maybe Csync2 doesn't close the files after syncing.
>> Could you let us know the version of gluster and also statedump of the 
>> bricks using 
>> https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/ 
>> 
> I’ve tried to collect the logs as per the documentation provided, but not all 
> of the logs were successfully collected.
> 
> # gluster volume statedump gvol001
> volume statedump: failed: Commit failed on . Please check log file 
> for details.
>> 
>> 
>> 
>> —Bishoy
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org 
>> http://www.gluster.org/mailman/listinfo/gluster-users 
>> 
>> 
>> 
>> 
>> -- 
>> Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Too many open files

2016-07-28 Thread Pranith Kumar Karampuri
On Fri, Jul 29, 2016 at 3:50 AM, Gmail  wrote:

>
>
>
> On Jul 27, 2016, at 4:54 PM, Pranith Kumar Karampuri 
> wrote:
>
>
>
> On Wed, Jul 27, 2016 at 11:26 PM, Gmail  wrote:
>
>> Hello All,
>>
>> I keep getting the following errors accompanied by the bricks in the same
>> replica group going offline when trying to sync files between two volumes
>> located in two different geographic locations using Csync2.
>>
>> [2016-07-27 17:07:06.701575] E [MSGID: 113015]
>> [posix.c:1011:posix_opendir] 0-gvol001-posix: opendir failed on
>> /zfsvol/brick02/gvol001/.glusterfs/f2/28/f228f015-2bf4-43a3-bfa6-803f282f13af/man3
>> [Too many open files]
>> [2016-07-27 17:07:06.701643] E [MSGID: 115056]
>> [server-rpc-fops.c:690:server_opendir_cbk] 0-gvol001-server: 46179928:
>> OPENDIR
>> /perl-5.18.1/cpan/Test-Harness/blib/man3
>> (10bc989e-395f-44ec-911e-746335f567a0) ==> (Too many open files) [Too many
>> open files]
>> [2016-07-27 17:07:06.702574] E [MSGID: 113099]
>> [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file
>> /zfsvol/brick02/gvol001/.glusterfs/7f/af/7faf6eff-4036-4603-a974-7e95c8fdd568
>> failed [Too many open files]
>> [2016-07-27 17:07:06.709581] E [MSGID: 113099]
>> [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file
>> /zfsvol/brick02/gvol001/.glusterfs/dd/d2/ddd2c86e-c296-463c-9d03-2da596bb9f88
>> failed [Too many open files]
>>
>
> 1) Maybe there is a bug in the brick stack which forgets to close a file.
> 2) Maybe Csync2 doesn't close the files after syncing.
> Could you let us know the version of gluster and also statedump of the
> bricks using
> https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
>
> I’ve tried to collect the logs as per the documentation provided, but not
> all of the logs were successfully collected.
>
> # gluster volume statedump gvol001
> volume statedump: failed: Commit failed on . Please check log
> file for details.
>

Could you attach the logfiles for us to take a look?
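
In case it helps with collecting them: in a default installation the glusterd
log and the brick logs live under /var/log/glusterfs, and the brick log name
is normally derived from the brick path (the paths below assume the logs have
not been relocated):

# grep -i statedump /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail
# ls /var/log/glusterfs/bricks/
# tail -n 50 /var/log/glusterfs/bricks/zfsvol-brick02-gvol001.log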


>
>
>>
>> *—Bishoy*
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Pranith
>
>
>
>


-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Too many open files

2016-07-28 Thread Gmail


glusterdump.glusterd.dump.1469742056
Description: Binary data


glusterdump.glusterfs.dump.1469742040
Description: Binary data
On Jul 27, 2016, at 4:54 PM, Pranith Kumar Karampuri  wrote:

On Wed, Jul 27, 2016 at 11:26 PM, Gmail  wrote:

Hello All,

I keep getting the following errors accompanied by the bricks in the same replica group going offline when trying to sync files between two volumes located in two different geographic locations using Csync2.

[2016-07-27 17:07:06.701575] E [MSGID: 113015] [posix.c:1011:posix_opendir] 0-gvol001-posix: opendir failed on /zfsvol/brick02/gvol001/.glusterfs/f2/28/f228f015-2bf4-43a3-bfa6-803f282f13af/man3 [Too many open files]
[2016-07-27 17:07:06.701643] E [MSGID: 115056] [server-rpc-fops.c:690:server_opendir_cbk] 0-gvol001-server: 46179928: OPENDIR /perl-5.18.1/cpan/Test-Harness/blib/man3 (10bc989e-395f-44ec-911e-746335f567a0) ==> (Too many open files) [Too many open files]
[2016-07-27 17:07:06.702574] E [MSGID: 113099] [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file /zfsvol/brick02/gvol001/.glusterfs/7f/af/7faf6eff-4036-4603-a974-7e95c8fdd568 failed [Too many open files]
[2016-07-27 17:07:06.709581] E [MSGID: 113099] [posix-helpers.c:389:_posix_xattr_get_set] 0-gvol001-posix: Opening file /zfsvol/brick02/gvol001/.glusterfs/dd/d2/ddd2c86e-c296-463c-9d03-2da596bb9f88 failed [Too many open files]

1) Maybe there is a bug in the brick stack which forgets to close a file.
2) Maybe Csync2 doesn't close the files after syncing.
Could you let us know the version of gluster and also a statedump of the bricks using https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/

I've tried to collect the logs as per the documentation provided, but not all of the logs were successfully collected.

# gluster volume statedump gvol001
volume statedump: failed: Commit failed on . Please check log file for details.

—Bishoy

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] [gluster-users] Gluster Community Newsletter, July 2016

2016-07-28 Thread Amye Scavarda
Important happenings for Gluster this month:
First stable update for 3.8 is available, GlusterFS 3.8.1 fixes several
bugs:
http://blog.gluster.org/2016/07/first-stable-update-for-3-8-is-available-glusterfs-3-8-1-fixes-several-bugs/



Gluster Developers Summit:

October 6, 7, 2016 directly following LinuxCon Berlin

This is an invite-only event, but you can apply for an invitation.

Deadline for application is July 31, 2016.
Apply for an invitation:
http://goo.gl/forms/JOEzoimW9qVV4jdz1

Gluster-users:
http://www.gluster.org/pipermail/gluster-users/2016-June/027135.html -
Aravinda kicks off GlusterFS 3.9 Planning
http://www.gluster.org/pipermail/gluster-users/2016-June/027157.html -
 Pranith gives more details about using Gluster as a distributed block
store with Kubernetes
http://www.gluster.org/pipermail/gluster-users/2016-July/027222.html  -
B.K.Raghuram follows up with the design behind a distributed iscsi
implementation
http://www.gluster.org/pipermail/gluster-users/2016-June/026962.html,
http://www.gluster.org/pipermail/gluster-users/2016-July/027259.html -
Gandalf Corvotempesta's longer discussion around RAID
http://www.gluster.org/pipermail/gluster-users/2016-July/027370.html -
Lindsay Mathieson follows up on 3.7.13 & proxmox/qemu
http://www.gluster.org/pipermail/gluster-users/2016-July/027386.html -
Dmitry Melekhov with questions around self-healing
http://www.gluster.org/pipermail/gluster-users/2016-July/027682.html -
Kaushal starts a conversation about 3.6 End of Life

Gluster-devel:
http://www.gluster.org/pipermail/gluster-devel/2016-July/049981.html - Jeff
Darcy on Securing GlusterD management with a question around which version
this feature should go into
http://www.gluster.org/pipermail/gluster-devel/2016-July/050075.html -
Aravindra asks for assistance from maintainers for Gluster Events API -
Help required to identify the list of Events from each component
http://www.gluster.org/pipermail/gluster-devel/2016-July/050127.html -
Poornima Gurusiddaiah starts a conversation about regression test failures
http://www.gluster.org/pipermail/gluster-devel/2016-July/050137.html -
Jonathan Holloway proposes a framework to leverage existing Python unit
test standards for our testing
http://www.gluster.org/pipermail/gluster-devel/2016-July/050291.html - Niels
de Vos has Suggestions for improving the block/gluster driver in QEMU

Gluster-infra:
http://www.gluster.org/pipermail/gluster-infra/2016-July/002520.html - Nigel
provides Gluster Infra Updates - July
http://www.gluster.org/pipermail/gluster-infra/2016-July/002461.html -
Michael Scherer starts a conversation around move of the ci.gluster.org
server from one location to another location

Top 5 contributors:
Kotresh H R, Niels de Vos, Garrett LeSage, Atin Mukherjee, Aravinda VK

Upcoming CFPs:
FOSDEM: https://fosdem.org/2017/news/2016-07-20-call-for-participation/  -
4 & 5 February 2017
DevConf - http://www.devconf.cz/ - Jan 27-29

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Clarification about remove-brick

2016-07-28 Thread Lenovo Lastname
I'm using 3.7.11; this command works for me:
!remove-brick
[root@node2 ~]# gluster volume remove-brick v1 replica 2 
192.168.3.73:/gfs/b1/v1 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
Don't know about the commit thingy... 
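
For what it's worth, my understanding of the documented workflow for removing
one replica pair from a distributed-replicate volume (the volume and brick
names below are made up) is roughly:

# gluster volume remove-brick VOLNAME server1:/bricks/b3 server2:/bricks/b3 start
# gluster volume remove-brick VOLNAME server1:/bricks/b3 server2:/bricks/b3 status
# gluster volume remove-brick VOLNAME server1:/bricks/b3 server2:/bricks/b3 commit

'start' begins migrating data off the bricks being removed, 'commit' is run
once 'status' reports completed, and replacing 'commit' with 'stop' aborts the
operation before it is committed. 'force' (as in the command above) removes
the bricks immediately without migrating the data.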

On Thursday, July 28, 2016 3:47 PM, Richard Klein (RSI)  
wrote:
 

We are using Gluster 3.7.6 in a replica 2 distributed-replicate configuration.
I am wondering, when we do a remove-brick with just one brick pair, will the
data be moved off the bricks once the status shows complete and then you do the
commit?  Also, if you start a remove-brick process, can you stop it?  Is there
an abort or stop command, or do you just not do the commit?  Any help would be
appreciated.

Richard Klein
RSI
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Clarification about remove-brick

2016-07-28 Thread Richard Klein (RSI)
We are using Gluster 3.7.6 in a replica 2 distributed-replicate configuration.
I am wondering, when we do a remove-brick with just one brick pair, will the data
be moved off the bricks once the status shows complete and then you do the
commit?  Also, if you start a remove-brick process, can you stop it?  Is there
an abort or stop command, or do you just not do the commit?

Any help would be appreciated.

Richard Klein
RSI



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Glusterfs Distributed-Replicated local self-mounts

2016-07-28 Thread tecforte jason
*Hi Pranith,*

*Thank you for your guide!*

*Best Regards,*
*Jason*

On Thu, Jul 28, 2016 at 11:09 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Thu, Jul 28, 2016 at 2:10 PM, tecforte jason 
> wrote:
>
>> *Hi Pranith,*
>>
>> *Thank you for your reply!*
>>
>> I read the article (https://brainpan.io/wiki/GlusterFS_Build_Steps); it
>> emphasizes "This distribution and replication would be used when your
>> clients are external to the cluster, not local self-mounts." Does that
>> refer to the gluster nfs-server and nfs-client being on the same machine?
>>
>>
>> By the way, if I have 2 nodes, each with 3 physical disks inside, and each
>> physical disk is 1 TB, and I want to create a replica 2 volume, is the
>> command below correct?
>>
>> gluster volume create test-volume replica 2
>> node1:/exp1/brick1 node2:/exp2/brick2
>> node1:/exp1/brick3 node2:/exp2/brick4
>> node1:/exp1/brick5 node2:/exp2/brick6
>>
>> Below is what I am expecting:
>>
>>- usable space is 3 TB
>>- replica 3TB
>>- node1:/exp1/brick1 is replicated with node2:/exp2/brick2
>>- node1:/exp1/brick3 is replicated with node2:/exp2/brick4
>>- node1:/exp1/brick5  is replicated with node2:/exp2/brick6
>>
>>
>> Is my expectation above valid? I am using Gluster 3.8.
>>
>>
>> Appreciate for any advice and suggestion.
>>
>>
> Looks good to me.
>
> You can mount this volume using fuse on both the nodes.
>
> mount -t glusterfs node1:/testvolume 
>
> Please don't hesitate to include gluster-users email-address in CC, it may
> help someone who is googling for the exact question you are looking for an
> answer to in the future.
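
Putting the pieces above together, a sketch of the whole sequence for the
layout being discussed (brick paths as in the earlier message; the mount
point is made up):

# gluster peer probe node2
# gluster volume create test-volume replica 2 \
    node1:/exp1/brick1 node2:/exp2/brick2 \
    node1:/exp1/brick3 node2:/exp2/brick4 \
    node1:/exp1/brick5 node2:/exp2/brick6
# gluster volume start test-volume
# gluster volume info test-volume
# mount -t glusterfs node1:/test-volume /mnt/test-volume

With replica 2, consecutive bricks on the create command line form the
replica sets, so brick1/brick2, brick3/brick4 and brick5/brick6 pair up
exactly as expected above, for 3 TB of usable space.
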
>
>
>> Best Regards,
>>
>> Jason
>>
>>
>> On Thu, Jul 28, 2016 at 3:34 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Jul 28, 2016 at 12:29 PM, tecforte jason <
>>> tecforte.ja...@gmail.com> wrote:
>>>
 Hi,

 Can i do local self-mounts on Glusterfs Distributed-Replicated mode?

>>>
>>> Are you asking for fuse mounts on the same servers which are used as
>>> bricks? Yeah this is not a problem. But gluster nfs-server and nfs-client
>>> on same machine will lead to hangs. This is not particular to gluster. Even
>>> nfs client and nfs server on same machine leads to hangs.
>>>
>>>

 Much appreciate for any advice and suggestion.

 Thanks.
 Jason

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>
>>
>
>
> --
> Pranith
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS on EXT4 is stable enough

2016-07-28 Thread Pranith Kumar Karampuri
On Thu, Jul 28, 2016 at 3:40 PM, tecforte jason 
wrote:

> Hi All,
>
> May I know whether the latest GlusterFS version on EXT4 is stable enough?
>

We haven't heard any complaints about this at all since 3.5, I guess, which is
where we fixed 64-bit offsets for readdir.
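
For anyone setting this up, preparing an ext4 brick is the usual
mkfs/mount/subdirectory sequence; a rough sketch with made-up device, path
and volume names:

# mkfs.ext4 /dev/sdb1
# mkdir -p /data/brick1
# mount /dev/sdb1 /data/brick1
# mkdir -p /data/brick1/gvol
# gluster volume create gvol replica 2 node1:/data/brick1/gvol node2:/data/brick1/gvol
# gluster volume start gvol

Using a subdirectory of the mount (rather than the mount point itself) as the
brick is the usual recommendation, so that a brick whose filesystem failed to
mount errors out instead of silently writing to the root disk.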


>
> Much appreciate for any advice / guide.
>
> Thanks.
>
> Best Regards,
> Jason
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?

2016-07-28 Thread Mohammed Rafi K C


On 07/28/2016 07:56 PM, Niels de Vos wrote:
> On Thu, Jul 28, 2016 at 05:58:15PM +0530, Mohammed Rafi K C wrote:
>>
>> On 07/27/2016 04:33 PM, Raghavendra G wrote:
>>>
>>> On Wed, Jul 27, 2016 at 10:29 AM, Mohammed Rafi K C wrote:
>>>
>>> Thanks for your feedback.
>>>
>>> In fact meta xlator is loaded only on fuse mount, is there any
>>> particular reason not to use the meta-autoload xlator for nfs server
>>> and libgfapi ?
>>>
>>>
>>> I think its because of lack of resources. I am not aware of any
>>> technical reason for not using on NFSv3 server and gfapi.
>> Cool. I will try to see how we can implement the meta-autoload feature for
>> nfs-server and libgfapi. Once we have the feature in place, I will
>> implement the cache memory display/flush feature using meta xlators.
> In case you plan to have this ready in a month (before the end of
> August), you should propose it as a 3.9 feature. Click the "Edit this
> page on GitHub" link at the bottom of
> https://www.gluster.org/community/roadmap/3.9/ :)
I will do an assessment and see if I can spend some time on this for the
3.9 release. If so, I will add it to the 3.9 feature page.

Regards
Rafi KC

>
> Thanks,
> Niels
>
>
>> Thanks for your valuable feedback.
>> Rafi KC
>>
>>>  
>>>
>>> Regards
>>>
>>> Rafi KC
>>>
>>> On 07/26/2016 04:05 PM, Niels de Vos wrote:
 On Tue, Jul 26, 2016 at 12:43:56PM +0530, Kaushal M wrote:
> On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai  
>  wrote:
>> +1 to option (2) which similar to echoing into 
>> /proc/sys/vm/drop_caches
>>
>>  -Prashanth Pai
>>
>> - Original Message -
>>> From: "Mohammed Rafi K C"  
>>> 
>>> To: "gluster-users"  
>>> , "Gluster Devel" 
>>>  
>>> Sent: Tuesday, 26 July, 2016 10:44:15 AM
>>> Subject: [Gluster-devel] Need a way to display and flush gluster 
>>> cache ?
>>>
>>> Hi,
>>>
>>> Gluster stack has it's own caching mechanism , mostly on client 
>>> side.
>>> But there is no concrete method to see how much memory are 
>>> consuming by
>>> gluster for caching and if needed there is no way to flush the 
>>> cache memory.
>>>
>>> So my first question is, Do we require to implement this two 
>>> features
>>> for gluster cache?
>>>
>>>
>>> If so I would like to discuss some of our thoughts towards it.
>>>
>>> (If you are not interested in implementation discussion, you can 
>>> skip
>>> this part :)
>>>
>>> 1) Implement a virtual xattr on root, and on doing setxattr, flush 
>>> all
>>> the cache, and for getxattr we can print the aggregated cache size.
>>>
>>> 2) Currently in gluster native client support .meta virtual 
>>> directory to
>>> get meta data information as analogues to proc. we can implement a
>>> virtual file inside the .meta directory to read  the cache size. 
>>> Also we
>>> can flush the cache using a special write into the file, (similar to
>>> echoing into proc file) . This approach may be difficult to 
>>> implement in
>>> other clients.
> +1 for making use of the meta-xlator. We should be making more use of 
> it.
 Indeed, this would be nice. Maybe this can also expose the memory
 allocations like /proc/slabinfo.

 The io-stats xlator can dump some statistics to
 /var/log/glusterfs/samples/ and /var/lib/glusterd/stats/ . That seems 
 to
 be acceptible too, and allows to get statistics from server-side
 processes without involving any clients.

 HTH,
 Niels


>>> 3) A cli command to display and flush the data with ip and port as 
>>> an
>>> argument. GlusterD need to send the op to client from the connected
>>> client list. But this approach would be difficult to implement for
>>> libgfapi based clients. For me, it doesn't seems to be a good 
>>> option.
>>>
>>> Your suggestions and comments are most welcome.
>>>
>>> Thanks to Talur and Poornima for their suggestions.
>>>
>>> Regards
>>>
>>> Rafi KC
>>>
>>> ___
>>> Gluster-devel mailing list
>>> gluster-de...@gluster.org 
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org 

Re: [Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?

2016-07-28 Thread Niels de Vos
On Thu, Jul 28, 2016 at 05:58:15PM +0530, Mohammed Rafi K C wrote:
> 
> 
> On 07/27/2016 04:33 PM, Raghavendra G wrote:
> >
> >
> > On Wed, Jul 27, 2016 at 10:29 AM, Mohammed Rafi K C wrote:
> >
> > Thanks for your feedback.
> >
> > In fact meta xlator is loaded only on fuse mount, is there any
> > particular reason not to use the meta-autoload xlator for nfs server
> > and libgfapi ?
> >
> >
> > I think its because of lack of resources. I am not aware of any
> > technical reason for not using on NFSv3 server and gfapi.
> 
> Cool. I will try to see how we can implement the meta-autoload feature for
> nfs-server and libgfapi. Once we have the feature in place, I will
> implement the cache memory display/flush feature using meta xlators.

In case you plan to have this ready in a month (before the end of
August), you should propose it as a 3.9 feature. Click the "Edit this
page on GitHub" link at the bottom of
https://www.gluster.org/community/roadmap/3.9/ :)

Thanks,
Niels


> 
> Thanks for your valuable feedback.
> Rafi KC
> 
> >  
> >
> > Regards
> >
> > Rafi KC
> >
> > On 07/26/2016 04:05 PM, Niels de Vos wrote:
> >> On Tue, Jul 26, 2016 at 12:43:56PM +0530, Kaushal M wrote:
> >>> On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai  
> >>>  wrote:
>  +1 to option (2) which similar to echoing into 
>  /proc/sys/vm/drop_caches
> 
>   -Prashanth Pai
> 
>  - Original Message -
> > From: "Mohammed Rafi K C"  
> > 
> > To: "gluster-users"  
> > , "Gluster Devel" 
> >  
> > Sent: Tuesday, 26 July, 2016 10:44:15 AM
> > Subject: [Gluster-devel] Need a way to display and flush gluster 
> > cache ?
> >
> > Hi,
> >
> > Gluster stack has it's own caching mechanism , mostly on client 
> > side.
> > But there is no concrete method to see how much memory are 
> > consuming by
> > gluster for caching and if needed there is no way to flush the 
> > cache memory.
> >
> > So my first question is, Do we require to implement this two 
> > features
> > for gluster cache?
> >
> >
> > If so I would like to discuss some of our thoughts towards it.
> >
> > (If you are not interested in implementation discussion, you can 
> > skip
> > this part :)
> >
> > 1) Implement a virtual xattr on root, and on doing setxattr, flush 
> > all
> > the cache, and for getxattr we can print the aggregated cache size.
> >
> > 2) Currently in gluster native client support .meta virtual 
> > directory to
> > get meta data information as analogues to proc. we can implement a
> > virtual file inside the .meta directory to read  the cache size. 
> > Also we
> > can flush the cache using a special write into the file, (similar to
> > echoing into proc file) . This approach may be difficult to 
> > implement in
> > other clients.
> >>> +1 for making use of the meta-xlator. We should be making more use of 
> >>> it.
> >> Indeed, this would be nice. Maybe this can also expose the memory
> >> allocations like /proc/slabinfo.
> >>
> >> The io-stats xlator can dump some statistics to
> >> /var/log/glusterfs/samples/ and /var/lib/glusterd/stats/ . That seems 
> >> to
> >> be acceptible too, and allows to get statistics from server-side
> >> processes without involving any clients.
> >>
> >> HTH,
> >> Niels
> >>
> >>
> > 3) A cli command to display and flush the data with ip and port as 
> > an
> > argument. GlusterD need to send the op to client from the connected
> > client list. But this approach would be difficult to implement for
> > libgfapi based clients. For me, it doesn't seems to be a good 
> > option.
> >
> > Your suggestions and comments are most welcome.
> >
> > Thanks to Talur and Poornima for their suggestions.
> >
> > Regards
> >
> > Rafi KC
> >
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org 
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
>  ___
>  Gluster-devel mailing list
>  gluster-de...@gluster.org 
>  http://www.gluster.org/mailman/listinfo/gluster-devel
> >>> ___
> >>> Gluster-users mailing list
> >>> Gluster-users@gluster.org 

Re: [Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?

2016-07-28 Thread Mohammed Rafi K C


On 07/27/2016 04:33 PM, Raghavendra G wrote:
>
>
> On Wed, Jul 27, 2016 at 10:29 AM, Mohammed Rafi K C wrote:
>
> Thanks for your feedback.
>
> In fact meta xlator is loaded only on fuse mount, is there any
> particular reason not to use the meta-autoload xlator for nfs server
> and libgfapi ?
>
>
> I think its because of lack of resources. I am not aware of any
> technical reason for not using on NFSv3 server and gfapi.

Cool. I will try to see how we can implement the meta-autoload feature for
nfs-server and libgfapi. Once we have the feature in place, I will
implement the cache memory display/flush feature using meta xlators.

Thanks for your valuable feedback.
Rafi KC

>  
>
> Regards
>
> Rafi KC
>
> On 07/26/2016 04:05 PM, Niels de Vos wrote:
>> On Tue, Jul 26, 2016 at 12:43:56PM +0530, Kaushal M wrote:
>>> On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai  
>>>  wrote:
 +1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches

  -Prashanth Pai

 - Original Message -
> From: "Mohammed Rafi K C"  
> 
> To: "gluster-users"  
> , "Gluster Devel" 
>  
> Sent: Tuesday, 26 July, 2016 10:44:15 AM
> Subject: [Gluster-devel] Need a way to display and flush gluster 
> cache ?
>
> Hi,
>
> Gluster stack has its own caching mechanism, mostly on the client side.
> But there is no concrete method to see how much memory is being consumed
> by gluster for caching, and if needed there is no way to flush the cache
> memory.
>
> So my first question is: do we need to implement these two features
> for the gluster cache?
>
>
> If so I would like to discuss some of our thoughts towards it.
>
> (If you are not interested in implementation discussion, you can skip
> this part :)
>
> 1) Implement a virtual xattr on root, and on doing setxattr, flush all
> the cache, and for getxattr we can print the aggregated cache size.
>
> 2) Currently the gluster native client supports a .meta virtual directory
> to get metadata information, analogous to proc. We can implement a
> virtual file inside the .meta directory to read the cache size. Also we
> can flush the cache using a special write into the file (similar to
> echoing into a proc file). This approach may be difficult to implement in
> other clients.
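
To make the two options concrete, this is roughly what they could look like
from a client mount point. Only the first command exists today; the xattr
names and the .meta path below are invented for illustration, since none of
this is implemented yet.

The existing kernel analogue mentioned above:

# echo 3 > /proc/sys/vm/drop_caches

What option (1) might look like (hypothetical xattr names):

# getfattr -n glusterfs.cache.size /mnt/gvol
# setfattr -n glusterfs.cache.drop -v 1 /mnt/gvol

And option (2), as a file under the existing .meta tree (hypothetical path):

# cat /mnt/gvol/.meta/cache/size
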
>>> +1 for making use of the meta-xlator. We should be making more use of 
>>> it.
>> Indeed, this would be nice. Maybe this can also expose the memory
>> allocations like /proc/slabinfo.
>>
>> The io-stats xlator can dump some statistics to
>> /var/log/glusterfs/samples/ and /var/lib/glusterd/stats/. That seems to
>> be acceptable too, and allows getting statistics from server-side
>> processes without involving any clients.
>>
>> HTH,
>> Niels
>>
>>
> 3) A CLI command to display and flush the data with IP and port as an
> argument. GlusterD needs to send the op to the client from the connected
> client list. But this approach would be difficult to implement for
> libgfapi-based clients. For me, it doesn't seem to be a good option.
>
> Your suggestions and comments are most welcome.
>
> Thanks to Talur and Poornima for their suggestions.
>
> Regards
>
> Rafi KC
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
 ___
 Gluster-devel mailing list
 gluster-de...@gluster.org 
 http://www.gluster.org/mailman/listinfo/gluster-devel
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org 
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>> ___
>>> Gluster-devel mailing list
>>> gluster-de...@gluster.org 
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
>
>
> -- 
> Raghavendra G


[Gluster-users] GlusterFS on EXT4 is stable enough

2016-07-28 Thread tecforte jason
Hi All,

May I know whether the latest GlusterFS version on EXT4 is stable enough?

I would much appreciate any advice or guidance.

Thanks.

Best Regards,
Jason
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Glusterfs Distributed-Replicated local self-mounts

2016-07-28 Thread Pranith Kumar Karampuri
On Thu, Jul 28, 2016 at 12:29 PM, tecforte jason 
wrote:

> Hi,
>
> Can i do local self-mounts on Glusterfs Distributed-Replicated mode?
>

Are you asking about fuse mounts on the same servers that are used as
bricks? Yes, that is not a problem. But a gluster nfs-server and nfs-client
on the same machine will lead to hangs. This is not particular to gluster;
even an nfs client and nfs server on the same machine lead to hangs.
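
For completeness, a local self-mount via fuse on one of the brick servers
would look something like this (volume name, server name and mount point are
made up):

# mkdir -p /mnt/test-volume
# mount -t glusterfs node1:/test-volume /mnt/test-volume

and a matching /etc/fstab entry so the mount comes back after a reboot:

node1:/test-volume  /mnt/test-volume  glusterfs  defaults,_netdev  0 0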


>
> Much appreciate for any advice and suggestion.
>
> Thanks.
> Jason
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Data encryption

2016-07-28 Thread Kaushal M
On Tue, Jul 26, 2016 at 3:47 PM, Paul Warren  wrote:
> Hi,
>
> New to the list, I am trying to setup data encryption, I currently have SSL
> encryption up and running thanks to the help of kshlm. When I enable the
> option features.encryption on, I unmount and try to remount my client and
> get the following error.
>
> [2016-07-26 09:47:17.792417] E [crypt.c:4307:master_set_master_vol_key]
> 0-data-crypt: FATAL: can not open file with master key
> [2016-07-26 09:47:17.792448] E [MSGID: 101019] [xlator.c:428:xlator_init]
> 0-data-crypt: Initialization of volume 'data-crypt' failed, review your
> volfile again
>
>
> We are running Centos 6.7 and glusterfs-3.7.11-1.el6.x86_64
> with the client being centos 7.2 using glusterfs to mount the share.

I hope you are using the client packages provided by the community and
not the ones that come in the CentOS repo.
The client packages that come with the CentOS repo are repackaged
versions of the RHGS product. They might not be completely compatible
with the community packages.

>
> I've done some googling looking for a answer but I can't seem to find much
> regarding how data encryption works / errors etc. I would have assumed I
> just need to generate a key for the client. But I can't find much info about
> this.
>
> I was following
> http://www.gluster.org/community/documentation/index.php/Features/disk-encryption
> - but this doesn't exist any more.


The feature spec page at [1] has the information required. Section 6.2
has information on generating the key, and section 7 shows how to
enable and use encryption.
Let us know if this works, and if you have suggestions for improvement.

[1]: 
https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.5/Disk%20Encryption.md
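
As a quick sketch of what sections 6.2 and 7 of that spec describe (the key
path below is made up, the volume name is taken from the log above, and the
spec should be treated as authoritative for the exact key format):

On the client:
# openssl rand -hex 32 > /etc/glusterfs/master.key
# chmod 600 /etc/glusterfs/master.key

On a server:
# gluster volume set data encryption.master-key /etc/glusterfs/master.key
# gluster volume set data features.encryption on

Then remount the client. The "can not open file with master key" error above
is what the crypt xlator reports when the file named by encryption.master-key
cannot be read on the client.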

>
> Thanks
> Paul
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Glusterfs Distributed-Replicated local self-mounts

2016-07-28 Thread tecforte jason
Hi,

Can I do local self-mounts with GlusterFS in Distributed-Replicated mode?

I would much appreciate any advice and suggestions.

Thanks.
Jason
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users