Re: [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-05-06 Thread Jiffin Tony Thottan

Hi

On 04/05/19 12:04 PM, Strahil wrote:

Hi Jiffin,

No vendor will support your corosync/pacemaker stack if you do not have proper 
fencing.
As Gluster is already a cluster of its own, it makes sense to control 
everything from there.

Best Regards,



Yeah, I agree with your point. What I meant to say is that, by default, this 
feature won't provide any fencing mechanism; the user needs to configure 
fencing for the cluster manually. In future we can try to include a default 
fencing configuration for the ganesha cluster as part of the Ganesha HA 
configuration.
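For anyone who wants fencing right away, it can be layered on top of the 
generated pacemaker cluster with the usual tooling. A rough sketch, assuming 
IPMI-capable nodes (host names, addresses and credentials below are 
placeholders, and option names vary slightly between fence_ipmilan versions, 
e.g. older releases use ipaddr/login/passwd):

# run on one node of the ganesha/pacemaker cluster
pcs stonith create fence-node1 fence_ipmilan pcmk_host_list="node1" \
    ip="10.0.0.101" username="admin" password="secret" lanplus=1
pcs stonith create fence-node2 fence_ipmilan pcmk_host_list="node2" \
    ip="10.0.0.102" username="admin" password="secret" lanplus=1
# make sure fencing is actually enforced
pcs property set stonith-enabled=true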

Regards,

Jiffin



Strahil Nikolov

On May 3, 2019 09:08, Jiffin Tony Thottan wrote:


On 30/04/19 6:59 PM, Strahil Nikolov wrote:

Hi,

I'm posting this again as it got bounced.
Keep in mind that corosync/pacemaker is hard for new admins/users to set up 
properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only 
hosting bricks, but other workloads as well.
Corosync/pacemaker require proper fencing to be set up, and most of the stonith 
resources 'shoot the other node in the head'.
I would be happy to see something easy to deploy (let's say 
'cluster.enable-ha-ganesha true'), with gluster bringing up the floating IPs 
and taking care of the NFS locks, so that no disruption is felt by the clients.


It does take care of those, but certain prerequisites need to be followed. 
Note that fencing won't be configured for this setup by default; we may think 
about that in future.
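For reference, the workflow these patches bring back looks roughly like the 
old (pre-3.10) ganesha HA setup; the sketch below is from memory of that 
workflow, so treat the names, addresses and paths as illustrative rather than 
final documentation:

# prerequisite: shared storage volume for the ganesha/cluster state
gluster volume set all cluster.enable-shared-storage enable

# /etc/ganesha/ganesha-ha.conf on all participating nodes
HA_NAME="ganesha-ha-demo"
HA_VOL_SERVER="node1"            # needed by some versions
HA_CLUSTER_NODES="node1,node2,node3"
VIP_node1="10.0.0.201"
VIP_node2="10.0.0.202"
VIP_node3="10.0.0.203"

# then, from one node, bring up the pacemaker/corosync based HA cluster
gluster nfs-ganesha enable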

--

Jiffin


Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney  wrote:
 
+1!

I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
instead of fuse mounts. Having an integrated, designed-in process to coordinate 
multiple nodes into an HA cluster will be very welcome.

On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
wrote:
 
Hi all,


Some of you folks may be familiar with the HA solution provided for nfs-ganesha 
by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project 
"Storhaug". However, Storhaug has not progressed much over the last two years 
and its development is currently halted, hence the plan to restore the old 
ganesha HA solution to the gluster code repository, with some improvements, 
targeting the next gluster release (7).

I have opened an issue [1] with the details and posted an initial set of 
patches [2].

Please share your thoughts on the same.


Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)



--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-05-03 Thread Jiffin Tony Thottan


On 30/04/19 6:59 PM, Strahil Nikolov wrote:

Hi,

I'm posting this again as it got bounced.
Keep in mind that corosync/pacemaker is hard for new admins/users to set up 
properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only 
hosting bricks, but other workloads as well.
Corosync/pacemaker require proper fencing to be set up, and most of the stonith 
resources 'shoot the other node in the head'.
I would be happy to see something easy to deploy (let's say 
'cluster.enable-ha-ganesha true'), with gluster bringing up the floating IPs 
and taking care of the NFS locks, so that no disruption is felt by the clients.



It does take care of those, but certain prerequisites need to be followed. 
Note that fencing won't be configured for this setup by default; we may think 
about that in future.


--

Jiffin



Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney  wrote:
   
+1!

I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
instead of fuse mounts. Having an integrated, designed-in process to coordinate 
multiple nodes into an HA cluster will be very welcome.

On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
wrote:
   
Hi all,


Some of you folks may be familiar with the HA solution provided for nfs-ganesha 
by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project 
"Storhaug". However, Storhaug has not progressed much over the last two years 
and its development is currently halted, hence the plan to restore the old 
ganesha HA solution to the gluster code repository, with some improvements, 
targeting the next gluster release (7).

I have opened an issue [1] with the details and posted an initial set of 
patches [2].

Please share your thoughts on the same.


Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)



--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-05-03 Thread Jiffin Tony Thottan


On 30/04/19 6:41 PM, Renaud Fortier wrote:


IMO, you should keep storhaug and maintain it. At the beginning, we 
were with pacemaker and corosync. Then we moved to storhaug with the 
upgrade to gluster 4.1.x. Now you are talking about going back to how it 
was. Maybe it will be better with pacemaker and corosync, but the 
important thing is to have a solution that is stable and maintained.




I agree, it is very frustrating. There is no further development planned for 
storhaug unless someone picks it up and works on its stabilization and 
improvement.

My plan is just to bring back what gluster and nfs-ganesha had before.

--

Jiffin


thanks

Renaud

*From:* gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] *On behalf of* Jim Kinney

*Sent:* 30 April 2019 08:20
*To:* gluster-users@gluster.org; Jiffin Tony Thottan 
; gluster-users@gluster.org; Gluster Devel 
; gluster-maintain...@gluster.org; 
nfs-ganesha ; de...@lists.nfs-ganesha.org
*Subject:* Re: [Gluster-users] Proposing to previous ganesha HA cluster 
solution back to gluster code as gluster-7 feature


+1!
I'm using nfs-ganesha in my next upgrade so my client systems can use 
NFS instead of fuse mounts. Having an integrated, designed-in process 
to coordinate multiple nodes into an HA cluster will be very welcome.


On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan <jthot...@redhat.com> wrote:


Hi all,

Some of you folks may be familiar with the HA solution provided for 
nfs-ganesha by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA 
project "Storhaug". However, Storhaug has not progressed much over the 
last two years and its development is currently halted, hence the plan 
to restore the old ganesha HA solution to the gluster code repository, 
with some improvements, targeting the next gluster release (7).

I have opened an issue [1] with the details and posted an initial set 
of patches [2].

Please share your thoughts on the same.

Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] 
https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)


--
Sent from my Android device with K-9 Mail. All tyopes are thumb 
related and reflect authenticity.


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-04-30 Thread Jiffin Tony Thottan

Hi all,

Some of you folks may be familiar with the HA solution provided for 
nfs-ganesha by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA 
project "Storhaug". However, Storhaug has not progressed much over the 
last two years and its development is currently halted, hence the plan 
to restore the old ganesha HA solution to the gluster code repository, 
with some improvements, targeting the next gluster release (7).

I have opened an issue [1] with the details and posted an initial set of 
patches [2].

Please share your thoughts on the same.

Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663



[2] 
https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster GEO replication fault after write over nfs-ganesha

2019-04-03 Thread Jiffin Tony Thottan

CCIng sunn as well.

On 28/03/19 4:05 PM, Soumya Koduri wrote:



On 3/27/19 7:39 PM, Alexey Talikov wrote:

I have two clusters with dispersed volumes (2+1) with GEO replication.
It works fine as long as I use glusterfs-fuse, but as soon as even one file is 
written over nfs-ganesha, replication goes to Faulty, and it recovers after I 
remove this file (sometimes after a stop/start).
I think nfs-ganesha writes the file in some way that produces a problem with 
replication.




I am not very familiar with geo-rep and am not sure what/why exactly 
failed here. Requesting Kotresh (cc'ed) to take a look and provide his 
insights on the issue.


Thanks,
Soumya

|OSError: [Errno 61] No data available: 
'.gfid/9c9514ce-a310-4a1c-a87b-a800a32a99f8' |


but if I check over glusterfs mounted with aux-gfid-mount

|getfattr -n trusted.glusterfs.pathinfo -e text 
/mnt/TEST/.gfid/9c9514ce-a310-4a1c-a87b-a800a32a99f8 getfattr: 
Removing leading '/' from absolute path names # file: 
mnt/TEST/.gfid/9c9514ce-a310-4a1c-a87b-a800a32a99f8 
trusted.glusterfs.pathinfo="( 
( 
))" |


File exists
Details available here 
https://github.com/nfs-ganesha/nfs-ganesha/issues/408
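(For anyone hitting the same ENODATA on a gfid path: the aux-gfid-mount used 
above is just a regular fuse mount created with an extra mount option, roughly 
as below; the server and volume names are illustrative.)

mount -t glusterfs -o aux-gfid-mount server1:/TEST /mnt/TEST
getfattr -n trusted.glusterfs.pathinfo -e text \
    /mnt/TEST/.gfid/9c9514ce-a310-4a1c-a87b-a800a32a99f8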



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Glusterfs release 3.12.15 (Long Term Maintenance)

2018-10-16 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.15 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.15/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 


[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.15/


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS-Ganesha question

2018-10-15 Thread Jiffin Tony Thottan

CCing ganesha list as well


On Monday 15 October 2018 07:44 PM, Renaud Fortier wrote:


Hi,

We are currently facing a strange behaviour with our cluster. Right 
now I'm running a bitrot scrub against the volume, but I'm not sure it 
will help find the problem. Anyway, my question is about nfs-ganesha 
and NFSv4. Since this strange behaviour began, I have read a lot, and 
I found that idmapd is needed for NFSv4. If I run rpcinfo or 
ps -ef | grep idmapd on our nodes, I don't see it.

Is rpc.idmapd supposed to be running when using nfs-ganesha 2.6.3 with 
gluster 4.1.5?




IMO the rpc.idmapd service is not required for ganesha; ganesha uses the APIs 
from "libnfsidmap" directly for ID mapping.

CCing the ganesha devel list as well to confirm the same.
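As a quick configuration check: the domain used for NFSv4 name<->id mapping 
comes from the idmapd configuration file that libnfsidmap reads, so a minimal 
setup (the domain below is a placeholder) is usually just:

# /etc/idmapd.conf  (on the ganesha nodes and on the NFSv4 clients)
[General]
Domain = example.com

No running rpc.idmapd process is needed on the ganesha side for this to take 
effect.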

--
Jiffin


Thank you



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Found anomalies in ganesha-gfapi.log

2018-10-03 Thread Jiffin Tony Thottan

Are you performing lookups or mkdirs in parallel via two different clients?

--

Jiffin


On Friday 28 September 2018 08:13 PM, Renaud Fortier wrote:


Hi,

I have a lot of these lines in ganesha-gfapi.log. What are they, and 
should I be worried about them?


[2018-09-28 14:26:46.296375] I [MSGID: 109063] 
[dht-layout.c:693:dht_layout_normalize] 0-testing-dht: Found anomalies 
in (null) (gfid = 4efad4fd-fc7f-4c06-90e0-f882ca74b9a5). Holes=1 
overlaps=0
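(For what it's worth, these are DHT layout messages; if they keep showing up 
for directories that already exist on all bricks, a fix-layout rebalance is 
the usual way to repair layout holes. A sketch, using the volume name from 
the log above:)

gluster volume rebalance testing fix-layout start
gluster volume rebalance testing status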


OS : Debian stretch

Gluster : v4.1.5 type : replicated 3 briks

Ganesha : 2.6.0

Thank you



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Cannot connect using NFS, protocol problems

2018-09-17 Thread Jiffin Tony Thottan



On Monday 17 September 2018 11:53 PM, Arthur Pemberton wrote:
When I try to mount to my working glusterfs cluster using NFS, I can't 
establish the connection.


# mount -v -t nfs -o mountproto=tcp,proto=tcp,vers=3
SERVER:/VOLUME /mnt/glusterfs
mount.nfs: timeout set for Mon Sep 17 13:23:44 2018
mount.nfs: trying text-based options
'mountproto=tcp,proto=tcp,vers=3,addr=172.24.16.17'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: trying text-based options
'mountproto=tcp,proto=tcp,vers=3,addr=172.24.16.17'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: trying text-based options
'mountproto=tcp,proto=tcp,vers=3,addr=172.24.16.17'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not
supported


This is on CentOS 7, I don't really know how to troubleshoot.




Are you sure the volume is exported via NFS? What does showmount -e <server> 
return from the client?

It looks like the client was not able to connect to the NFS server.
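A few commands usually narrow this down (server and volume names below are 
placeholders):

# from the client: is an NFS/mountd service registered at all?
showmount -e SERVER
rpcinfo -p SERVER | grep -E 'nfs|mountd'

# on the server: is gluster NFS (or nfs-ganesha) actually serving the volume?
gluster volume status VOLUME
gluster volume get VOLUME nfs.disable

Note that newer gluster releases ship with nfs.disable set to on by default, 
so the built-in gNFS server has to be enabled explicitly (or nfs-ganesha used 
instead).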



Regards,
Jiffin

Arthur Pemberton



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] [IMPORTANT] Announcing Gluster 3.12.14 and Gluster 4.1.4

2018-09-07 Thread Jiffin Tony Thottan

Hi,

The next set of minor updates, 3.12.14 for the 3.12 branch and 4.1.4 for the 
4.1 branch, are available earlier than expected.

These releases were made together mainly to address security vulnerabilities 
in Gluster [1].

The packages for Gluster 3.12.14 are available at [5,6,7] and the release 
notes at [8].

The packages for Gluster 4.1.4 are available at [9,10,11] and the release 
notes at [12].


Thanks,

Jiffin

[1] The list of security vulnerabilities addressed:

   - https://nvd.nist.gov/vuln/detail/CVE-2018-10904
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10907
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10911
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10913
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10914
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10923
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10926
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10927
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10928


   - https://nvd.nist.gov/vuln/detail/CVE-2018-10929
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10930

[5] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.14/
[6] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[7] https://build.opensuse.org/project/subprojects/home:glusterfs
[8] Release notes: https://gluster.readthedocs.io/en/latest/release-notes/3.12.14/


[9] https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.4/
[10] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-4.1
[11] https://build.opensuse.org/project/subprojects/home:glusterfs
[12] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/4.1.4/


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Announcing Glusterfs release 3.12.13 (Long Term Maintenance)

2018-08-27 Thread Jiffin Tony Thottan



On Monday 27 August 2018 01:57 PM, Pasi Kärkkäinen wrote:

Hi,

On Mon, Aug 27, 2018 at 11:10:21AM +0530, Jiffin Tony Thottan wrote:

The Gluster community is pleased to announce the release of Gluster
3.12.13 (packages available at [1,2,3]).

Release notes for the release can be found at [4].

Thanks,
Gluster community

[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.13/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes:
https://gluster.readthedocs.io/en/latest/release-notes/3.12.12/


Hmm, I guess the release-notes link should say 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.13 instead... but 
that page doesn't seem to exist (yet)?


It got fixed now :)

Thanks,
Jiffin





Thanks,

-- Pasi



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Glusterfs release 3.12.13 (Long Term Maintenance)

2018-08-26 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.13 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.13/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.12/


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster release 3.12.13 (Long Term Maintenance) Canceled for 10th of August, 2018

2018-08-15 Thread Jiffin Tony Thottan
Since the issue seems to be critical and the lock on the master branch is no 
longer held, I will try to do a 3.12 release ASAP.


--

Jiffin

On Tuesday 14 August 2018 05:48 PM, Nithya Balachandran wrote:

I agree as well. This is a bug that is impacting users.

On 14 August 2018 at 16:30, Ravishankar N <ravishan...@redhat.com> wrote:


+1

Considering that master is no longer locked, it would be nice if a
release can be made sooner.  Amar sent a missing back port [1]
which also fixes a mem leak issue on the client side. This needs
to go in too.
Regards,
Ravi

[1] https://review.gluster.org/#/c/glusterfs/+/20723/


On 08/14/2018 04:20 PM, lemonni...@ulrar.net wrote:

Hi,

That's actually pretty bad, we've all been waiting for the
memory leak
patch for a while now, an extra month is a bit of a nightmare
for us.

Is there no way to get 3.12.12 with that patch sooner, at
least ? I'm
getting a bit tired of rebooting virtual machines by hand
everyday to
avoid the OOM killer ..

On Tue, Aug 14, 2018 at 04:12:28PM +0530, Jiffin Tony Thottan
wrote:

Hi,

Currently master branch is lock for fixing failures in the
regression
test suite [1].

As a result we are not releasing the next minor update for
the 3.12 branch,

which falls on the 10th of every month.

The next 3.12 update would be around the 10th of
September, 2018.

Apologies for the delay to inform above details.

[1]

https://lists.gluster.org/pipermail/gluster-devel/2018-August/055160.html


Regards,

Jiffin

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster release 3.12.13 (Long Term Maintenance) Canceled for 10th of August, 2018

2018-08-14 Thread Jiffin Tony Thottan

Hi,

Currently the master branch is locked for fixing failures in the regression 
test suite [1].

As a result, we are not releasing the next minor update for the 3.12 branch,
which falls on the 10th of every month.

The next 3.12 update will be around the 10th of September, 2018.

Apologies for the delay in communicating the above details.

[1] 
https://lists.gluster.org/pipermail/gluster-devel/2018-August/055160.html


Regards,

Jiffin

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Glusterfs release 3.12.12 (Long Term Maintenance)

2018-07-12 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.12 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.12/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 


[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.12/


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Release 3.12.12: Scheduled for the 11th of July

2018-07-11 Thread Jiffin Tony Thottan

Hi Mabi,

I have checked with the AFR maintainer; all of the required changes are 
merged in 3.12.

Hence, moving forward with the 3.12.12 release.

Regards,

Jiffin


On Monday 09 July 2018 01:04 PM, mabi wrote:

Hi Jiffin,

Based on the issues I have been encountering on a nearly daily basis for the 
last 2-3 months (see the "New 3.12.7 possible split-brain on replica 3" 
thread in this ML), I would be really glad if the required fixes mentioned by 
Ravi could make it into the 3.12.12 release. Ravi mentioned the following:


afr: heal gfids when file is not present on all bricks
afr: don't update readables if inode refresh failed on all children
afr: fix bug-1363721.t failure
afr: add quorum checks in pre-op
afr: don't treat all cases all bricks being blamed as split-brain
afr: capture the correct errno in post-op quorum check
afr: add quorum checks in post-op

Right now I only see the first one pending in the review dashboard. It 
would be great if all of them could make it into this release.


Best regards,
Mabi



‐‐‐ Original Message ‐‐‐
On July 9, 2018 7:18 AM, Jiffin Tony Thottan  wrote:


Hi,

It's time to prepare the 3.12.12 release, which falls on the 10th of
each month, and hence would be 11-07-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.12? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.12

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 





___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Release 3.12.12: Scheduled for the 11th of July

2018-07-08 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.12 release, which falls on the 10th of
each month, and hence would be 11-07-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.12? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.12

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Glusterfs release 3.12.10 (Long Term Maintenance)

2018-06-15 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.10 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.10/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 


[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.10/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Release 3.12.10: Scheduled for the 13th of July

2018-06-12 Thread Jiffin Tony Thottan

typos


On Tuesday 12 June 2018 12:15 PM, Jiffin Tony Thottan wrote:

Hi,

It's time to prepare the 3.12.7 release, which falls on the 10th of


3.12.10


each month, and hence would be 08-03-2018 this time around.



13-06-2018


This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.10? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

Plus, I have cc'ed the owners of patches which are candidates for 3.12 but 
failed regressions.

Please have a look into that.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.10

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 





___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 3.12.10: Scheduled for the 13th of July

2018-06-12 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.7 release, which falls on the 10th of
each month, and hence would be 08-03-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.10? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

Plus, I have cc'ed the owners of patches which are candidates for 3.12 but 
failed regressions.

Please have a look into that.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.10

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Glusterfs release 3.12.8 (Long Term Maintenance)

2018-04-18 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.8 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.8/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.8/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Release 3.12.8: Scheduled for the 12th of April

2018-04-10 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.8 release, which falls on the 10th of
each month, and hence would be 12-04-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.8? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this is 
*green*

as all fixes ported to 3.10, are ported to 3.12 as well.

@Mlind

IMO https://review.gluster.org/19659 is more like a minor feature to me. Can 
you please provide a justification for why it needs to be included in the 
3.12 stable release?


And please rebase the change as well

@Raghavendra

The smoke test failed for https://review.gluster.org/#/c/19818/. Can you 
please check the same?


Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.8

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Release 3.12.7: Scheduled for the 8th of March

2018-03-05 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.7 release, which falls on the 10th of
each month, and hence would be 08-03-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.7? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this is 
*green*

as all fixes ported to 3.10, are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.7

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Announcing Glusterfs release 3.12.6 (Long Term Maintenance)

2018-02-19 Thread Jiffin Tony Thottan



On Tuesday 20 February 2018 09:37 AM, Jiffin Tony Thottan wrote:


The Gluster community is pleased to announce the release of Gluster 
3.12.6 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, which is reported in the 
release notes as follows:

1.) Expanding a gluster volume that is sharded may cause file corruption

    Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.

    Status of this bug can be tracked here: #1465123



The above issue is fixed in 3.12.6. Sorry for mentioning it in the 
announcement mail.


--

Jiffin



Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.6/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.6/




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Glusterfs release 3.12.6 (Long Term Maintenance)

2018-02-19 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.6 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, which is reported in the 
release notes as follows:

1.) Expanding a gluster volume that is sharded may cause file corruption

    Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.

    Status of this bug can be tracked here: #1465123

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.6/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.6/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Release 3.12.6: Scheduled for the 12th of February

2018-02-01 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.6 release, which falls on the 10th of
each month, and hence would be 12-02-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.6? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this is 
*green*

as all fixes ported to 3.10, are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.6

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Release 3.12.5: Scheduled for the 12th of January

2018-01-31 Thread Jiffin Tony Thottan
Glusterfs 3.12.5 was released on Jan 12th, 2018. Apologies for not 
sending the announcement mail on time.


Release notes for the release can be found at [4].

We still carry the following major issue, which is reported in the 
release notes as follows:

1.) Expanding a gluster volume that is sharded may cause file corruption

    Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.

    Status of this bug can be tracked here: #1465123

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.5/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.5/



On Thursday 11 January 2018 11:32 AM, Jiffin Tony Thottan wrote:


Hi,

It's time to prepare the 3.12.5 release, which falls on the 10th of
each month, and hence would be 12-01-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.5? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this 
is *green*

as all fixes ported to 3.10, are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.5

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Segfaults after upgrade to GlusterFS 3.10.9

2018-01-18 Thread Jiffin Tony Thottan

Hi Frank,

It will be much easier to debug if you have the core file with you. It looks 
like the crash is coming from the gfapi stack.

If there is a core file, can you please share its backtrace (bt)?
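In case it helps, a backtrace can be pulled from the core roughly like this 
(paths are illustrative; installing the matching glusterfs/nfs-ganesha 
debuginfo packages first gives much more useful output):

# locate the core file, e.g. via coredumpctl or /proc/sys/kernel/core_pattern
gdb /usr/bin/ganesha.nfsd /var/crash/core.ganesha.nfsd.38104
(gdb) bt
(gdb) thread apply all bt full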

Regards,

Jiffin


On Thursday 18 January 2018 11:18 PM, Frank Wall wrote:

Hi,

after upgrading to 3.10.9 I'm seeing ganesha.nfsd segfaulting all the time:

[12407.918249] ganesha.nfsd[38104]: segfault at 0 ip 7f872425fb00 sp 
7f867cefe5d0 error 4 in libglusterfs.so.0.0.1[7f8724223000+f1000]
[12693.119259] ganesha.nfsd[3610]: segfault at 0 ip 7f716d8f5b00 sp 
7f71367e15d0 error 4 in libglusterfs.so.0.0.1[7f716d8b9000+f1000]
[14531.582667] ganesha.nfsd[17025]: segfault at 0 ip 7f7cb8fa8b00 sp 
7f7c5878d5d0 error 4 in libglusterfs.so.0.0.1[7f7cb8f6c000+f1000]

ganesha-fgapi.log shows the following errors:

[2018-01-18 17:24:00.146094] W [inode.c:1341:inode_parent] 
(-->/lib64/libgfapi.so.0(glfs_resolve_at+0x278) [0x7f7cb927f0b8] 
-->/lib64/libglusterfs.so.0(glusterfs_normalize_dentry+0x8e) [0x7f7cb8fa8aee] 
-->/lib64/libglusterfs.so.0(inode_parent+0xda) [0x7f7cb8fa670a] ) 0-gfapi: inode not 
found
[2018-01-18 17:24:00.146210] E [inode.c:2567:inode_parent_null_check] 
(-->/lib64/libgfapi.so.0(glfs_resolve_at+0x278) [0x7f7cb927f0b8] 
-->/lib64/libglusterfs.so.0(glusterfs_normalize_dentry+0xa0) [0x7f7cb8fa8b00] 
-->/lib64/libglusterfs.so.0(+0x398c4) [0x7f7cb8fa58c4] ) 0-inode: invalid argument: 
inode [Invalid argument]

This leads to serious availability issues.

Is this a known issue? Any workaround available?

FWIW, my GlusterFS volume looks like this:

Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: f7985bf3-67e1-49d6-90bf-16816536533b
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: AAA:/bricks/gfsvol/vol1/volume
Brick2: BBB:/bricks/gfsvol/vol1/volume
Brick3: CCC:/bricks/gfsvol/vol1/volume
Brick4: AAA:/bricks/gfsvol/vol2/volume
Brick5: BBB:/bricks/gfsvol/vol2/volume
Brick6: CCC:/bricks/gfsvol/vol2/volume
Brick7: AAA:/bricks/gfsvol/vol3/volume
Brick8: BBB:/bricks/gfsvol/vol3/volume
Brick9: CCC:/bricks/gfsvol/vol3/volume
Brick10: AAA:/bricks/gfsvol/vol4/volume
Brick11: BBB:/bricks/gfsvol/vol4/volume
Brick12: CCC:/bricks/gfsvol/vol4/volume
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
features.cache-invalidation: off
ganesha.enable: on
auth.allow: *
nfs.rpc-auth-allow: *
nfs-ganesha: enable
cluster.enable-shared-storage: enable


Thanks
- Frank
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Release 3.12.5: Scheduled for the 12th of January

2018-01-11 Thread Jiffin Tony Thottan



On Thursday 11 January 2018 12:24 PM, Hans Henrik Happe wrote:

Hi,

I wonder how this procedure works. I could add a bug that I think is a
*blocker*, but there might not be consensus.


You can add it to the tracker bug. Depending on the severity, we may or may 
not take it for 3.12.5.

--
Jiffin


Cheers,
Hans Henrik

On 11-01-2018 07:02, Jiffin Tony Thottan wrote:

Hi,

It's time to prepare the 3.12.5 release, which falls on the 10th of
each month, and hence would be 12-01-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.5? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this is
*green*
as all fixes ported to 3.10, are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.5

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 3.12.5: Scheduled for the 12th of January

2018-01-10 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.5 release, which falls on the 10th of
each month, and hence would be 12-01-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.5? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this is 
*green*

as all fixes ported to 3.10, are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.5

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Announcing Glusterfs release 3.12.4 (Long Term Maintenance)

2018-01-08 Thread Jiffin Tony Thottan

Thanks Darrell for testing it


On Saturday 06 January 2018 05:51 AM, Darrell Budic wrote:

Hey Niels,

Installed 3.12.4 from centos-gluster312-test on my dev ovirt hyper 
converged cluster. Everything looks good and is working as expected 
for storage, migration, & healing. Need any specifics?


  -D



*From:* Jiffin Tony Thottan <jthot...@redhat.com>
*Subject:* [Gluster-users] Announcing Glusterfs release 3.12.4 (Long 
Term Maintenance)

*Date:* December 19, 2017 at 12:14:15 AM CST
*To:* gluster-users@gluster.org, gluster-de...@gluster.org, 
annou...@gluster.org


The Gluster community is pleased to announce the release of Gluster 
3.12.4 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, which is reported in the 
release notes as follows:

1.) Expanding a gluster volume that is sharded may cause file corruption

    Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.

    Status of this bug can be tracked here: #1465123

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.4/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 

[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.4/


___
Gluster-users mailing list
Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
http://lists.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Glusterfs release 3.13.1 (Short Term Maintenance)

2017-12-21 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.13.1 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, which is reported in the 
release notes as follows:

1.) Expanding a gluster volume that is sharded may cause file corruption

    Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1515434) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.

    Status of this bug can be tracked here: #1515434

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.13/3.13.1/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.13
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.13.1/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Glusterfs release 3.12.4 (Long Term Maintenance)

2017-12-18 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.4 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, which is reported in the 
release notes as follows:

1.) Expanding a gluster volume that is sharded may cause file corruption

    Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.

    Status of this bug can be tracked here: #1465123

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.4/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.4/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Release 3.12.4 : Scheduled for the 12th of December

2017-12-11 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.4 release, which falls on the 10th of
each month, and hence would be 12-12-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.4? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this is 
*green*

as all fixes ported to 3.10, are ported to 3.12 as well.


Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.4

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] pcs resources

2017-12-08 Thread Jiffin Tony Thottan

Hi,

Okay. What happens if you run the command "gluster nfs-ganesha enable" again?
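If the HA scripts run cleanly, re-enabling should recreate the pacemaker 
resources for you; roughly (the expected resource layout described below is 
from memory of the ganesha-ha.sh based setup and may differ between versions):

gluster nfs-ganesha enable
pcs status

pcs status should then show the ganesha monitoring/grace clones plus one 
virtual-IP resource per node, in addition to anything you created yourself.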

Regards,

Jiffin


On Friday 08 December 2017 04:15 PM, Hetz Ben Hamo wrote:
There are no resources; there were error messages that I accidentally 
ignored. How do I recreate those resources?


Thanks

On Dec 8, 2017 12:14, "Jiffin Tony Thottan" <jthot...@redhat.com> wrote:


Hi,

Can you provide me the output of "pcs status"? All the resources will
be created automatically if it is for the ganesha cluster.

Regards,

Jiffin


On Wednesday 06 December 2017 05:06 PM, Hetz Ben Hamo wrote:

Hi,

I'm setting up gluster on a 2 node system.
The setup is working, I configured pcsd and it's working, and I
added the virtual_IP resource.

However, in many examples on the net (for example, the output in
this thread: https://www.centos.org/forums/viewtopic.php?t=60001 ) I see a
few resources in pcs.

Is this something that is created automatically, or is it something that
I need to create manually? (The gluster docs don't help much here...)

Thanks


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS, Pacemaker, OCF resource agents on CentOS 7

2017-12-08 Thread Jiffin Tony Thottan

Hi,

Can you please explain for what purpose the pacemaker cluster is used here?

Regards,

Jiffin


On Thursday 07 December 2017 06:59 PM, Tomalak Geret'kal wrote:


Hi guys

I'm wondering if anyone here is using the GlusterFS OCF resource 
agents with Pacemaker on CentOS 7?


yum install centos-release-gluster
yum install glusterfs-server glusterfs-resource-agents

The reason I ask is that there seem to be a few problems with them on 
3.10, but these problems are so severe that I'm struggling to believe 
I'm not just doing something wrong.


I created my brick (on a volume previously used for DRBD, thus its name):

mkfs.xfs /dev/cl/lv_drbd -f
mkdir -p /gluster/test_brick
mount -t xfs /dev/cl/lv_drbd /gluster

And then my volume (enabling clients to mount it via NFS):

systemctl start glusterd
gluster volume create logs replica 2 transport tcp 
pcmk01-drbd:/gluster/test_brick pcmk02-drbd:/gluster/test_brick

gluster volume start test_logs
gluster volume set test_logs nfs.disable off

And here's where the fun starts.

Firstly, we need to work around bug 1233344* (which was closed when 
3.7 went end-of-life but still seems valid in 3.10):


sed -i 
's#voldir="/etc/glusterd/vols/${OCF_RESKEY_volname}"#voldir="/var/lib/glusterd/vols/${OCF_RESKEY_volname}"#' 
/usr/lib/ocf/resource.d/glusterfs/volume


With that done, I [attempt to] stop GlusterFS so it can be brought 
under Pacemaker control:


systemctl stop glusterfsd
systemctl stop glusterd
umount /gluster

(I usually have to manually kill glusterfs processes at this point 
before the unmount works - why does the systemctl stop not do it?)


With the node in standby (just one is online in this example, but 
another is configured), I then set up the resources:


pcs node standby
pcs resource create gluster_data ocf:heartbeat:Filesystem 
device="/dev/cl/lv_drbd" directory="/gluster" fstype="xfs"

pcs resource create glusterd ocf:glusterfs:glusterd
pcs resource create gluster_vol ocf:glusterfs:volume volname="test_logs"
pcs resource create test_logs ocf:heartbeat:Filesystem \
    device="localhost:/test_logs" directory="/var/log/test" fstype="nfs" \
options="vers=3,tcp,nolock,context=system_u:object_r:httpd_sys_content_t:s0" 
\

    op monitor OCF_CHECK_LEVEL="20"
pcs resource clone glusterd
pcs resource clone gluster_data
pcs resource clone gluster_vol ordered=true
pcs constraint order start gluster_data-clone then start glusterd-clone
pcs constraint order start glusterd-clone then start gluster_vol-clone
pcs constraint order start gluster_vol-clone then start test_logs
pcs constraint colocation add test_logs with FloatingIp INFINITY

(note the SELinux wrangling - this is because I have a CGI web 
application which will later need to read files from the /var/log/test 
mount)


At this point, even with the node in standby, it's /already/ failing:

[root@pcmk01 ~]# pcs status
Cluster name: test_cluster
Stack: corosync
Current DC: pcmk01-cr (version 1.1.15-11.el7_3.5-e174ec8) - partition 
WITHOUT quorum
Last updated: Thu Dec  7 13:20:41 2017  Last change: Thu Dec  
7 13:09:33 2017 by root via crm_attribute on pcmk01-cr


2 nodes and 13 resources configured

Online: [ pcmk01-cr ]
OFFLINE: [ pcmk02-cr ]

Full list of resources:

 FloatingIp (ocf::heartbeat:IPaddr2):   Started pcmk01-cr
 test_logs  (ocf::heartbeat:Filesystem):    Stopped
 Clone Set: glusterd-clone [glusterd]
 Stopped: [ pcmk01-cr pcmk02-cr ]
 Clone Set: gluster_data-clone [gluster_data]
 Stopped: [ pcmk01-cr pcmk02-cr ]
 Clone Set: gluster_vol-clone [gluster_vol]
 gluster_vol    (ocf::glusterfs:volume): FAILED pcmk01-cr 
(blocked)

 Stopped: [ pcmk02-cr ]

Failed Actions:
* gluster_data_start_0 on pcmk01-cr 'not configured' (6): call=72, 
status=complete, exitreason='DANGER! xfs on /dev/cl/lv_drbd is NOT 
cluster-aware!',

    last-rc-change='Thu Dec  7 13:09:28 2017', queued=0ms, exec=250ms
* gluster_vol_stop_0 on pcmk01-cr 'unknown error' (1): call=60, 
status=Timed Out, exitreason='none',

    last-rc-change='Thu Dec  7 12:55:11 2017', queued=0ms, exec=20004ms


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

1. The data mount can't be created? Why?
2. Why is there a volume "stop" command being attempted, and why does 
it fail?
3. Why is any of this happening in standby? I can't have the resources 
failing before I've even made the node live! I could understand why a 
gluster_vol start operation would fail when glusterd is (correctly) 
stopped, but why is there a *stop* operation? And why does that make 
the resource "blocked"?


Given the above steps, is there something fundamental I'm missing 
about how these resource agents should be used? How do *you* configure 
GlusterFS on Pacemaker?


Any advice appreciated.

Best regards


* https://bugzilla.redhat.com/show_bug.cgi?id=1233344




___
Gluster-users mailing list
Gluster-users@gluster.org

Re: [Gluster-users] pcs resources

2017-12-08 Thread Jiffin Tony Thottan

Hi,

Can you provide the output of "pcs status"? All the resources will be
created automatically if it is for the ganesha cluster.
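
For reference, those resources are created by the gluster CLI itself when
the ganesha HA is set up. A minimal sketch of that setup (the HA_NAME, node
names and VIPs below are only examples, adjust them to your cluster):

# /etc/ganesha/ganesha-ha.conf present on all nodes:
#   HA_NAME="ganesha-ha-demo"
#   HA_CLUSTER_NODES="node1,node2"
#   VIP_node1="192.168.0.101"
#   VIP_node2="192.168.0.102"
gluster volume set all cluster.enable-shared-storage enable
gluster nfs-ganesha enable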


Regards,

Jiffin


On Wednesday 06 December 2017 05:06 PM, Hetz Ben Hamo wrote:

Hi,

I'm setting up gluster on a 2 node system.
The setup is working, I configured pcsd and it's working, and I added 
the virtual_IP resource.


However, on many examples on the net (for example: the output in this 
thread: https://www.centos.org/forums/viewtopic.php?t=60001 ) I see 
few resources in the pcs.


Is this something which is being created automatically? or is it 
something that I need to create manually? (gluster docs doesn't help 
much here...)


Thanks


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and nfs-ganesha

2017-12-05 Thread Jiffin Tony Thottan



On Wednesday 06 December 2017 11:08 AM, Hetz Ben Hamo wrote:

Thanks Jiffin,

Btw, the nfs-ganesha part in the release notes has a wrong header, so
it's not highlighted.


One thing that it is still mystery to me: gluster 3.8.x does all what 
the release notes of 3.9 says - automatically. Any chance that someone 
could port it to 3.9?


I didn't get that. Can you tell me what 3.8 does automatically?
--
Jiffin


Thanks for the links

On Wed, Dec 6, 2017 at 7:28 AM, Jiffin Tony Thottan 
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:


Hi,


On Monday 04 December 2017 07:43 PM, Hetz Ben Hamo wrote:

Hi Jiffin,

I looked at the document, and there are 2 things:

1. In Gluster 3.8 it seems you don't need to do that at all, it
creates this automatically, so why not in 3.10?



Kindly please refer the mail[1] and release note [2] for glusterfs-3.9

Regards,
Jiffin

[1] https://www.spinics.net/lists/gluster-devel/msg20488.html
<https://www.spinics.net/lists/gluster-devel/msg20488.html>
[2] http://docs.gluster.org/en/latest/release-notes/3.9.0/
<http://docs.gluster.org/en/latest/release-notes/3.9.0/>




2. The step by step guide, in the last item, doesn't say where
exactly do I need to create the nfs-ganesha directory. The
copy/paste seems irrelevant as enabling nfs-ganesha creates
automatically the ganesha.conf and a subdirectory (called
"exports") with the volume share configuration file.

Also, could someone tell me whats up with no ganesha on 3.12?

Thanks

    On Mon, Dec 4, 2017 at 11:47 AM, Jiffin Tony Thottan
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:



On Saturday 02 December 2017 07:00 PM, Hetz Ben Hamo wrote:

HI,

I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5.

I'm trying to create a very simple 2 nodes cluster to be
used with NFS-ganesha. I've created the bricks and the
volume. Here's the output:

# gluster volume info

Volume Name: cluster-demo
Type: Replicate
Volume ID: 9c835a8e-c0ec-494c-a73b-cca9d77871c5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode1:/data/brick1/gv0
Brick2: glnode2:/data/brick1/gv0
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
cluster.enable-shared-storage: enable

Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode2:/var/lib/glusterd/ss_brick
Brick2: 192.168.0.95:/var/lib/glusterd/ss_brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable

However, when I'm trying to run gluster nfs-ganesha enable -
it creates a wrong symbolic link and failes:

# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled
across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: creation of symlink ganesha.conf in
/etc/ganesha failed

wrong link: ganesha.conf ->
/var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf

# ls -l /var/run/gluster/shared_storage/
total 0

I've seen some reports (and fixed) in Red Hat's Bugzilla and
looked at the Red Hat solutions
(https://access.redhat.com/solutions/3099581
<https://access.redhat.com/solutions/3099581>) but this
doesn't help.

Suggestions?

Hi,

It seems you have not created directory nfs-ganesha under
shared storage and plus copy/create
ganesha.conf/ganesha-ha.conf inside
Please follow this document

http://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/

<http://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/>

Regards,
Jiffin






I tried to upgrade to Gluster 3.12 and it seems Ganesha
support was kicked out? whats replacing it?



___
Gluster-users mailing list
Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
http://lists.gluster.org/mailman/listinfo/gluster-users
<http://lists.gluster.org/mailman/listinfo/gluster-users>








___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and nfs-ganesha

2017-12-05 Thread Jiffin Tony Thottan

Hi,


On Monday 04 December 2017 07:43 PM, Hetz Ben Hamo wrote:

Hi Jiffin,

I looked at the document, and there are 2 things:

1. In Gluster 3.8 it seems you don't need to do that at all, it 
creates this automatically, so why not in 3.10?



Kindly refer to the mail [1] and release note [2] for glusterfs-3.9.

Regards,
Jiffin

[1] https://www.spinics.net/lists/gluster-devel/msg20488.html
[2] http://docs.gluster.org/en/latest/release-notes/3.9.0/


2. The step by step guide, in the last item, doesn't say where exactly 
do I need to create the nfs-ganesha directory. The copy/paste seems 
irrelevant as enabling nfs-ganesha creates automatically the 
ganesha.conf and a subdirectory (called "exports") with the volume 
share configuration file.


Also, could someone tell me whats up with no ganesha on 3.12?

Thanks

On Mon, Dec 4, 2017 at 11:47 AM, Jiffin Tony Thottan 
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:




On Saturday 02 December 2017 07:00 PM, Hetz Ben Hamo wrote:

HI,

I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5.

I'm trying to create a very simple 2 nodes cluster to be used
with NFS-ganesha. I've created the bricks and the volume. Here's
the output:

# gluster volume info

Volume Name: cluster-demo
Type: Replicate
Volume ID: 9c835a8e-c0ec-494c-a73b-cca9d77871c5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode1:/data/brick1/gv0
Brick2: glnode2:/data/brick1/gv0
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
cluster.enable-shared-storage: enable

Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode2:/var/lib/glusterd/ss_brick
Brick2: 192.168.0.95:/var/lib/glusterd/ss_brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable

However, when I'm trying to run gluster nfs-ganesha enable - it
creates a wrong symbolic link and failes:

# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across
the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: creation of symlink ganesha.conf in
/etc/ganesha failed

wrong link: ganesha.conf ->
/var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf

# ls -l /var/run/gluster/shared_storage/
total 0

I've seen some reports (and fixed) in Red Hat's Bugzilla and
looked at the Red Hat solutions
(https://access.redhat.com/solutions/3099581
<https://access.redhat.com/solutions/3099581>) but this doesn't help.

Suggestions?

Hi,

It seems you have not created directory nfs-ganesha under shared
storage and plus copy/create ganesha.conf/ganesha-ha.conf inside
Please follow this document

http://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/

<http://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/>

Regards,
Jiffin






I tried to upgrade to Gluster 3.12 and it seems Ganesha support
was kicked out? whats replacing it?



___
Gluster-users mailing list
Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
http://lists.gluster.org/mailman/listinfo/gluster-users
<http://lists.gluster.org/mailman/listinfo/gluster-users>





___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] What’s the purpose of /var/lib/glusterd/nfs/secret.pem.pub ?

2017-12-04 Thread Jiffin Tony Thottan



On Friday 01 December 2017 03:04 AM, Adam Ru wrote:

Some time ago I read and followed this quide for installing and
configuring Gluster:
http://blog.gluster.org/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/

with steps to create certificate:

/var/lib/glusterd/nfs/secret.pem
/var/lib/glusterd/nfs/secret.pem.pub

and distribute public and private cert file among nodes.

I’ve just tried new Gluster 3.12 and I forgot to create the
certificate and I created new cluster and it worked:

sudo gluster peer probe SecondNode
peer probe: success.
sudo gluster peer probe ThirdNode
peer probe: success.

After I mounted Gluster volumes everything seems to work and Gluster
replicates files.
So why do I need the certificate?


Hi,

If you are not using nfs-ganesha, then those files are not required.
Those files are used for internal communication (passwordless ssh between
the nodes) when setting up or modifying the nfs-ganesha HA;

they have nothing to do with gluster itself.
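
If you do set up the ganesha HA later, those keys are created roughly like
this (a sketch assuming the default path from that guide; run on one node
and distribute to the other nodes):

ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''
ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@<other-node>
scp /var/lib/glusterd/nfs/secret.pem* root@<other-node>:/var/lib/glusterd/nfs/
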
Regards,
Jiffin



Thank you.

Kind regards,
Adam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster and nfs-ganesha

2017-12-04 Thread Jiffin Tony Thottan



On Saturday 02 December 2017 07:00 PM, Hetz Ben Hamo wrote:

HI,

I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5.

I'm trying to create a very simple 2 nodes cluster to be used with 
NFS-ganesha. I've created the bricks and the volume. Here's the output:


# gluster volume info

Volume Name: cluster-demo
Type: Replicate
Volume ID: 9c835a8e-c0ec-494c-a73b-cca9d77871c5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode1:/data/brick1/gv0
Brick2: glnode2:/data/brick1/gv0
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
cluster.enable-shared-storage: enable

Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode2:/var/lib/glusterd/ss_brick
Brick2: 192.168.0.95:/var/lib/glusterd/ss_brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable

However, when I'm trying to run gluster nfs-ganesha enable - it 
creates a wrong symbolic link and failes:


# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the 
trusted pool. Do you still want to continue?

 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: creation of symlink ganesha.conf in /etc/ganesha 
failed


wrong link: ganesha.conf -> 
/var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf


# ls -l /var/run/gluster/shared_storage/
total 0

I've seen some reports (and fixed) in Red Hat's Bugzilla and looked at 
the Red Hat solutions (https://access.redhat.com/solutions/3099581) 
but this doesn't help.


Suggestions?

Hi,

It seems you have not created the nfs-ganesha directory under the shared
storage, nor copied/created ganesha.conf and ganesha-ha.conf inside it.
Please follow this document:
http://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
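
Roughly, that means something like the following (a sketch only, assuming
the default shared-storage mount point and that your ganesha.conf and
ganesha-ha.conf already exist under /etc/ganesha):

mkdir /var/run/gluster/shared_storage/nfs-ganesha
cp /etc/ganesha/ganesha.conf /etc/ganesha/ganesha-ha.conf \
   /var/run/gluster/shared_storage/nfs-ganesha/
gluster nfs-ganesha enable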


Regards,
Jiffin






I tried to upgrade to Gluster 3.12 and it seems Ganesha support was 
kicked out? whats replacing it?




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)

2017-10-13 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.2 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issues, which are reported in the
release notes:


1.) Expanding a gluster volume that is sharded may cause file corruption

    Sharded volumes are typically used for VM images; if such volumes are
    expanded or possibly contracted (i.e. add/remove bricks and rebalance),
    there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with this
    release. As further testing is still in progress, the issue is retained
    as a major issue.

    Status of this bug can be tracked here, #1465123

2.) Gluster volume restarts fail if the sub-directory export feature is in
    use. Status of this issue can be tracked here, #1501315

3.) Mounting a gluster snapshot will fail when attempting a FUSE based mount
    of the snapshot. For now it is recommended to only access snapshots via
    the ".snaps" directory on a mounted gluster volume. Status of this issue
    can be tracked here, #1501378


Thanks,
 Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.2/ 


[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs

[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.2/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-Maintainers] Release 3.12.2 : Scheduled for the 10th of October

2017-10-12 Thread Jiffin Tony Thottan



On 12/10/17 16:05, Amar Tumballi wrote:



On Thu, Oct 12, 2017 at 3:43 PM, Mohammed Rafi K C 
<rkavu...@redhat.com <mailto:rkavu...@redhat.com>> wrote:


Hi Jiffin/Shyam,


Snapshot volume has been broken in 3.12 . We just got the bug, I have
send a patch [1]  . Let me know your thought.



Similar with subdir mount's authentication. [2]

[2] : https://review.gluster.org/#/c/18489/

[1] : https://review.gluster.org/18506
<https://review.gluster.org/18506>


Hi,

Both issues look like regressions. The patch for [2] got merged on master,
but [1] is still pending.

@Rafi : Can you get the reviews done ASAP and merge it on master?
I hope both can make it into 3.12 before the deadline. If not,
please let me know.


Thanks,
Jiffin



On 10/12/2017 12:32 PM, Jiffin Tony Thottan wrote:
> Hi,
>
> I am planning to do 3.12.2 release today 11:00 pm IST (5:30 pm GMT).
>
> Following bugs is removed from tracker list
>
> Bug 1493422 - AFR : [RFE] Improvements needed in "gluster volume
heal
> info" commands -- feature request will be target for 3.13
>
> Bug 1497989 - Gluster 3.12.1 Packages require manual systemctl
daemon
> reload after install -- "-1" from Kaleb, no progress from Oct 4th,
>
> will be tracked as part of 3.12.3
>
    > Regards,
>
> Jiffin
>
>
>
>
> On 06/10/17 12:36, Jiffin Tony Thottan wrote:
>> Hi,
>>
>> It's time to prepare the 3.12.2 release, which falls on the 10th of
>> each month, and hence would be 10-10-2017 this time around.
>>
>> This mail is to call out the following,
>>
>> 1) Are there any pending *blocker* bugs that need to be tracked for
>> 3.12.2? If so mark them against the provided tracker [1] as
blockers
>> for the release, or at the very least post them as a response
to this
>> mail
>>
>> 2) Pending reviews in the 3.12 dashboard will be part of the
release,
>> *iff* they pass regressions and have the review votes, so use the
>> dashboard [2] to check on the status of your patches to 3.12
and get
>> these going
>>
>> 3) I have made checks on what went into 3.10 post 3.12 release
and if
>> these fixes are already included in 3.12 branch, then status on
this
>> is *green*
>> as all fixes ported to 3.10, are ported to 3.12 as well.
>>
>> Thanks,
>> Jiffin
>>
>> [1] Release bug tracker:
>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.2
<https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.2>
>>
>> [2] 3.10 review dashboard:
>>

https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard

<https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard>
>>
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
> http://lists.gluster.org/mailman/listinfo/gluster-users
<http://lists.gluster.org/mailman/listinfo/gluster-users>

___
maintainers mailing list
maintain...@gluster.org <mailto:maintain...@gluster.org>
http://lists.gluster.org/mailman/listinfo/maintainers
<http://lists.gluster.org/mailman/listinfo/maintainers>




--
Amar Tumballi (amarts)


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Release 3.12.2 : Scheduled for the 10th of October

2017-10-12 Thread Jiffin Tony Thottan

Hi,

I am planning to do 3.12.2 release today 11:00 pm IST (5:30 pm GMT).

The following bugs are removed from the tracker list:

Bug 1493422 - AFR : [RFE] Improvements needed in "gluster volume heal 
info" commands -- feature request will be target for 3.13


Bug 1497989 - Gluster 3.12.1 Packages require manual systemctl daemon 
reload after install -- "-1" from Kaleb, no progress from Oct 4th,


will be tracked as part of 3.12.3

Regards,

Jiffin




On 06/10/17 12:36, Jiffin Tony Thottan wrote:

Hi,

It's time to prepare the 3.12.2 release, which falls on the 10th of
each month, and hence would be 10-10-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.2? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 after the 3.12 release; these fixes
are already included in the 3.12 branch, so the status on this is *green*,

as all fixes ported to 3.10 are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.2

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 





___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 3.12.2 : Scheduled for the 10th of October

2017-10-06 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.2 release, which falls on the 10th of
each month, and hence would be 10-10-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.2? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 after the 3.12 release; these fixes
are already included in the 3.12 branch, so the status on this is *green*,

as all fixes ported to 3.10 are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.2

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Glusterfs release 3.12.1 (Long Term Maintenance)

2017-09-14 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.1 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry a major issue, which is reported in the release notes as
follows,


- Expanding a gluster volume that is sharded may cause file corruption

Sharded volumes are typically used for VM images; if such volumes
are expanded or possibly contracted (i.e. add/remove bricks and
rebalance), there are reports of VM images getting corrupted.


The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.


Status of this bug can be tracked here, #1465123


Thanks,
Gluster community

[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.1/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs

[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.1/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] no ganesha.so in 3.10.5 ?

2017-08-30 Thread Jiffin Tony Thottan



On 29/08/17 18:41, lejeczek wrote:

hi

I see:
..
[2017-08-29 12:53:41.708756] W [MSGID: 101095] 
[xlator.c:162:xlator_volopt_dynload] 0-xlator: 
/usr/lib64/glusterfs/3.10.5/xlator/features/ganesha.so: cannot open 
shared object file: No such file or directory

..

and I wonder.. because nothing provides that lib(in terms of rpm 
packages @centos) for 3.10.5 version.





This is a spurious message, no need to worry.
https://review.gluster.org/#/c/18147/ should fix it.


Regards,
Jiffin


L.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Bug 1374166 or similar

2017-07-18 Thread Jiffin Tony Thottan



On 16/07/17 20:11, Bernhard Dübi wrote:

Hi,

both Gluster servers were rebooted and now the unlink directory is clean.


The following should have happened: when a delete operation is performed,
gluster keeps the file in the .unlink directory if it still has an open fd.
In this case, since a lazy umount was performed, the ganesha server may
still have kept the fds opened by that client, so gluster keeps the entries
in the unlink directory even though the files were removed from the fuse
mount.
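
If it happens again, you can confirm this before restarting anything; a
rough sketch (the brick path is whatever your brick uses, and the process
name assumes the usual ganesha.nfsd daemon):

ls $brick/.glusterfs/unlink                              # entries kept while an fd is open
ls -l /proc/$(pgrep -x ganesha.nfsd)/fd | grep deleted   # fds still held by ganesha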

--
Jiffin


Best Regards
Bernhard

2017-07-14 12:43 GMT+02:00 Bernhard Dübi <1linuxengin...@gmail.com>:

Hi,

yes, I mounted the Gluster volume and deleted the files from the
volume not the brick

mount -t glusterfs hostname:volname /mnt
cd /mnt/some/directory
rm -rf *

restart of nfs-ganesha is planned for tomorrow. I'll keep you posted
BTW: nfs-ganesha is running on a separate server in standalone configuration

Best Regards
Bernhard

2017-07-14 10:43 GMT+02:00 Jiffin Tony Thottan <jthot...@redhat.com>:


On 14/07/17 13:06, Bernhard Dübi wrote:

Hello everybody,

I'm in a similar situation as described in
https://bugzilla.redhat.com/show_bug.cgi?id=1374166


The issue got fixed by https://review.gluster.org/#/c/14820 and is already
available in 3.8 branch


I have a gluster volume exported through ganesha. we had some problems
on the gluster server and the NFS mount on the client was hanging.
I did a lazy umount of the NFS mount on the client, then went to the
Gluster server, mounted the Gluster volume and deleted a bunch of
files.
When I mounted the volume again on the client I noticed that the space
was not freed. Now I find them in $brick/.glusterfs/unlink

Here you have mounted the volume via glusterfs fuse mount and deleted those
files
right(not directly from the bricks)?
Can you restart nfs-ganesha server and see what happens ?
What type of volume are you using?
--
Jiffin


OS: Ubuntu 16.04
Gluster: 3.8.13
Ganesha: 2.4.5

Let me know if you need more info

Best Regards
Bernhard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Bug 1374166 or similar

2017-07-14 Thread Jiffin Tony Thottan



On 14/07/17 13:06, Bernhard Dübi wrote:

Hello everybody,

I'm in a similar situation as described in
https://bugzilla.redhat.com/show_bug.cgi?id=1374166


The issue got fixed by https://review.gluster.org/#/c/14820 and is 
already available in 3.8 branch




I have a gluster volume exported through ganesha. we had some problems
on the gluster server and the NFS mount on the client was hanging.
I did a lazy umount of the NFS mount on the client, then went to the
Gluster server, mounted the Gluster volume and deleted a bunch of
files.
When I mounted the volume again on the client I noticed that the space
was not freed. Now I find them in $brick/.glusterfs/unlink
Here you have mounted the volume via a glusterfs fuse mount and deleted
those files, right (not directly from the bricks)?
Can you restart nfs-ganesha server and see what happens ?
What type of volume are you using?
--
Jiffin


OS: Ubuntu 16.04
Gluster: 3.8.13
Ganesha: 2.4.5

Let me know if you need more info

Best Regards
Bernhard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Reasons for recommending nfs-ganesha

2017-05-21 Thread Jiffin Tony Thottan

Hi,


On 19/05/17 18:27, te-yamau...@usen.co.jp wrote:

I currently use version 3.10.2.
When nfs is enabled, the following warning is displayed.
Why is nfs-ganesha recommended?
Is there something wrong with gluster nfs?

Gluster NFS is being deprecated in favor of NFS-Ganesha Enter "yes" to continue 
using Gluster NFS (y/n)


The main reason behind the above warning message is that currently most of
the development focus happens in NFS-Ganesha rather than Gluster NFS (which
only receives bug fixes).
The following are the major plus points of NFS-Ganesha:
1.) NFS-Ganesha is a separate community project which supports a lot of other
filesystems like CEPH (CephFS / RGW), GPFS and Lustre
2.) It supports different NFS protocols including v3, v4, v4.1 and pNFS,
whereas Gluster NFS supports only v3
3.) It can do dynamic addition/modification of exports (shares), whereas
Gluster NFS requires a server restart each time (see the illustrative D-Bus
call below)
4.) It has an integrated HA solution using pacemaker & corosync for
gluster volumes
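
For point 3, a rough sketch of such a dynamic export addition through the
NFS-Ganesha D-Bus interface (the config file path and volume name are
made-up examples):

dbus-send --system --print-reply --dest=org.ganesha.nfsd \
    /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport \
    string:/etc/ganesha/exports/export.myvol.conf string:"EXPORT(Path=/myvol)"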


--
Jiffin



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ganesha.nfsd: `NTIRPC_1.4.3' not found

2017-05-21 Thread Jiffin Tony Thottan

Forwarding mail to the ganesha list.

Adding Kaleb as well, who usually builds the nfs-ganesha packages.


On 21/05/17 07:52, W Kern wrote:
I got bit by that during a maintenance session on a production NFS 
server.  I upgraded and got the same message.


libntirpc 1.4.4 is a security upgrade due to a DOS possibility with 
1.4.3 or earlier


but the nfs-ganesha package is still looking for 1.4.3

Unfortunately the maintainers removed the older libntirpc 1.4.3 
package but didn't update the nfs-ganesha deb to accept 1.4.4


I was in a hurry so I ended up digging up an older 1.4.3 Trusty deb 
package (I'm on Xenial) and installed that manually.


That seemed to work fine. NFS-Ganesha sees 1.4.3 and is fine with it.

When the nfs-ganasha package is fixed, I'll put back in the proper 
1.4.4 package


-wk


On 5/20/2017 1:33 AM, Bernhard Dübi wrote:

Hi,

is this list also dealing with nfs-ganesha problems?

I just ran a dist-upgrade on my Ubuntu 16.04 machine and now
nfs-ganesha doesn't start anymore

May 20 10:00:15 chastcvtprd03 bash[5720]: /usr/bin/ganesha.nfsd:
/lib/x86_64-linux-gnu/libntirpc.so.1.4: version `NTIRPC_1.4.3' not
found (required by /usr/bin/ganesha.nfsd)

Any hints?


Here some info about my system:

# uname -a
Linux hostname 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC
2017 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.2 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.2 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial


/etc/apt/sources.list.d# head *.list
==> gluster-ubuntu-glusterfs-3_8-xenial.list <==
deb http://ppa.launchpad.net/gluster/glusterfs-3.8/ubuntu xenial main
# deb-src http://ppa.launchpad.net/gluster/glusterfs-3.8/ubuntu 
xenial main


==> gluster-ubuntu-libntirpc-xenial.list <==
deb http://ppa.launchpad.net/gluster/libntirpc/ubuntu xenial main
# deb-src http://ppa.launchpad.net/gluster/libntirpc/ubuntu xenial main

==> gluster-ubuntu-nfs-ganesha-xenial.list <==
deb http://ppa.launchpad.net/gluster/nfs-ganesha/ubuntu xenial main
# deb-src http://ppa.launchpad.net/gluster/nfs-ganesha/ubuntu xenial 
main



# dpkg -l | grep -E 'gluster|ganesha|libntirpc'
ii  glusterfs-common        3.8.12-ubuntu1~xenial1  amd64  GlusterFS common libraries and translator modules
ii  libntirpc1:amd64        1.4.4-ubuntu1~xenial1   amd64  new transport-independent RPC library
ii  nfs-ganesha             2.4.5-ubuntu1~xenial1   amd64  nfs-ganesha is a NFS server in User Space
ii  nfs-ganesha-fsal:amd64  2.4.5-ubuntu1~xenial1   amd64  nfs-ganesha fsal libraries


Best Regards
Bernhard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

---
This email has been checked for viruses by AVG.
http://www.avg.com



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ganesha.nfsd: `NTIRPC_1.4.3' not found

2017-05-20 Thread Jiffin Tony Thottan



On 20/05/17 14:03, Bernhard Dübi wrote:

Hi,

is this list also dealing with nfs-ganesha problems?

I just ran a dist-upgrade on my Ubuntu 16.04 machine and now
nfs-ganesha doesn't start anymore

May 20 10:00:15 chastcvtprd03 bash[5720]: /usr/bin/ganesha.nfsd:
/lib/x86_64-linux-gnu/libntirpc.so.1.4: version `NTIRPC_1.4.3' not
found (required by /usr/bin/ganesha.nfsd)


It looks like the ganesha process is trying to use the old libntirpc (1.4.3)
version rather than the new one, libntirpc (1.4.4), even after the upgrade.
If possible, can you try to reinstall the libntirpc package and check
whether it works?
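
A rough sketch for Ubuntu/Xenial (the package name is taken from the dpkg
output below):

apt-get install --reinstall libntirpc1
ldconfig -p | grep libntirpc    # check which libntirpc the loader actually sees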


--
Jiffin



Any hints?


Here some info about my system:

# uname -a
Linux hostname 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC
2017 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.2 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.2 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial


/etc/apt/sources.list.d# head *.list
==> gluster-ubuntu-glusterfs-3_8-xenial.list <==
deb http://ppa.launchpad.net/gluster/glusterfs-3.8/ubuntu xenial main
# deb-src http://ppa.launchpad.net/gluster/glusterfs-3.8/ubuntu xenial main

==> gluster-ubuntu-libntirpc-xenial.list <==
deb http://ppa.launchpad.net/gluster/libntirpc/ubuntu xenial main
# deb-src http://ppa.launchpad.net/gluster/libntirpc/ubuntu xenial main

==> gluster-ubuntu-nfs-ganesha-xenial.list <==
deb http://ppa.launchpad.net/gluster/nfs-ganesha/ubuntu xenial main
# deb-src http://ppa.launchpad.net/gluster/nfs-ganesha/ubuntu xenial main


# dpkg -l | grep -E 'gluster|ganesha|libntirpc'
ii  glusterfs-common        3.8.12-ubuntu1~xenial1  amd64  GlusterFS common libraries and translator modules
ii  libntirpc1:amd64        1.4.4-ubuntu1~xenial1   amd64  new transport-independent RPC library
ii  nfs-ganesha             2.4.5-ubuntu1~xenial1   amd64  nfs-ganesha is a NFS server in User Space
ii  nfs-ganesha-fsal:amd64  2.4.5-ubuntu1~xenial1   amd64  nfs-ganesha fsal libraries


Best Regards
Bernhard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] bootstrapping cluster "failure" condition fix for local mounts (like: "gluster volume set all cluster.enable-shared-storage enable")

2017-05-12 Thread Jiffin Tony Thottan



On 09/05/17 19:18, hvjunk wrote:


On 03 May 2017, at 07:49 , Jiffin Tony Thottan <jthot...@redhat.com 
<mailto:jthot...@redhat.com>> wrote:




On 02/05/17 15:27, hvjunk wrote:

Good day,

I’m busy setting up/testing NFS-HA with GlusterFS storage across VMs 
running Debian 8. GlusterFS volume to be "replica 3 arbiter 1"


In the NFS-ganesha information I’ve gleamed thus far, it mentions 
the "gluster volume set all cluster.enable-shared-storage enable”.


My first question is this: is that shared volume that gets 
created/setup, suppose to be resilient across reboots?
 It appears to not be the case in my test setup thus far, that that 
mount doesn’t get recreated/remounted after a reboot.


Following is the script which creates shared storage and mount it in 
the node, plus an entry will be added to /etc/fstab

https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh

But there is a possibility that, if glusterd (I hope you have enabled the
glusterd service) is not started before systemd tries to mount the shared
storage, then the mount will fail.





Thanks for the systemd helper script
--
Jiffin


Thank Jiffin,

 I since found that (1) you need to wait a bit  for the cluster to 
“settle” with that script having executed, before you reboot the 
cluster (As you might see in my bitbucket ansible scripts in 
https://bitbucket.org/dismyne/gluster-ansibles/src ) … something to 
add in the manuals perhaps to warn people to wait for that script to 
finish before rebooting node/vm/server(s)?


 (2) the default configuration, can’t bootstrap the 
/gluster_shared_storage volume/directory reliably from a clean 
shutdown-reboot of the whole cluster!!!


The problem: SystemD and it’s wanting to have the control over 
/etc/fstab and the mounting, and and and…. (and I’ll not empty my mind 
about L.P. based on his remarks in: 
https://github.com/systemd/systemd/issues/4468#issuecomment-255711912 after 
my struggling with this issue)



To have a reliably bootstrapped (from all nodes down booting up) I'm 
using the following SystemD service and helper script(s) to have the 
gluster cluster node mount their local mounts (like 
/gluster_shared_storage) reliably:


https://bitbucket.org/dismyne/gluster-ansibles/src/24b62dcc858364ee3744d351993de0e8e35c2680/ansible/files/glusterfsmounts.service-centos?at=master


https://bitbucket.org/dismyne/gluster-ansibles/src/24b62dcc858364ee3744d351993de0e8e35c2680/ansible/files/test-mounts.sh?at=master







--
Jiffin


If the mount is not resilient, ie. not recreated/mounted by 
glusterfs and neither added to the /etc/fstab by glusterfs, why the 
initial auto mount by glusterfs and not afterwards with a reboot?


The biggest “issue” I have found with glusterfs is the interaction 
with SystemD and mounts that fails and don’t get properly retried 
later (Will email separately on that issue) during bootstrapping of 
the cluster, and that is why I need to confirm the reasoning/etc. on 
this initial auto-mounting, but then the need to manually add it 
into the /etc/fstab


Thank you
Hendrik

___
Gluster-users mailing list
Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
http://lists.gluster.org/mailman/listinfo/gluster-users






___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] postgresql is unable to create a table in gluster volume

2017-05-04 Thread Jiffin Tony Thottan



On 04/05/17 02:03, Praveen George wrote:

Hi Team,

We’ve been intermittently seeing issues where postgresql is unable to 
create a table, or some info is missing.


Postgresql logs the following error:

ERROR:  unexpected data beyond EOF in block 53 of relation 
base/16384/12009
HINT:  This has been seen to occur with buggy kernels; consider 
updating your system.


We are using the k8s PV/PVC to bind the volumes to the containers and 
using the gluster plugin to mount the volumes on the worker nodes and 
take it into the containers.


The issue occurs regardless of whether the  k8s spec specifies 
mounting of the pv using the pv provider or mount the gluster volume 
directly.


Just to check if the issue is with the glusterfs client, we mount the 
volume using NFS (NFS on the client talking to gluster on the master), 
the issue doesn’t occur. However, with the NFS client talking directly 
to _one_ of the gluster masters; this means that if that master fails, 
it will not failover to the other gluster master - we thus lose 
gluster HA if we go this route.




If you are interested, there are HA solutions available for NFS. It depends
on which NFS solution you are trying: if it is Gluster NFS (the NFS server
integrated with gluster), then use CTDB; for NFS-Ganesha we already have an
integrated solution with pacemaker/corosync, enabled roughly as sketched below.
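
A minimal sketch of enabling the integrated NFS-Ganesha HA (this assumes a
ganesha-ha.conf, as described in the NFS-Ganesha integration guide, is
already in place on all nodes):

gluster volume set all cluster.enable-shared-storage enable
gluster nfs-ganesha enable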


Please update your gluster version since it is EOLed; you won't receive any
more updates for that version.


--

Jiffin

Anyone faced this issue, is there any fix already available for the 
same. Gluster version is 3.7.20 and k8s is 1.5.2.


Thanks
Praveen


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] "gluster volume set all cluster.enable-shared-storage enable"

2017-05-02 Thread Jiffin Tony Thottan



On 02/05/17 15:27, hvjunk wrote:

Good day,

I’m busy setting up/testing NFS-HA with GlusterFS storage across VMs running Debian 8. 
GlusterFS volume to be "replica 3 arbiter 1"

In the NFS-ganesha information I’ve gleamed thus far, it mentions the "gluster 
volume set all cluster.enable-shared-storage enable”.

My first question is this: is that shared volume that gets created/setup, 
suppose to be resilient across reboots?
  It appears to not be the case in my test setup thus far, that that mount 
doesn’t get recreated/remounted after a reboot.


Following is the script which creates the shared storage and mounts it on
the node; an entry is also added to /etc/fstab:

https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh

But there is a possibility that, if glusterd (I hope you have enabled the
glusterd service) is not started before systemd tries to mount the shared
storage, then the mount will fail. One workaround is to add systemd ordering
to the fstab entry, as sketched below.
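
An illustrative fstab line only (replace <node> with one of your hostnames;
x-systemd.requires simply tells systemd to start glusterd before attempting
this mount):

<node>:/gluster_shared_storage  /run/gluster/shared_storage  glusterfs  defaults,_netdev,x-systemd.requires=glusterd.service  0 0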

--
Jiffin


If the mount is not resilient, ie. not recreated/mounted by glusterfs and 
neither added to the /etc/fstab by glusterfs, why the initial auto mount by 
glusterfs and not afterwards with a reboot?

The biggest “issue” I have found with glusterfs is the interaction with SystemD 
and mounts that fails and don’t get properly retried later (Will email 
separately on that issue) during bootstrapping of the cluster, and that is why 
I need to confirm the reasoning/etc. on this initial auto-mounting, but then 
the need to manually add it into the /etc/fstab

Thank you
Hendrik

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes of Gluster Community Bug Triage meeting

2017-03-07 Thread Jiffin Tony Thottan

Hi,

Thanks for everyone's participation


Meeting summary
---
* agenda:https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting
  (jiffin, 12:00:30)
* Roll call  (jiffin, 12:00:39)

* Next weeks meeting host  (jiffin, 12:06:15)
  * ACTION: hgowtham will host on March  7th  (jiffin, 12:07:21)
  * ACTION: ndevos need to decide on how to provide/use debug builds
(jiffin, 12:08:15)
  * ACTION: jiffin  needs to send the changes to check-bugs.py (jiffin,
12:08:22)

* Group Triage  (jiffin, 12:08:28)
  * you can find the bugs to triage here in
http://bit.ly/gluster-bugs-to-triage  (jiffin, 12:08:34)
  *
https://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/
(jiffin, 12:08:40)

* Open Floor  (jiffin, 12:19:27)

Meeting ended at 12:20:19 UTC.




Action Items

* hgowtham will host on March  7th
* ndevos need to decide on how to provide/use debug builds
* jiffin  needs to send the changes to check-bugs.py




Action Items, by person
---
* hgowtham
  * hgowtham will host on March  7th
* jiffin
  * jiffin  needs to send the changes to check-bugs.py
* **UNASSIGNED**
  * ndevos need to decide on how to provide/use debug builds




People Present (lines said)
---
* jiffin (25)
* hgowtham (5)
* zodbot (3)
* skoduri (2)

See everyone at same time on March 7th 2017

Regards,

Jiffin

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Ganesha with Gluster transport RDMA does not work

2017-01-10 Thread Jiffin Tony Thottan



On 09/01/17 16:27, Andreas Kurzac wrote:


Hi Jiffin,

i raised bug 1411281.

If you could provide test-rpms i would be very happy to test them in 
our environment.




Sorry if I missed it in the previous mail: which nfs-ganesha version (2.3 or
2.4) are you using?

--
Jiffin

In the meantime i will switch to tcp,rdma and continue working on our 
setup, we can then switch back to pure rdma any time for testing.


Thanks for your help!

Regards,

Andreas

*Von:*Jiffin Tony Thottan [mailto:jthot...@redhat.com]
*Gesendet:* Montag, 9. Januar 2017 06:02
*An:* Andreas Kurzac <akur...@kinetik.de>; gluster-users@gluster.org
*Betreff:* Re: [Gluster-users] Ganesha with Gluster transport RDMA 
does not work


Hi Andreas,

By checking the code IMO currently this is limitation with in 
FSAL_GLUSTER. It tries to


establish connection with glusterfs servers only using "tcp". It is 
easy to fix as well.


You can raise a bug in 
https://bugzilla.redhat.com/enter_bug.cgi?product=nfs-ganesha


under FSAL_GLUSTER. I don't have any hardware to test the fix. I can 
either help you in


writing up fix for the issue or provide a test rpms with the fix .

Also thanks for trying out nfs-ganesha  with rdma and finding about 
this issue.


For the time being , if possible you can try with tcp,rdma volume to 
solve the problem.


Regards,

Jiffin

On 06/01/17 22:56, Andreas Kurzac wrote:

Dear All,

i have a glusterfs pool with 3 servers with Centos7.3, Glusterfs
3.8.5, network is Infiniband.

Pacemaker/Corosync and Ganesha-NFS is installed and all seems to
be OK, no error logged.

I created a replica 3 volume with transport rdma (without tcp!).

When i mount this volume via glusterfs and do some IO, no errors
are logged and everything seems to go pretty well.

When i mount the volume via nfs and do some IO, nfs freezes
immediatly and following logs are written to

ganesha-gfapi.log:

2017-01-05 23:23:53.536526] W [MSGID: 103004]
[rdma.c:452:gf_rdma_register_arena] 0-rdma: allocation of mr failed

[2017-01-05 23:23:53.541519] W [MSGID: 103004]
[rdma.c:1463:__gf_rdma_create_read_chunks_from_vector]
0-rpc-transport/rdma: memory registration failed
(peer:10.40.1.1:49152) [Keine Berechtigung]

[2017-01-05 23:23:53.541547] W [MSGID: 103029]
[rdma.c:1558:__gf_rdma_create_read_chunks] 0-rpc-transport/rdma:
cannot create read chunks from vector entry->prog_payload

[2017-01-05 23:23:53.541553] W [MSGID: 103033]
[rdma.c:2063:__gf_rdma_ioq_churn_request] 0-rpc-transport/rdma:
creation of read chunks failed

[2017-01-05 23:23:53.541557] W [MSGID: 103040]
[rdma.c:2775:__gf_rdma_ioq_churn_entry] 0-rpc-transport/rdma:
failed to process request ioq entry to peer(10.40.1.1:49152)

[2017-01-05 23:23:53.541562] W [MSGID: 103040]
[rdma.c:2859:gf_rdma_writev] 0-vmstor1-client-0: processing ioq
entry destined to (10.40.1.1:49152) failed

[2017-01-05 23:23:53.541569] W [MSGID: 103037]
[rdma.c:3016:gf_rdma_submit_request] 0-rpc-transport/rdma: sending
request to peer (10.40.1.1:49152) failed

[…]

Some additional info:

Firewall is disabled, SELinux is disabled.

Different hardware with Centos 7.1 and the Mellanox OFED 3.4
packages instead of the Centos Infiniband packages lead to the
same results.

Just to mention: I am not trying to do NFS over RDMA, the Ganesha
FSAL is just configured to "glusterfs".

I hope someone could help me, i am running out of ideas…

Kind regards,

Andreas




___

Gluster-users mailing list

Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>

http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Ganesha with Gluster transport RDMA does not work

2017-01-08 Thread Jiffin Tony Thottan

Hi Andreas,

By checking the code, IMO this is currently a limitation within FSAL_GLUSTER:
it tries to establish the connection to the glusterfs servers using "tcp"
only. It is easy to fix as well.


You can raise a bug at
https://bugzilla.redhat.com/enter_bug.cgi?product=nfs-ganesha
under FSAL_GLUSTER. I don't have any hardware to test the fix, so I can
either help you in writing up a fix for the issue or provide test rpms with
the fix.

Also, thanks for trying out nfs-ganesha with rdma and finding this issue.


For the time being, if possible, you can try a tcp,rdma volume to work
around the problem, along the lines sketched below.
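
Illustrative only; the volume and brick names are examples, and changing the
transport of an existing volume requires it to be stopped first:

# create a new volume with both transports
gluster volume create vmstor1 replica 3 transport tcp,rdma \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1

# or switch an existing volume while it is stopped
gluster volume stop vmstor1
gluster volume set vmstor1 config.transport tcp,rdma
gluster volume start vmstor1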


Regards,

Jiffin


On 06/01/17 22:56, Andreas Kurzac wrote:


Dear All,

i have a glusterfs pool with 3 servers with Centos7.3, Glusterfs 
3.8.5, network is Infiniband.


Pacemaker/Corosync and Ganesha-NFS is installed and all seems to be 
OK, no error logged.


I created a replica 3 volume with transport rdma (without tcp!).

When i mount this volume via glusterfs and do some IO, no errors are 
logged and everything seems to go pretty well.


When i mount the volume via nfs and do some IO, nfs freezes immediatly 
and following logs are written to


ganesha-gfapi.log:

2017-01-05 23:23:53.536526] W [MSGID: 103004] 
[rdma.c:452:gf_rdma_register_arena] 0-rdma: allocation of mr failed


[2017-01-05 23:23:53.541519] W [MSGID: 103004] 
[rdma.c:1463:__gf_rdma_create_read_chunks_from_vector] 
0-rpc-transport/rdma: memory registration failed 
(peer:10.40.1.1:49152) [Keine Berechtigung]


[2017-01-05 23:23:53.541547] W [MSGID: 103029] 
[rdma.c:1558:__gf_rdma_create_read_chunks] 0-rpc-transport/rdma: 
cannot create read chunks from vector entry->prog_payload


[2017-01-05 23:23:53.541553] W [MSGID: 103033] 
[rdma.c:2063:__gf_rdma_ioq_churn_request] 0-rpc-transport/rdma: 
creation of read chunks failed


[2017-01-05 23:23:53.541557] W [MSGID: 103040] 
[rdma.c:2775:__gf_rdma_ioq_churn_entry] 0-rpc-transport/rdma: failed 
to process request ioq entry to peer(10.40.1.1:49152)


[2017-01-05 23:23:53.541562] W [MSGID: 103040] 
[rdma.c:2859:gf_rdma_writev] 0-vmstor1-client-0: processing ioq entry 
destined to (10.40.1.1:49152) failed


[2017-01-05 23:23:53.541569] W [MSGID: 103037] 
[rdma.c:3016:gf_rdma_submit_request] 0-rpc-transport/rdma: sending 
request to peer (10.40.1.1:49152) failed


[…]

Some additional info:

Firewall is disabled, SELinux is disabled.

Different hardware with Centos 7.1 and the Mellanox OFED 3.4 packages 
instead of the Centos Infiniband packages lead to the same results.


Just to mention: I am not trying to do NFS over RDMA, the Ganesha FSAL 
is just configured to "glusterfs".


I hope someone could help me, i am running out of ideas…

Kind regards,

Andreas



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to enable shared_storage?

2016-11-20 Thread Jiffin Tony Thottan



On 21/11/16 11:13, Alexandr Porunov wrote:

Version of glusterfs is 3.8.5

Here what I have installed:
rpm  -ivh 
http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm

yum install centos-release-gluster
yum install glusterfs-server


It should be part of glusterfs-server. So can you check the files provided
by that package, e.g. run: rpm -qil glusterfs-server



yum install glusterfs-geo-replication

Unfortunately it doesn't work if I just add the script 
"/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh" 
and restart "glusterd".




I didn't get that; when you rerun "gluster v set all
cluster.enable-shared-storage enable" it should work (I guess even a
glusterd restart is not required).
Or do you have a volume named "gluster_shared_storage"? If yes, please
remove it and rerun the CLI (see the illustrative commands below).
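
Illustrative only; do this only if the existing gluster_shared_storage
volume holds nothing you need:

gluster volume stop gluster_shared_storage
gluster volume delete gluster_shared_storage
gluster volume set all cluster.enable-shared-storage enable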


--
Jiffin


It seems that I have to install something else..

Sincerely,
Alexandr



On Mon, Nov 21, 2016 at 6:58 AM, Jiffin Tony Thottan 
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:



On 21/11/16 01:07, Alexandr Porunov wrote:

I have installed it from rpm. No that file isn't there. The
folder "/var/lib/glusterd/hooks/1/set/post/" is empty..



which gluster version and what all gluster rpms have u installed?
For time being just download this file[1] and copy to above
location and rerun the same cli.

[1]

https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh

<https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh>

--
Jiffin



Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 2:55 PM, Jiffin Tony Thottan
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:

Did u install rpm or directly from sources. Can u check
whether following script is present?

/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh

--

Jiffin


On 20/11/16 13:33, Alexandr Porunov wrote:

To enable shared storage I used next command:
# gluster volume set all cluster.enable-shared-storage enable

But it seems that it doesn't create gluster_shared_storage
automatically.

# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist

Do I need to manually create a volume
"gluster_shared_storage"? Do I need to manually create a
folder "/var/run/gluster/shared_storage"? Do I need to
manually mount it? Or something I don't need to do?

If I use 6 cluster nodes and I need to have a shared storage
on all of them then how to create a shared storage?
It says that it have to be with replication 2 or replication
3. But if we use shared storage on all of 6 nodes then we
have only 2 ways to create a volume:
1. Use replication 6
2. Use replication 3 with distribution.

Which way I need to use?

    Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:



On 19/11/16 21:47, Alexandr Porunov wrote:

Unfortunately I haven't this log file but I have
'run-gluster-shared_storage.log' and it has errors I
don't know why.

Here is the content of the
'run-gluster-shared_storage.log':



Make sure shared storage is up and running using
"gluster volume status gluster_shared_storage"

May be the issue is related to firewalld or iptables.
Try it after disabling them.

--

Jiffin

[2016-11-19 10:37:01.581737] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started
running /usr/sbin/glusterfs version 3.8.5 (args:
/usr/sbin/glusterfs --volfile-server=127.0.0.1
--volfile-id=gluster_shared_storage
/run/gluster/shared_storage)
[2016-11-19 10:37:01.641836] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker]
0-epoll: Started thread with index 1
[2016-11-19 10:37:01.642311] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs:
failed to get the 'volume file' from server
[2016-11-19 10:37:01.642340] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt:
failed to fetch volume file (key:gluster_shared_storage)
[2016-11-19 10:37:01.642592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f95cd309770]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f95cda3afc6]

Re: [Gluster-users] How to enable shared_storage?

2016-11-20 Thread Jiffin Tony Thottan


On 21/11/16 01:07, Alexandr Porunov wrote:
I have installed it from rpm. No that file isn't there. The folder 
"/var/lib/glusterd/hooks/1/set/post/" is empty..




Which gluster version and which gluster RPMs have you installed?
For the time being, just download this file [1], copy it to the above location
and rerun the same CLI.


[1] 
https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh


--
Jiffin


Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 2:55 PM, Jiffin Tony Thottan 
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:


Did you install from RPM or directly from sources? Can you check whether
the following script is present?

/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh

--

Jiffin


On 20/11/16 13:33, Alexandr Porunov wrote:

To enable shared storage I used next command:
# gluster volume set all cluster.enable-shared-storage enable

But it seems that it doesn't create gluster_shared_storage
automatically.

# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist

Do I need to manually create a volume "gluster_shared_storage"?
Do I need to manually create a folder
"/var/run/gluster/shared_storage"? Do I need to manually mount
it? Or something I don't need to do?

If I use 6 cluster nodes and I need to have a shared storage on
all of them then how to create a shared storage?
It says that it have to be with replication 2 or replication 3.
But if we use shared storage on all of 6 nodes then we have only
2 ways to create a volume:
1. Use replication 6
2. Use replication 3 with distribution.

Which way I need to use?

Sincerely,
Alexandr

    On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:



On 19/11/16 21:47, Alexandr Porunov wrote:

Unfortunately I haven't this log file but I have
'run-gluster-shared_storage.log' and it has errors I don't
know why.

Here is the content of the 'run-gluster-shared_storage.log':



Make sure shared storage is up and running using "gluster
volume status gluster_shared_storage"

Maybe the issue is related to firewalld or iptables. Try it
after disabling them.

--

Jiffin

[2016-11-19 10:37:01.581737] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started
running /usr/sbin/glusterfs version 3.8.5 (args:
/usr/sbin/glusterfs --volfile-server=127.0.0.1
--volfile-id=gluster_shared_storage /run/gluster/shared_storage)
[2016-11-19 10:37:01.641836] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1
[2016-11-19 10:37:01.642311] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs:
failed to get the 'volume file' from server
[2016-11-19 10:37:01.642340] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to
fetch volume file (key:gluster_shared_storage)
[2016-11-19 10:37:01.642592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f95cd309770]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f95cda3afc6]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f95cda34b4b] ) 0-: received signum (0), shutting down
[2016-11-19 10:37:01.642638] I [fuse-bridge.c:5793:fini]
0-fuse: Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:37:18.798787] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started
running /usr/sbin/glusterfs version 3.8.5 (args:
/usr/sbin/glusterfs --volfile-server=127.0.0.1
--volfile-id=gluster_shared_storage /run/gluster/shared_storage)
[2016-11-19 10:37:18.813011] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1
[2016-11-19 10:37:18.813363] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs:
failed to get the 'volume file' from server
[2016-11-19 10:37:18.813386] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to
fetch volume file (key:gluster_shared_storage)
[2016-11-19 10:37:18.813592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f96ba4c7770]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f96babf8fc6]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f96babf2b4b] ) 0-: received signum (0), shutting down
[2016-11-19 10:37:18.813633] I [fuse-bridge.c:5793:fini]
0-fuse: Unmou

Re: [Gluster-users] How to enable shared_storage?

2016-11-20 Thread Jiffin Tony Thottan
Did you install from RPM or directly from sources? Can you check whether
the following script is present?


/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh

--

Jiffin

On 20/11/16 13:33, Alexandr Porunov wrote:

To enable shared storage I used next command:
# gluster volume set all cluster.enable-shared-storage enable

But it seems that it doesn't create gluster_shared_storage automatically.

# gluster volume status gluster_shared_storage
Volume gluster_shared_storage does not exist

Do I need to manually create a volume "gluster_shared_storage"? Do I 
need to manually create a folder "/var/run/gluster/shared_storage"? Do 
I need to manually mount it? Or something I don't need to do?


If I use 6 cluster nodes and I need to have a shared storage on all of 
them then how to create a shared storage?
It says that it have to be with replication 2 or replication 3. But if 
we use shared storage on all of 6 nodes then we have only 2 ways to 
create a volume:

1. Use replication 6
2. Use replication 3 with distribution.

Which way I need to use?

Sincerely,
Alexandr

On Sun, Nov 20, 2016 at 9:07 AM, Jiffin Tony Thottan 
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:




On 19/11/16 21:47, Alexandr Porunov wrote:

Unfortunately I haven't this log file but I have
'run-gluster-shared_storage.log' and it has errors I don't know why.

Here is the content of the 'run-gluster-shared_storage.log':



Make sure shared storage is up and running using "gluster volume
status  gluster_shared_storage"

Maybe the issue is related to firewalld or iptables. Try it after
disabling them.

--

Jiffin

[2016-11-19 10:37:01.581737] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage
/run/gluster/shared_storage)
[2016-11-19 10:37:01.641836] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2016-11-19 10:37:01.642311] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to
get the 'volume file' from server
[2016-11-19 10:37:01.642340] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch
volume file (key:gluster_shared_storage)
[2016-11-19 10:37:01.642592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f95cd309770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f95cda3afc6] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f95cda34b4b] ) 0-: received signum (0), shutting down
[2016-11-19 10:37:01.642638] I [fuse-bridge.c:5793:fini] 0-fuse:
Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:37:18.798787] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage
/run/gluster/shared_storage)
[2016-11-19 10:37:18.813011] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2016-11-19 10:37:18.813363] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to
get the 'volume file' from server
[2016-11-19 10:37:18.813386] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch
volume file (key:gluster_shared_storage)
[2016-11-19 10:37:18.813592] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f96ba4c7770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)
[0x7f96babf8fc6] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x7f96babf2b4b] ) 0-: received signum (0), shutting down
[2016-11-19 10:37:18.813633] I [fuse-bridge.c:5793:fini] 0-fuse:
Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:40:33.115685] I [MSGID: 100030]
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage
/run/gluster/shared_storage)
[2016-11-19 10:40:33.124218] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2016-11-19 10:40:33.124722] E
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to
get the 'volume file' from server
[2016-11-19 10:40:33.124738] E
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch
volume file (key:gluster_shared_storage)
[2016-11-19 10:40:33.124869] W
[glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)
[0x7f23576a9770] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536)

Re: [Gluster-users] How to enable shared_storage?

2016-11-19 Thread Jiffin Tony Thottan



On 19/11/16 21:47, Alexandr Porunov wrote:
Unfortunately I haven't this log file but I have 
'run-gluster-shared_storage.log' and it has errors I don't know why.


Here is the content of the 'run-gluster-shared_storage.log':



Make sure shared storage is up and running using "gluster volume status  
gluster_shared_storage"


Maybe the issue is related to firewalld or iptables. Try it after 
disabling them.


--

Jiffin
[2016-11-19 10:37:01.581737] I [MSGID: 100030] 
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running 
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs 
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage 
/run/gluster/shared_storage)
[2016-11-19 10:37:01.641836] I [MSGID: 101190] 
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started 
thread with index 1
[2016-11-19 10:37:01.642311] E 
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to get 
the 'volume file' from server
[2016-11-19 10:37:01.642340] E 
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch 
volume file (key:gluster_shared_storage)
[2016-11-19 10:37:01.642592] W [glusterfsd.c:1327:cleanup_and_exit] 
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f95cd309770] 
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f95cda3afc6] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f95cda34b4b] ) 0-: 
received signum (0), shutting down
[2016-11-19 10:37:01.642638] I [fuse-bridge.c:5793:fini] 0-fuse: 
Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:37:18.798787] I [MSGID: 100030] 
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running 
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs 
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage 
/run/gluster/shared_storage)
[2016-11-19 10:37:18.813011] I [MSGID: 101190] 
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started 
thread with index 1
[2016-11-19 10:37:18.813363] E 
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to get 
the 'volume file' from server
[2016-11-19 10:37:18.813386] E 
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch 
volume file (key:gluster_shared_storage)
[2016-11-19 10:37:18.813592] W [glusterfsd.c:1327:cleanup_and_exit] 
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f96ba4c7770] 
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f96babf8fc6] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f96babf2b4b] ) 0-: 
received signum (0), shutting down
[2016-11-19 10:37:18.813633] I [fuse-bridge.c:5793:fini] 0-fuse: 
Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:40:33.115685] I [MSGID: 100030] 
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running 
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs 
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage 
/run/gluster/shared_storage)
[2016-11-19 10:40:33.124218] I [MSGID: 101190] 
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started 
thread with index 1
[2016-11-19 10:40:33.124722] E 
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to get 
the 'volume file' from server
[2016-11-19 10:40:33.124738] E 
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch 
volume file (key:gluster_shared_storage)
[2016-11-19 10:40:33.124869] W [glusterfsd.c:1327:cleanup_and_exit] 
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7f23576a9770] 
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7f2357ddafc6] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f2357dd4b4b] ) 0-: 
received signum (0), shutting down
[2016-11-19 10:40:33.124896] I [fuse-bridge.c:5793:fini] 0-fuse: 
Unmounting '/run/gluster/shared_storage'.
[2016-11-19 10:44:36.029838] I [MSGID: 100030] 
[glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running 
/usr/sbin/glusterfs version 3.8.5 (args: /usr/sbin/glusterfs 
--volfile-server=127.0.0.1 --volfile-id=gluster_shared_storage 
/run/gluster/shared_storage)
[2016-11-19 10:44:36.043705] I [MSGID: 101190] 
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started 
thread with index 1
[2016-11-19 10:44:36.044082] E 
[glusterfsd-mgmt.c:1586:mgmt_getspec_cbk] 0-glusterfs: failed to get 
the 'volume file' from server
[2016-11-19 10:44:36.044106] E 
[glusterfsd-mgmt.c:1686:mgmt_getspec_cbk] 0-mgmt: failed to fetch 
volume file (key:gluster_shared_storage)
[2016-11-19 10:44:36.044302] W [glusterfsd.c:1327:cleanup_and_exit] 
(-->/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7fbd9dced770] 
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x536) [0x7fbd9e41efc6] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7fbd9e418b4b] ) 0-: 
received signum (0), shutting down
[2016-11-19 10:44:36.044356] I [fuse-bridge.c:5793:fini] 0-fuse: 
Unmounting '/run/gluster/shared_storage'.


Can you help me to figure out what I am doing wrong?

Sincerely,
Alexandr

On Sat, Nov 19, 2016 at 3:18 PM, Saravanakumar Arumugam 
> wrote:





Re: [Gluster-users] trashcan file size limit

2016-10-20 Thread Jiffin Tony Thottan



On 19/10/16 20:54, Jackie Tung wrote:

Thanks Jiffin, filed https://bugzilla.redhat.com/show_bug.cgi?id=1386766

In my limited knowledge of the original reasons for 1GB hardcode, 
either removing the limit altogether, or an additional “override" 
option parameter would be preferable in my humble opinion.





Thanks for filing the bug. A patch addressing this issue has been posted: 
http://review.gluster.org/15689

--
Jiffin

On Oct 19, 2016, at 2:02 AM, Jiffin Tony Thottan <jthot...@redhat.com 
<mailto:jthot...@redhat.com>> wrote:


Hi Jackie,


On 18/10/16 23:48, Jackie Tung wrote:

Hi all,

Documentation says: 
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Trash/


gluster volume set <volname> features.trash-max-filesize <size>

This command can be used to filter files entering the trash directory 
based on their size. Files above trash_max_filesize are 
deleted/truncated directly. The value for size may be followed by 
multiplicative suffixes as KB(=1024 bytes), MB(=1024*1024 bytes) and 
GB(=1024*1024*1024 bytes). Default size is set to 5MB. Considering 
the fact that the trash directory is consuming glusterfs volume 
space, the trash feature is implemented to function in such a way 
that it directly deletes/truncates files with size > 1GB even if 
this option is set to some value greater than 1GB.
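Below the 1GB ceiling the option itself is tunable per volume. A rough sketch (the volume name is a placeholder; the features.trash toggle and "volume get" are assumptions based on the same admin guide and newer releases):

# gluster volume set <volname> features.trash on
# gluster volume set <volname> features.trash-max-filesize 500MB
# gluster volume get <volname> features.trash-max-filesize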


Is there any workaround (short of changing source code and 
rebuilding) that can allow me to override this 1GB hard limit?  We 
store a lot of large files, and having a limit of 1GB greatly 
reduces the value of this trashcan feature for us.





I don't remember the exact reason behind having a hard-coded value for 
the upper boundary; one reason may be to limit the space used
by the trash directory, and truncate operations on large files result in a 
performance hit. The hard-coded value can be found at
xlators/features/trash/src/trash.h:32 (GF_ALLOWED_MAX_FILE_SIZE). If 
you are interested, change the value (according to
your preference) in the code and send out a patch to 
http://review.gluster.org/ as well.


As first step please file a bug https://bugzilla.redhat.com/ under 
community->glusterfs->trash xlator.


Regards,
Jiffin


Many thanks,
Jackie





The information in this email is confidential and may be legally 
privileged. It is intended solely for the addressee. Access to this 
email by anyone else is unauthorized. If you are not the intended 
recipient, any disclosure, copying, distribution or any action taken 
or omitted to be taken in reliance on it, is prohibited and may be 
unlawful.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users





The information in this email is confidential and may be legally 
privileged. It is intended solely for the addressee. Access to this 
email by anyone else is unauthorized. If you are not the intended 
recipient, any disclosure, copying, distribution or any action taken 
or omitted to be taken in reliance on it, is prohibited and may be 
unlawful.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] trashcan file size limit

2016-10-19 Thread Jiffin Tony Thottan

Hi Jackie,


On 18/10/16 23:48, Jackie Tung wrote:

Hi all,

Documentation says: 
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Trash/


gluster volume set <volname> features.trash-max-filesize <size>

This command can be used to filter files entering the trash directory 
based on their size. Files above trash_max_filesize are 
deleted/truncated directly. The value for size may be followed by 
multiplicative suffixes as KB(=1024 bytes), MB(=1024*1024 bytes) and 
GB(=1024*1024*1024 bytes). Default size is set to 5MB. Considering the 
fact that the trash directory is consuming glusterfs volume space, 
the trash feature is implemented to function in such a way that it 
directly deletes/truncates files with size > 1GB even if this option 
is set to some value greater than 1GB.


Is there any workaround (short of changing source code and rebuilding) 
that can allow me to override this 1GB hard limit?  We store a lot of 
large files, and having a limit of 1GB greatly reduces the value of 
this trashcan feature for us.





I don't remember the exact reason behind having a hard-coded value for 
the upper boundary; one reason may be to limit the space used
by the trash directory, and truncate operations on large files result in a performance 
hit. The hard-coded value can be found at
xlators/features/trash/src/trash.h:32 (GF_ALLOWED_MAX_FILE_SIZE). If 
you are interested, change the value (according to
your preference) in the code and send out a patch to 
http://review.gluster.org/ as well.


As first step please file a bug https://bugzilla.redhat.com/ under 
community->glusterfs->trash xlator.


Regards,
Jiffin


Many thanks,
Jackie





The information in this email is confidential and may be legally 
privileged. It is intended solely for the addressee. Access to this 
email by anyone else is unauthorized. If you are not the intended 
recipient, any disclosure, copying, distribution or any action taken 
or omitted to be taken in reliance on it, is prohibited and may be 
unlawful.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Being Gluster NFS off

2016-10-10 Thread Jiffin Tony Thottan

Hi all,

I am trying to list out  glusterd issues with the 3.8 feature "Gluster 
NFS being off by default".


As per the current implementation:

1.) On a freshly installed setup with 3.8/3.9, if you create a volume, Gluster NFS won't
come up by default, and in the vol info we can see "nfs.disable: on".

2.) For existing volumes (created in 3.7 or below), there are two possibilities:

a.) If there are only volumes with the default configuration, Gluster NFS won't come up,
"nfs.disable: on" won't be displayed in the vol info, and in the volume status command the
pid of Gluster NFS will be N/A.

b.) If there is a volume with "nfs.disable off" set explicitly, then after the upgrade Gluster
NFS will come up and export all the existing volumes, and the vol info will look the same
as in a.).
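To illustrate the knob involved, checking and toggling it per volume looks roughly like this (volume name is a placeholder; "volume get" is assumed to be available on recent releases):

# gluster volume get <volname> nfs.disable
# gluster volume set <volname> nfs.disable off
# gluster volume status <volname>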


Currently three bugs [1,2,3] have been opened to address these issues.

As per the 3.8 release note, Gluster NFS should be up for all existing volumes with the default
configuration. We are planning to change this behavior from 3.9 onwards, and Atin has sent out a patch [4].

With his patch, after an upgrade all the existing volumes with the default configuration will have
the nfs.disable value set to on explicitly in the vol info. So Gluster NFS won't export those volumes at all,
and gluster v status will not display the status of the Gluster NFS server.

This patch also solves bugs 2 and 3.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1383006 - gluster nfs 
not coming for existing volumes on 3.8


[2] https://bugzilla.redhat.com/show_bug.cgi?id=1383005 - getting n/a 
entry in volume status command


[3] https://bugzilla.redhat.com/show_bug.cgi?id=1379223 - nfs.disable: 
on" is not showing in Vol info by default
  for the 3.7.x volumes after updating to 
3.9.0


[4] http://review.gluster.org/#/c/15568/

Regards,
Jiffin

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly Community Meeting 31/Aug/2016 - Minutes

2016-09-07 Thread Jiffin Tony Thottan

Hi all,

Thanks for everyone's participation in making it a success.

The minutes and logs for today's meeting are available from the links below:
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-31/weekly_community_meeting_31aug2015.2016-08-31-12.01.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-31/weekly_community_meeting_31aug2015.2016-08-31-12.01.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-31/weekly_community_meeting_31aug2015.2016-08-31-12.01.log.html


kshlm will host next week's meeting. See you all again at same time on 
7th September 2016 at #gluster-meeting.


Regards,

Jiffin

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] CFP Gluster Developer Summit

2016-08-19 Thread Jiffin Tony Thottan



On 17/08/16 19:26, Kaleb S. KEITHLEY wrote:

I propose to present on one or more of the following topics:

* NFS-Ganesha Architecture, Roadmap, and Status


Sorry for the late notice. I am willing to be a co-presenter for the 
above topic.

--
Jiffin


* Architecture of the High Availability Solution for Ganesha and Samba
  - detailed walk through and demo of current implementation
  - difference between the current and storhaug implementations
* High Level Overview of autoconf/automake/libtool configuration
  (I gave a presentation in BLR in 2015, so this is perhaps less
interesting?)
* Packaging Howto — RPMs and .debs
  (maybe a breakout session or a BOF. Would like to (re)enlist volunteers
to help build packages.)




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts

2016-08-11 Thread Jiffin Tony Thottan



On 12/08/16 07:23, Deepak Naidu wrote:

I tried more things to figure out the issue. Like upgrading NFS-ganesha to the 
latest version(as the earlier version had some bug regarding crashing), that 
helped a bit.

But the ls -l or rm -rf on files still hangs, though not as much as 
earlier. So the upgrade of NFS-Ganesha to the stable version did help a bit.

I did strace again, looks like its pausing/hanging at "lstat" I had to [crtl+c] 
to get the exact hang/pausing line.

lgetxattr("/mnt/gluster/rand.26.0", "security.selinux", 0x1990a00, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.25.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.25.0", "security.selinux", 0x1990a20, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.24.0", ^C


NOTE: I am running fio to generate some write operation & hangs are seen when 
issuing ls during write operation.

Next thing, I might try is to use NFS mount rather than Glustefs fuse to see if 
its related to fuse client.

strace of ls -l /mnt/gluster/==

munmap(0x7efebec71000, 4096)= 0
openat(AT_FDCWD, "/mnt/gluster/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
getdents(3, /* 14 entries */, 32768)= 464
lstat("/mnt/gluster/9e50d562-5846-4a60-ad75-e95dcbe0e38a.vhd", 
{st_mode=S_IFREG|0644, st_size=19474461184, ...}) = 0
lgetxattr("/mnt/gluster/9e50d562-5846-4a60-ad75-e95dcbe0e38a.vhd", 
"security.selinux", 0x1990900, 255) = -1 ENODATA (No data available)
lstat("/mnt/gluster/file1", {st_mode=S_IFREG|0644, st_size=19474461184, ...}) = 0
lgetxattr("/mnt/gluster/file1", "security.selinux", 0x1990940, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.0.0", {st_mode=S_IFREG|0644, st_size=2147483648, ...}) 
= 0
lgetxattr("/mnt/gluster/rand.0.0", "security.selinux", 0x1990940, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.31.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.31.0", "security.selinux", 0x1990960, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.30.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.30.0", "security.selinux", 0x1990980, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.29.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.29.0", "security.selinux", 0x19909a0, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.28.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.28.0", "security.selinux", 0x19909c0, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.27.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.27.0", "security.selinux", 0x19909e0, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.26.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.26.0", "security.selinux", 0x1990a00, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.25.0", {st_mode=S_IFREG|0644, st_size=2147483648, 
...}) = 0
lgetxattr("/mnt/gluster/rand.25.0", "security.selinux", 0x1990a20, 255) = -1 
ENODATA (No data available)
lstat("/mnt/gluster/rand.24.0", ^C

strace of end -  ls -l /mnt/gluster/==



I am wondering why it is sending a getxattr call on "security.selinux". 
Also, can you please mention which version
of ganesha you are using and share the details of ganesha.conf? The latest stable release of 
ganesha (2.3.3) is pretty recent.


Check /var/log/ganesha.log and /var/log/ganesha-gfapi.log for more clues
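A quick sketch for pulling clues out of those logs (paths as mentioned above; adjust them if your distribution logs elsewhere):

# tail -n 100 /var/log/ganesha.log
# grep -iE "error|crit|warn" /var/log/ganesha-gfapi.log | tail -n 20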

--
Jiffin



-Original Message-
From: Deepak Naidu
Sent: Wednesday, August 10, 2016 2:25 PM
To: Vijay Bellur
Cc: gluster-users@gluster.org
Subject: RE: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS 
mounts

To be more precious the hang is clearly seen when there is some IO(write) to 
the mount point. Even rm -rf takes time to clear the files.

Below, time command showing the delay. Typically it should take less then a 
second, but glusterfs take more than 5seconds just to list 32x 2GB files.

[root@client-host ~]# time ls -l /mnt/gluster/ total 34575680 -rw-r--r--. 1 
root root 2147483648 Aug 10 12:23 rand.0.0 -rw-r--r--. 1 root root 2147483648 
Aug 10 12:23 rand.1.0 -rw-r--r--. 1 root root 2147454976 Aug 10 12:23 rand.10.0 
-rw-r--r--. 1 root root 2147463168 Aug 10 12:23 rand.11.0 -rw-r--r--. 1 root 
root 2147467264 Aug 10 12:23 rand.12.0 -rw-r--r--. 1 root root 2147475456 Aug 
10 12:23 rand.13.0 -rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.14.0 
-rw-r--r--. 1 root root 2147479552 Aug 10 12:23 rand.15.0 -rw-r--r--. 1 root 
root 2147483648 Aug 10 12:23 rand.16.0 -rw-r--r--. 1 root root 2147479552 Aug 
10 12:23 rand.17.0 -rw-r--r--. 1 root root 2147483648 Aug 10 12:23 rand.18.0 
-rw-r--r--. 1 root root 2147467264 Aug 10 12:23 rand.19.0 -rw-r--r--. 1 root 
root 2147483648 Aug 10 12:23 rand.2.0 -rw-r--r--. 1 root root 2147475456 Aug 10 
12:23 

[Gluster-users] Improvements in Glusterd NFS-Ganesha integration for GlusterFS3.9

2016-08-08 Thread Jiffin Tony Thottan

Hi all,

Currently all the configuration related to NFS-Ganesha is stored 
individually on each node belonging to the ganesha cluster at /etc/ganesha. 
The following files are present in it:

- ganesha.conf - configuration file for the ganesha process
- ganesha-ha.conf - configuration file for the high-availability cluster
- files under the export directory - export configuration files for gluster volumes
- .export_added - tracks the number of volumes exported

Glusterd does not have specific control over these files; in other words, there
is no specific way to synchronize them. So this can result in different
values for the above files on different nodes. For example, consider the following node-down scenario:
 * Two volumes, volA and volB, got exported one after another while one node of
the ganesha cluster (let's call it tmp) was down.
 * Now in the current cluster volA will be exported with Id = 2, and volB with 3.
 * When tmp comes up, there is a chance that it has volA with id 3 and volB with 2.
 * This gives undesired behavior during failover and failback involving the node tmp.

More such scenarios are described in the bug [1]. A proposed solution to 
overcome such situations is to store the above-mentioned configuration files in 
shared storage. They can then be shared by every node in the ganesha cluster, and all such 
mismatches can be avoided. A more detailed description can be found in the feature page [2].

So here, as a prerequisite, the user needs to create a folder nfs-ganesha in 
shared storage and save ganesha.conf and ganesha-ha.conf in it. When the CLI "gluster 
nfs-ganesha enable" is executed, glusterd creates a symlink in /etc/ganesha for 
ganesha.conf, then starts the ganesha process and sets up HA. During disable, it tears down the HA, 
stops the ganesha process and then removes the entry from /etc/ganesha.

For existing users, scripts will be provided for a smooth migration.
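A rough sketch of that workflow on one of the cluster nodes (the shared-storage mount path may differ between /run/gluster and /var/run/gluster depending on the distribution):

# mkdir /var/run/gluster/shared_storage/nfs-ganesha
# cp /etc/ganesha/ganesha.conf /etc/ganesha/ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
# gluster nfs-ganesha enable

and later, to tear it down:

# gluster nfs-ganesha disable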

Please share your thoughts on the same

[1] Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=1355956
[2] Feature page : http://review.gluster.org/#/c/15105/
[3] Patches posted upstream for share storage migration
* http://review.gluster.org/14906
* http://review.gluster.org/14908
* http://review.gluster.org/14909
[4] Patches posted/merged upstream as part of clean up
* http://review.gluster.org/15055
* http://review.gluster.org/14871
* http://review.gluster.org/14812
* http://review.gluster.org/14907

Regards,
Jiffin


Regards,
Jiffin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS-3.8 is available

2016-06-29 Thread Jiffin Tony Thottan

Hi all,

There has been a long delay in announcing the major release
GlusterFS-3.8 on the gluster mailing list. Apologies for that. Since GlusterFS
3.8 got released, version 3.5 has reached EOL. We do our best to
maintain three versions of Gluster; with the 3.8 release these will be 3.8,
3.7 and 3.6. This is a major release that includes a huge number of
changes, including 15-plus features and 1228 bug fixes. Most of the
improvements contribute to better support of Gluster with containers
and to running your storage on the same server as your hypervisors.
Lots of work has been done to integrate with other projects that are
part of the open source storage ecosystem. More information can
be found in the release notes [1] and the blog post [2]. The packages for the
various distributions can be downloaded from d.g.o [3].

Testing feedback and patches for issues observed in 3.8.0 are very
welcome. If you would like to propose bug fix candidates or minor
features for inclusion in 3.8.1, please add them to the tracker at [4].
We intend to do fairly regular releases to stabilize 3.8.x based on
your feedback.

Thanks to everybody who contributed to the 3.8 release and made it
happen on time.

[1] 
https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md


[2] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ 


[3] http://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.0/

[4] 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=1=glusterfs-3.8.1_resolved=1

Cheers,
Jiffin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] setfacl: Operation not supported

2016-06-28 Thread Jiffin Tony Thottan



On 28/06/16 19:12, Evans, Kyle wrote:

Hi Jiffin,

Thanks for confirming that it is a bug and it is fixed in a newer 
version; I appreciate it.




no problem



Kyle



From: Jiffin Tony Thottan
Date: Tuesday, June 28, 2016 at 4:53 AM
To: Kyle Evans, "gluster-users@gluster.org 
<mailto:gluster-users@gluster.org>"

Subject: Re: [Gluster-users] setfacl: Operation not supported

Hi Evans,

Sorry for the delayed reply.

I tried to reproduce this on my setup (version 3.7.9) and it was working 
fine for me.
But it was fairly reproducible with the version which you had mentioned. I 
don't know which patch fixed that issue, but I still suggest updating your 
gluster so that both issues mentioned below will be solved.

On 24/06/16 23:43, Evans, Kyle wrote:


Hi Jiffin,


Thanks for the help.  You understand correctly, I am talking about 
the client.  The problem is intermittent, and those lines DO appear 
in the log when it works but DO NOT appear in the log when it is 
broken.  Also, here is another log I am getting that may be relevant:



[2016-06-13 17:39:33.128941] I [dict.c:473:dict_get] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/system/posix-acl.so(posix_acl_setxattr_cbk+0x26) 
[0x7effdbdfb3a6] 
-->/usr/lib64/glusterfs/3.7.5/xlator/system/posix-acl.so(handling_other_acl_related_xattr+0x22) 
[0x7effdbdfb2a2] -->/lib64/libglusterfs.so.0(dict_get+0xac) 
[0x7effef3e80cc] ) 0-dict: !this || key=system.posix_acl_access 
[Invalid argument]





Ignore this; it is a spurious message which was fixed by this patch: 
http://review.gluster.org/#/c/13452/


--

Regards



Thanks,


Kyle


From: Jiffin Tony Thottan
Date: Friday, June 24, 2016 at 2:17 AM
To: Kyle Evans, "gluster-users@gluster.org"
Subject: Re: [Gluster-users] setfacl: Operation not supported



On 24/06/16 02:08, Evans, Kyle wrote:
I'm using gluster 3.7.5-19 on RHEL 7.2  Gluster periodically stops 
allowing ACLs.  I have it configured in fstab like this:


Server.example.com:/dir /mnt glusterfs defaults,_netdev,acl 0 0


Also, the bricks are XFS.

It usually works fine, but sometimes after a reboot, one of the 
nodes won't allow acl operations like setfacl and getfacl.  They 
give the error "Operation not supported".



Did you mean a client reboot?

Correct me if I am wrong:

You have mounted the glusterfs volume with acl enabled and configured 
in fstab.

When you reboot the client, acl operations return the error 
"Operation not supported".

Can you please follow these steps if possible:
after mounting, check the client log (in your example it should be 
/var/log/glusterfs/mnt.log)

and confirm whether the following block is present in the vol graph:
"volume posix-acl-autoload
type system/posix-acl
subvolumes dir
end-volume"

Clear the log file before the reboot and just check whether the same block is 
present after the reboot.
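For example, something along these lines, reusing the fstab entry from the original mail (the log path assumes the default /var/log/glusterfs location):

# mount -t glusterfs -o acl server.example.com:/dir /mnt
# grep -A 3 "posix-acl-autoload" /var/log/glusterfs/mnt.log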


--
Jiffin


Sometimes it's not even after a reboot; it just stops supporting it.

If I unmount and remount, it starts working again.  Does anybody 
have any insight?


Thanks,

Kyle


___
Gluster-users mailing list
Gluster-users@gluster.orghttp://www.gluster.org/mailman/listinfo/gluster-users






___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] setfacl: Operation not supported

2016-06-28 Thread Jiffin Tony Thottan

Hi Evans,

Sorry for the delayed reply.

I tried to reproduce on my setup(version 3.7.9) and it was working fine 
for me.
But it was fairly reproducible with version which you had mentioned. I 
don't know
which patch got fixed that issue, still I suggest to update your gluster 
so that both

issues mentioned below will be solved

On 24/06/16 23:43, Evans, Kyle wrote:


Hi Jiffin,


Thanks for the help.  You understand correctly, I am talking about the 
client.  The problem is intermittent, and those lines DO appear in the 
log when it works but DO NOT appear in the log when it is broken. 
 Also, here is another log I am getting that may be relevant:



[2016-06-13 17:39:33.128941] I [dict.c:473:dict_get] 
(-->/usr/lib64/glusterfs/3.7.5/xlator/system/posix-acl.so(posix_acl_setxattr_cbk+0x26) 
[0x7effdbdfb3a6] 
-->/usr/lib64/glusterfs/3.7.5/xlator/system/posix-acl.so(handling_other_acl_related_xattr+0x22) 
[0x7effdbdfb2a2] -->/lib64/libglusterfs.so.0(dict_get+0xac) 
[0x7effef3e80cc] ) 0-dict: !this || key=system.posix_acl_access 
[Invalid argument]





Ignore this , this is spurious message which was fixed by this patch 
http://review.gluster.org/#/c/13452/


--

Regards



Thanks,


Kyle


From: Jiffin Tony Thottan
Date: Friday, June 24, 2016 at 2:17 AM
To: Kyle Evans, "gluster-users@gluster.org 
<mailto:gluster-users@gluster.org>"

Subject: Re: [Gluster-users] setfacl: Operation not supported



On 24/06/16 02:08, Evans, Kyle wrote:
I'm using gluster 3.7.5-19 on RHEL 7.2  Gluster periodically stops 
allowing ACLs.  I have it configured in fstab like this:


Server.example.com:/dir /mnt glusterfs defaults,_netdev,acl 0 0


Also, the bricks are XFS.

It usually works fine, but sometimes after a reboot, one of the nodes 
won't allow acl operations like setfacl and getfacl.  They give the 
error "Operation not supported".



Did u meant client reboot ?

Correct me if I am wrong,

You have mounted the glusterfs volume with acl enabled and configured 
in fstab


When you reboot client, acl operations are returning error as 
"Operation not supported".


Can please follow the steps if possible
after mounting can check the client log (in your example it should be 
/var/log/glusterfs/mnt.log)

and confirm whether following block is present in the vol graph
"volume posix-acl-autoload
type system/posix-acl
subvolumes dir
end-volume"

Clear the log file before reboot and just check whether same block is 
present after reboot


--
Jiffin


Sometimes it's not even after a reboot; it just stops supporting it.

If I unmount and remount, it starts working again.  Does anybody have 
any insight?


Thanks,

Kyle


___
Gluster-users mailing list
Gluster-users@gluster.orghttp://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] setfacl: Operation not supported

2016-06-24 Thread Jiffin Tony Thottan



On 24/06/16 02:08, Evans, Kyle wrote:
I'm using gluster 3.7.5-19 on RHEL 7.2  Gluster periodically stops 
allowing ACLs.  I have it configured in fstab like this:


Server.example.com:/dir /mnt glusterfs defaults,_netdev,acl 0 0


Also, the bricks are XFS.

It usually works fine, but sometimes after a reboot, one of the nodes 
won't allow acl operations like setfacl and getfacl.  They give the 
error "Operation not supported".



Did u meant client reboot ?

Correct me if I am wrong,

You have mounted the glusterfs volume with acl enabled and configured in 
fstab


When you reboot client, acl operations are returning error as "Operation 
not supported".


Can please follow the steps if possible
after mounting can check the client log (in your example it should be 
/var/log/glusterfs/mnt.log)

and confirm whether following block is present in the vol graph
"volume posix-acl-autoload
type system/posix-acl
subvolumes dir
end-volume"

Clear the log file before reboot and just check whether same block is 
present after reboot


--
Jiffin


Sometimes it's not even after a reboot; it just stops supporting it.

If I unmount and remount, it starts working again.  Does anybody have 
any insight?


Thanks,

Kyle


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Query!

2016-06-17 Thread Jiffin Tony Thottan



On 17/06/16 18:01, ABHISHEK PALIWAL wrote:

Hi,

I am using Gluster 3.7.6 and performing plug in plug out of the board 
but getting following brick logs after plug in board again:


[2016-06-17 07:14:36.122421] W [trash.c:1858:trash_mkdir] 
0-c_glusterfs-trash: mkdir issued on /.trashcan/, which is not permitted
[2016-06-17 07:14:36.122487] E [MSGID: 115056] 
[server-rpc-fops.c:509:server_mkdir_cbk] 0-c_glusterfs-server: 9705: 
MKDIR /.trashcan (----0001/.trashcan) ==> 
(Operation not permitted) [Operation not permitted]
[2016-06-17 07:14:36.139773] W [trash.c:1858:trash_mkdir] 
0-c_glusterfs-trash: mkdir issued on /.trashcan/, which is not permitted
[2016-06-17 07:14:36.139861] E [MSGID: 115056] 
[server-rpc-fops.c:509:server_mkdir_cbk] 0-c_glusterfs-server: 9722: 
MKDIR /.trashcan (----0001/.trashcan) ==> 
(Operation not permitted) [Operation not permitted]



Could anyone tell me the reason behind this failure, i.e. when and why 
these logs occur?


This error can be seen only if you try to create .trashcan from the mount.
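For example, a client-side operation like the following (the mount point here is hypothetical) is what produces log entries of that kind:

# mkdir /mnt/c_glusterfs/.trashcan
mkdir: cannot create directory '/mnt/c_glusterfs/.trashcan': Operation not permitted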
--
Jiffin



I have already pushed same query previously but did not get any response.

--




Regards
Abhishek Paliwal


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS ganesha client not showing files after crash

2016-06-14 Thread Jiffin Tony Thottan



On 06/06/16 08:20, Alan Hartless wrote:

Hi Jiffin,

Thanks! I have 3.7.11-ubuntu1~trusty1 installed and using NFSv4 mount 
protocols.


Doing a forced lookup lists the root directories but shows 0 files in 
each.

Hi,

Sorry for the delayed reply.

You might need to do the explicit lookup on the files as well. I tried the 
above-mentioned scenario in my setup.
For me ganesha and fuse mounts behave in the same manner: lots of 
files/directories were missing, and `ls` on
both mounts gives the same output.

Another thing to note is the effect of client-side caching for NFS. 
Can you disable the client cache by passing
the "noac" option during mounting and try the same?

mount -t nfs -o noac ...
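For instance, with an NFSv4 mount against the export described later in this thread (server name and local mount point are placeholders):

# mount -t nfs -o vers=4,noac <ganesha-server>:/letsencrypt /mnt/letsencrypt
# ls -l /mnt/letsencrypt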

--

Regards
Jiffin


Thanks!
Alan

On Fri, Jun 3, 2016 at 3:09 AM Jiffin Tony Thottan 
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:


Hi Alan,

I will try to reproduce the issue with my setup and get back to you.

Can you please mention the mount protocol and the gluster package version (3.7-?)

In case you can't find /var/log/ganesha.log (it is the default location
for Fedora and CentOS),
just check the system log messages and grep for ganesha.

Also, you can try to perform a forced lookup on the directory using "ls
<mount-point>/* -ltr".

--
Jiffin


On 02/06/16 00:16, Alan Hartless wrote:

Yes, I had a brick that I restored and so it had existing files.
After the crash, it wouldn't let me re-add it because it said the
files were already part of a gluster. So I followed

https://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
 to
reset it.

Also correct that I can access all files through fuse but only
the root directory via ganesha NFS4 or any directories/files that
have since been created.

Using a forced lookup on a specific file, I found that I can
reach it and even edit it. But a ls or dir will not list it or
any of it's parent directories. Even after editing the file, it
does not list with ls.

I'm using gluster 3.7 and ganesha 2.3 from Gluster's Ubuntu
repositories.

I don't have a /var/log/ganesha.log but I do
/var/log/ganesha-gfapi.log. I tailed it while restarting ganesha
and got this for the specific volume:

[2016-06-01 18:44:44.876385] I [MSGID: 114020]
[client.c:2106:notify] 0-letsencrypt-client-0: parent translators
are ready, attempting connect on transport
[2016-06-01 18:44:44.876903] I [MSGID: 114020]
[client.c:2106:notify] 0-letsencrypt-client-1: parent translators
are ready, attempting connect on transport
[2016-06-01 18:44:44.877193] I
[rpc-clnt.c:1868:rpc_clnt_reconfig] 0-letsencrypt-client-0:
changing port to 49154 (from 0)
[2016-06-01 18:44:44.877837] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-letsencrypt-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)
[2016-06-01 18:44:44.878234] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-letsencrypt-client-0: Connected to letsencrypt-client-0,
attached to remote volume '/gluster_volume/letsencrypt'.
[2016-06-01 18:44:44.878253] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-letsencrypt-client-0: Server and Client lk-version numbers are
not same, reopening the fds
[2016-06-01 18:44:44.878338] I [MSGID: 108005]
[afr-common.c:4007:afr_notify] 0-letsencrypt-replicate-0:
Subvolume 'letsencrypt-client-0' came back up; going online.
[2016-06-01 18:44:44.878390] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-letsencrypt-client-0: Server lk version = 1
[2016-06-01 18:44:44.878505] I
[rpc-clnt.c:1868:rpc_clnt_reconfig] 0-letsencrypt-client-1:
changing port to 49154 (from 0)
[2016-06-01 18:44:44.879568] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-letsencrypt-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)
[2016-06-01 18:44:44.880155] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-letsencrypt-client-1: Connected to letsencrypt-client-1,
attached to remote volume '/gluster_volume/letsencrypt'.
[2016-06-01 18:44:44.880175] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-letsencrypt-client-1: Server and Client lk-version numbers are
not same, reopening the fds
[2016-06-01 18:44:44.896801] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-letsencrypt-client-1: Server lk version = 1
[2016-06-01 18:44:44.898290] I [MSGID: 108031]
[afr-common.c:1900:afr_local_discovery_cbk]
0-letsencrypt-replicate-0: selecting local read_child
letsencrypt-client-0
[2016-06-01 18:44:44.898798] I [MSGID: 104041]
[glfs-resolve.c:869:__glfs_active_subvol] 0-letsencrypt: switched
to graph 676c7573-7465-7266-732d-6e6f64652d63 (0)
[2016-06-01 1

[Gluster-users] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC ~(in 45 minutes)

2016-06-08 Thread Jiffin Tony Thottan

Meeting summary
---
* Roll call  (jiffin, 12:02:49)

* kkeithley Saravanakmr will set up Coverity, clang, etc on public
  facing machine and run it regularly  (jiffin, 12:05:07)
  * ACTION: kkeithley Saravanakmr will set up Coverity, clang, etc on
public facing machine and run it regularly  (jiffin, 12:07:03)
  * ACTION: ndevos need to decide on how to provide/use debug builds
(jiffin, 12:07:35)
  * ACTION: ndevos to propose some test-cases for minimal libgfapi test
(jiffin, 12:07:44)

* Manikandan and gem to followup with kshlm/misc to get access to
  gluster-infra  (jiffin, 12:07:55)
  * ACTION: Manikandan and gem to followup with kshlm/misc/nigelb to get
access to gluster-infra  (jiffin, 12:09:50)

* ? decide how component maintainers/developers use the BZ queries or
  RSS-feeds for the Triaged bugs  (jiffin, 12:10:59)
  * ACTION: Saravanakmr will host bug triage meeting on June 14th 2016
(jiffin, 12:17:51)
  * ACTION: Manikandan will host bug triage meeting on June 21st 2016
(jiffin, 12:17:59)
  * ACTION: ndevos will host bug triage meeting on June 28th 2016
(jiffin, 12:18:08)

* Group Triage  (jiffin, 12:18:23)

* Open Floor  (jiffin, 12:39:07)

Meeting ended at 12:41:56 UTC.




Action Items

* kkeithley Saravanakmr will set up Coverity, clang, etc on public
  facing machine and run it regularly
* ndevos need to decide on how to provide/use debug builds
* ndevos to propose some test-cases for minimal libgfapi test
* Manikandan and gem to followup with kshlm/misc/nigelb to get access to
  gluster-infra
* Saravanakmr will host bug triage meeting on June 14th 2016
* Manikandan will host bug triage meeting on June 21st 2016
* ndevos will host bug triage meeting on June 28th 2016




Action Items, by person
---
* gem
  * Manikandan and gem to followup with kshlm/misc/nigelb to get access
to gluster-infra
* kkeithley
  * kkeithley Saravanakmr will set up Coverity, clang, etc on public
facing machine and run it regularly
* Saravanakmr
  * kkeithley Saravanakmr will set up Coverity, clang, etc on public
facing machine and run it regularly
  * Saravanakmr will host bug triage meeting on June 14th 2016
* **UNASSIGNED**
  * ndevos need to decide on how to provide/use debug builds
  * ndevos to propose some test-cases for minimal libgfapi test
  * Manikandan will host bug triage meeting on June 21st 2016
  * ndevos will host bug triage meeting on June 28th 2016




People Present (lines said)
---
* jiffin (50)
* kkeithley (9)
* hgowtham (6)
* rafi (4)
* zodbot (3)
* Saravanakmr (3)
* gem (3)
* skoduri (1)


On 07/06/16 16:50, Jiffin Tony Thottan wrote:

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
   (in your terminal, run: date -d "12:00 UTC")
- agenda:https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 45 minutes)

2016-06-07 Thread Jiffin Tony Thottan

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS ganesha client not showing files after crash

2016-06-03 Thread Jiffin Tony Thottan

Hi Alan,

I try to reproduce issue with my set up and get back to u.

can u please mention mount protocol and gluster package version(3.7-?)

Incase if u can't find /var/log/ganesha.log(it is default location for 
fedora and centos),

Just the system log messages and grep for ganesha.

Also can try to perform force lookup on directory using "ls /* 
-ltr"


--
Jiffin

On 02/06/16 00:16, Alan Hartless wrote:
Yes, I had a brick that I restored and so it had existing files. After 
the crash, it wouldn't let me re-add it because it said the files were 
already part of a gluster. So I followed 
https://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ to 
reset it.


Also correct that I can access all files through fuse but only the 
root directory via ganesha NFS4 or any directories/files that have 
since been created.


Using a forced lookup on a specific file, I found that I can reach it 
and even edit it. But a ls or dir will not list it or any of it's 
parent directories. Even after editing the file, it does not list with 
ls.


I'm using gluster 3.7 and ganesha 2.3 from Gluster's Ubuntu repositories.

I don't have a /var/log/ganesha.log but I do 
/var/log/ganesha-gfapi.log. I tailed it while restarting ganesha and 
got this for the specific volume:


[2016-06-01 18:44:44.876385] I [MSGID: 114020] [client.c:2106:notify] 
0-letsencrypt-client-0: parent translators are ready, attempting 
connect on transport
[2016-06-01 18:44:44.876903] I [MSGID: 114020] [client.c:2106:notify] 
0-letsencrypt-client-1: parent translators are ready, attempting 
connect on transport
[2016-06-01 18:44:44.877193] I [rpc-clnt.c:1868:rpc_clnt_reconfig] 
0-letsencrypt-client-0: changing port to 49154 (from 0)
[2016-06-01 18:44:44.877837] I [MSGID: 114057] 
[client-handshake.c:1437:select_server_supported_programs] 
0-letsencrypt-client-0: Using Program GlusterFS 3.3, Num (1298437), 
Version (330)
[2016-06-01 18:44:44.878234] I [MSGID: 114046] 
[client-handshake.c:1213:client_setvolume_cbk] 0-letsencrypt-client-0: 
Connected to letsencrypt-client-0, attached to remote volume 
'/gluster_volume/letsencrypt'.
[2016-06-01 18:44:44.878253] I [MSGID: 114047] 
[client-handshake.c:1224:client_setvolume_cbk] 0-letsencrypt-client-0: 
Server and Client lk-version numbers are not same, reopening the fds
[2016-06-01 18:44:44.878338] I [MSGID: 108005] 
[afr-common.c:4007:afr_notify] 0-letsencrypt-replicate-0: Subvolume 
'letsencrypt-client-0' came back up; going online.
[2016-06-01 18:44:44.878390] I [MSGID: 114035] 
[client-handshake.c:193:client_set_lk_version_cbk] 
0-letsencrypt-client-0: Server lk version = 1
[2016-06-01 18:44:44.878505] I [rpc-clnt.c:1868:rpc_clnt_reconfig] 
0-letsencrypt-client-1: changing port to 49154 (from 0)
[2016-06-01 18:44:44.879568] I [MSGID: 114057] 
[client-handshake.c:1437:select_server_supported_programs] 
0-letsencrypt-client-1: Using Program GlusterFS 3.3, Num (1298437), 
Version (330)
[2016-06-01 18:44:44.880155] I [MSGID: 114046] 
[client-handshake.c:1213:client_setvolume_cbk] 0-letsencrypt-client-1: 
Connected to letsencrypt-client-1, attached to remote volume 
'/gluster_volume/letsencrypt'.
[2016-06-01 18:44:44.880175] I [MSGID: 114047] 
[client-handshake.c:1224:client_setvolume_cbk] 0-letsencrypt-client-1: 
Server and Client lk-version numbers are not same, reopening the fds
[2016-06-01 18:44:44.896801] I [MSGID: 114035] 
[client-handshake.c:193:client_set_lk_version_cbk] 
0-letsencrypt-client-1: Server lk version = 1
[2016-06-01 18:44:44.898290] I [MSGID: 108031] 
[afr-common.c:1900:afr_local_discovery_cbk] 0-letsencrypt-replicate-0: 
selecting local read_child letsencrypt-client-0
[2016-06-01 18:44:44.898798] I [MSGID: 104041] 
[glfs-resolve.c:869:__glfs_active_subvol] 0-letsencrypt: switched to 
graph 676c7573-7465-7266-732d-6e6f64652d63 (0)
[2016-06-01 18:44:45.913545] I [MSGID: 104045] 
[glfs-master.c:95:notify] 0-gfapi: New graph 
676c7573-7465-7266-732d-6e6f64652d63 (0) coming up


I also tailed it while accessing files through a mount point but 
nothing was logged.


This is the ganesha config for the specific volume I'm testing with. I 
have others but they are the same except for export ID and the paths.


EXPORT
{
Export_Id = 3;
Path = "/letsencrypt";
Pseudo = "/letsencrypt";
FSAL {
name = GLUSTER;
hostname = "localhost";
volume = "letsencrypt";
}
Access_type = RW;
Squash = No_root_squash;
Disable_ACL = TRUE;
}

Many thanks!


On Sun, May 29, 2016 at 12:46 PM Jiffin Tony Thottan 
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:




On 28/05/16 08:07, Alan Hartless wrote:

I had everything working well when I had a complete melt down :-)
Well got all that sorted and everything back up and running or so
I thought. Now NFS ganesha is not showing any existing files but
the root level of the brick. It's empty for all subd

[Gluster-users] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC ~(in 1.5 hours)

2016-06-01 Thread Jiffin Tony Thottan
facing machine and run it regularly
2. Manikandan
1. Manikandan and gem to followup with kshlm/misc to get access to
   gluster-infra
2. Manikandan will host bug triage meeting on June 21st 2016
3. ndevos
1. ndevos need to decide on how to provide/use debug builds
2. ndevos to propose some test-cases for minimal libgfapi test
3. ndevos will host bug triage meeting on June 28th 2016
4. Saravanakmr
1. kkeithley Saravanakmr will set up Coverity, clang, etc on public
   facing machine and run it regularly
2. Saravanakmr will host bug triage meeting on June 14th 2016
5. *UNASSIGNED*
1. Jiffin will host bug triage meeting on June 7th 2016
2. ? decide how component maintainers/developers use the BZ queries
   or RSS-feeds for the Triaged bugs



 People present (lines said)

1. jiffin (84)
2. ndevos (35)
3. kkeithley (19)
4. Manikandan (11)
5. Saravanakmr (8)
6. nigelb (5)
7. zodbot (3)
8. hgowtham (3)
9. msvbhat (2)
10. rafi (2)
11. partner (1)


Minutes of meeting
 zodbot: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-31/gluster_bug_triage.2016-05-31-12.00.html
 zodbot: Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-31/gluster_bug_triage.2016-05-31-12.00.txt
 zodbot: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-31/gluster_bug_triage.2016-05-31-12.00.log.html


Regards,
Jiffin

On 31/05/16 15:55, Jiffin Tony Thottan wrote:

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 1.5 hours)

2016-05-31 Thread Jiffin Tony Thottan

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS ganesha client not showing files after crash

2016-05-29 Thread Jiffin Tony Thottan



On 28/05/16 08:07, Alan Hartless wrote:
I had everything working well when I had a complete melt down :-) Well 
got all that sorted and everything back up and running or so I 
thought. Now NFS ganesha is not showing any existing files but the 
root level of the brick. It's empty for all subdirectories. New files 
or directories added show up as well. Everything shows up when using 
the fuse client.




If I understand your issue correctly:
* You have created a volume using bricks which contain pre-existing files
and directories.
* When you try to access the files via ganesha, they do not show up,
but with fuse they are visible.

Can you please try to perform a forced lookup on the directories/files
(ls <path to directory/file>) from the ganesha mount?
Also check the ganesha logs (/var/log/ganesha.log and
/var/log/ganesha-gfapi.log) for clues.
IMO a similar issue existed for an older version of ganesha (v2.1 I
guess). If possible, can you also share the ganesha configuration for
that volume?
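
A quick sketch of that forced lookup plus log check, assuming the ganesha
export is mounted at /mnt/letsencrypt on the client (adjust the path to your
mount point):

# stat every entry once so ganesha issues fresh lookups to gluster
find /mnt/letsencrypt -exec stat {} \; > /dev/null

# watch the ganesha logs for errors while the lookups run
tail -f /var/log/ganesha.log /var/log/ganesha-gfapi.log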

I've tried self healing, editing files, etc but the issue persists. If 
I move the folders and back, they show up. But I have a live setup and 
can't afford the time to move GBs of data to a new location and back. 
Is there anything I can do to trigger something for the files to show 
up in NFS again without having to move directories?


Thanks,
Alan


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] dict_get errors in brick log

2016-05-02 Thread Jiffin Tony Thottan



On 02/05/16 16:52, Serkan Çoban wrote:

Hi,

I am getting dict_get errors in brick log. I found following and it
get merged to master:
https://bugzilla.redhat.com/show_bug.cgi?id=1319581

How can I find if it is merged to 3.7?


You can track the change for 3.7 using http://review.gluster.org/#/c/14144/
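
If you have a local clone of the glusterfs source, you can also check whether
a given commit has reached a release tag (the commit hash below is just a
placeholder for the backported change):

# list the 3.7.x release tags that already contain the fix
git tag --contains <commit-hash> | grep '^v3.7'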

--
Jiffin


I am using 3.7.11 and affected by the problem.

Thanks,
Serkan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] MInutes of Gluster Community Bug Triage meeting at 12:00 UTC on 26th April 2016

2016-04-27 Thread Jiffin Tony Thottan

Hi all,

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.log.html



Meeting summary
---
* agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (jiffin,
  12:11:39)
* Roll call  (jiffin, 12:12:07)

* msvbhat  will look into lalatenduM's automated Coverity setup in
  Jenkins   which need assistance from an admin with more permissions
  (jiffin, 12:18:13)
  * ACTION: msvbhat  will look into lalatenduM's automated Coverity
setup in   Jenkins   which need assistance from an admin with more
permissions  (jiffin, 12:21:04)

* ndevos need to decide on how to provide/use debug builds (jiffin,
  12:21:18)
  * ACTION: Manikandan to followup with kashlm to get access to
gluster-infra  (jiffin, 12:24:18)
  * ACTION: Manikandan and Nandaja will update on bug automation
(jiffin, 12:24:30)

* msvbhat  provide a simple step/walk-through on how to provide
  testcases for the nightly rpm tests  (jiffin, 12:25:09)
  * ACTION: msvbhat  provide a simple step/walk-through on how to
provide testcases for the nightly rpm tests  (jiffin, 12:27:00)

* rafi needs to followup on #bug 1323895  (jiffin, 12:27:15)

* ndevos need to decide on how to provide/use debug builds (jiffin,
  12:30:44)
  * ACTION: ndevos need to decide on how to provide/use debug builds
(jiffin, 12:32:09)
  * ACTION: ndevos to propose some test-cases for minimal libgfapi test
(jiffin, 12:32:21)
  * ACTION: ndevos need to discuss about writing a script to update bug
assignee from gerrit patch  (jiffin, 12:32:31)

* Group triage  (jiffin, 12:33:07)

* openfloor  (jiffin, 12:52:52)

* gluster bug triage meeting schedule May 2016  (jiffin, 12:55:33)
  * ACTION: hgowtham will host meeting on 03/05/2016  (jiffin, 12:56:18)
  * ACTION: Saravanakmr will host meeting on 24/05/2016  (jiffin,
12:56:49)
  * ACTION: kkeithley_ will host meeting on 10/05/2016  (jiffin,
13:00:13)
  * ACTION: jiffin will host meeting on 17/05/2016  (jiffin, 13:00:28)

Meeting ended at 13:01:34 UTC.




Action Items

* msvbhat  will look into lalatenduM's automated Coverity setup in
  Jenkins   which need assistance from an admin with more permissions
* Manikandan to followup with kashlm to get access to gluster-infra
* Manikandan and Nandaja will update on bug automation
* msvbhat  provide a simple step/walk-through on how to provide
  testcases for the nightly rpm tests
* ndevos need to decide on how to provide/use debug builds
* ndevos to propose some test-cases for minimal libgfapi test
* ndevos need to discuss about writing a script to update bug assignee
  from gerrit patch
* hgowtham will host meeting on 03/05/2016
* Saravanakmr will host meeting on 24/05/2016
* kkeithley_ will host meeting on 10/05/2016
* jiffin will host meeting on 17/05/2016

People Present (lines said)
---
* jiffin (87)
* rafi1 (21)
* ndevos (10)
* hgowtham (8)
* kkeithley_ (6)
* Saravanakmr (6)
* Manikandan (5)
* zodbot (3)
* post-factum (2)
* lalatenduM (1)
* glusterbot (1)


Cheers,

Jiffin

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-26 Thread Jiffin Tony Thottan



On 26/04/16 15:28, ABHISHEK PALIWAL wrote:

Hi Jiffin,

Any clue you have on this I am seeing some logs related to ACL in 
command and some .so file in glusterfs/tmp-a2.log file but no failure 
is there.



Hi Abhishek,

Can you attach the log file (/var/log/glusterfs/tmp-a2.log)?
Also, you can try out ganesha, which can export gluster volumes as well as
other exports using a single server.
Right now ganesha only supports NFSv4 ACLs (not POSIX ACLs), and ganesha
is better supported with gluster volumes than knfs is.
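
For illustration, managing ACLs over a ganesha NFSv4 mount looks roughly like
this (assuming the volume c_glusterfs is exported by ganesha under
/c_glusterfs; the server name, mount point and user alice are placeholders,
and nfs4_setfacl/nfs4_getfacl come from the nfs4-acl-tools package):

# mount the ganesha export over NFSv4
mount -t nfs -o vers=4 <ganesha-server>:/c_glusterfs /mnt/e

# inspect and modify NFSv4 ACLs instead of using setfacl
nfs4_getfacl /mnt/e/usr
nfs4_setfacl -a A::alice@example.com:RW /mnt/e/usr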


--
Jiffin


Regards,
Abhishek

On Tue, Apr 26, 2016 at 1:17 PM, ABHISHEK PALIWAL 
<abhishpali...@gmail.com <mailto:abhishpali...@gmail.com>> wrote:




On Tue, Apr 26, 2016 at 12:54 PM, Jiffin Tony Thottan
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:



On 26/04/16 12:22, ABHISHEK PALIWAL wrote:



On Tue, Apr 26, 2016 at 12:18 PM, Jiffin Tony Thottan
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:

On 26/04/16 12:11, ABHISHEK PALIWAL wrote:

Hi,
I want to enable ACL support on gluster volume using the
kernel NFS ACL support so I have followed below steps
after creation of gluster volume:


Is there any specific reason to knfs instead of in build
gluster nfs server ?

Yes, because we have other NFS mounted volume as well in system.


Did u mean to say that knfs is running on each gluster nodes
(i mean bricks) ?

Yes.






1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
2.update the /etc/exports file
/tmp/a2
10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
3.exportfs –ra
4.gluster volume set c_glusterfs nfs.acl off
5.gluster volume set c_glusterfs nfs.disable on
we have disabled above two options because we are using
Kernel NFS ACL support and that is already enabled.
on other board mounting it using
mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
setfacl -m u:application:rw /tmp/e/usr
setfacl: /tmp/e/usr: Operation not supported


Can you please check the clients for the hints ?

What I need to check here?


can u check /var/log/glusterfs/tmp-a2.log?


There is no failure on the server side in the /var/log/glusterfs/tmp-a2.log
file, but the board where I am getting this failure is not running
gluster, so it is not possible to check the
/var/log/glusterfs/tmp-a2.log file there.






and application is the system user like below
application:x:102:0::/home/application:/bin/sh

I don't why I am getting this failure when I enabled all
the acl support in each steps.

Please let me know how can I enable this.

Regards,
Abhishek



--
Jiffin



___
Gluster-devel mailing list
gluster-de...@gluster.org <mailto:gluster-de...@gluster.org>
http://www.gluster.org/mailman/listinfo/gluster-devel





-- 





Regards
Abhishek Paliwal





-- 





Regards
Abhishek Paliwal




--




Regards
Abhishek Paliwal


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-26 Thread Jiffin Tony Thottan



On 26/04/16 12:22, ABHISHEK PALIWAL wrote:



On Tue, Apr 26, 2016 at 12:18 PM, Jiffin Tony Thottan 
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:


On 26/04/16 12:11, ABHISHEK PALIWAL wrote:

Hi,
I want to enable ACL support on gluster volume using the kernel
NFS ACL support so I have followed below steps after creation of
gluster volume:


Is there any specific reason to knfs instead of in build gluster
nfs server ?

Yes, because we have other NFS mounted volume as well in system.


Did you mean to say that knfs is running on each gluster node (I mean
on the bricks)?






1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
2.update the /etc/exports file
/tmp/a2 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
3.exportfs –ra
4.gluster volume set c_glusterfs nfs.acl off
5.gluster volume set c_glusterfs nfs.disable on
we have disabled above two options because we are using Kernel
NFS ACL support and that is already enabled.
on other board mounting it using
mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
setfacl -m u:application:rw /tmp/e/usr
setfacl: /tmp/e/usr: Operation not supported


Can you please check the clients for the hints ?

What I need to check here?


Can you check /var/log/glusterfs/tmp-a2.log?





and application is the system user like below
application:x:102:0::/home/application:/bin/sh

I don't why I am getting this failure when I enabled all the acl
support in each steps.

Please let me know how can I enable this.

Regards,
Abhishek



--
Jiffin



___
Gluster-devel mailing list
gluster-de...@gluster.org <mailto:gluster-de...@gluster.org>
http://www.gluster.org/mailman/listinfo/gluster-devel





--




Regards
Abhishek Paliwal


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-26 Thread Jiffin Tony Thottan



On 26/04/16 12:18, Jiffin Tony Thottan wrote:

On 26/04/16 12:11, ABHISHEK PALIWAL wrote:

Hi,
I want to enable ACL support on gluster volume using the kernel NFS 
ACL support so I have followed below steps after creation of gluster 
volume:


Is there any specific reason to knfs instead of in build gluster nfs 
server ?



1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
2.update the /etc/exports file
/tmp/a2 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
3.exportfs –ra
4.gluster volume set c_glusterfs nfs.acl off
5.gluster volume set c_glusterfs nfs.disable on
we have disabled above two options because we are using Kernel NFS 
ACL support and that is already enabled.

on other board mounting it using
mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
setfacl -m u:application:rw /tmp/e/usr
setfacl: /tmp/e/usr: Operation not supported


Can you please check the clients for the hints ?


What I intended to say is: can you please check the client logs, and if
possible also take a packet trace from the server machine.
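
A minimal sketch of capturing such a trace on the server while the setfacl
failure is reproduced (the interface eth0 and <client-ip> are placeholders):

# capture NFS traffic between the server and the failing client
tcpdump -i eth0 -s 0 -w /tmp/nfs-acl.pcap host <client-ip> and port 2049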





and application is the system user like below
application:x:102:0::/home/application:/bin/sh

I don't why I am getting this failure when I enabled all the acl 
support in each steps.


Please let me know how can I enable this.

Regards,
Abhishek



--
Jiffin



___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-26 Thread Jiffin Tony Thottan

On 26/04/16 12:11, ABHISHEK PALIWAL wrote:

Hi,
I want to enable ACL support on gluster volume using the kernel NFS 
ACL support so I have followed below steps after creation of gluster 
volume:


Is there any specific reason to use knfs instead of the in-built gluster NFS
server?



1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
2.update the /etc/exports file
/tmp/a2 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
3.exportfs –ra
4.gluster volume set c_glusterfs nfs.acl off
5.gluster volume set c_glusterfs nfs.disable on
we have disabled above two options because we are using Kernel NFS ACL 
support and that is already enabled.

on other board mounting it using
mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
setfacl -m u:application:rw /tmp/e/usr
setfacl: /tmp/e/usr: Operation not supported


Can you please check the clients for hints?


and application is the system user like below
application:x:102:0::/home/application:/bin/sh

I don't why I am getting this failure when I enabled all the acl 
support in each steps.


Please let me know how can I enable this.

Regards,
Abhishek



--
Jiffin



___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] nfs-ganesha/pnfs read/write path on EC volume

2016-03-19 Thread Jiffin Tony Thottan



On 17/03/16 23:17, Serkan Çoban wrote:

Hi Jiffin,
Will these patches land in 3.7.9?


Hi Serkan,

I moved all the changes to ganesha [1] and they got merged upstream
(ganesha V2.4-dev-9).

I missed backporting them to 2.3, so which ganesha build are you using?

[1] https://review.gerrithub.io/#/c/263180/

Jiffin

Serkan

On Tue, Feb 16, 2016 at 2:47 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:

Hi Serkan,

I had moved out previous gfapi-side to ganesha and include all those change
in single patch https://review.gerrithub.io/#/c/263180/

I will try get it reviewed and merge the patch as soon as possible.

With Regards,
Jiffin

On 14/02/16 21:54, Serkan Çoban wrote:

Thanks for the answer,
AFAIK, when using pNFS, every different file read/write should go to
different server in order to utilize all servers in parallel.
I am waiting for the patches and future releases.Thanks for all your
efforts.

Serkan

On Wed, Feb 10, 2016 at 2:21 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:

Hi,

Sorry for the delayed delayed response


On 10/02/16 13:51, Pranith Kumar Karampuri wrote:



On 02/10/2016 01:15 PM, Serkan Çoban wrote:

Hi Jiffin,

Any update about the write path?

I saw him send some mails related to this, yesterday and day before. You
will hear from him soon.

Pranith


Serkan

On Sun, Jan 31, 2016 at 5:00 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:


On 31/01/16 16:19, Serkan Çoban wrote:

Hi,
I am testing nfs-ganesha with pNFS on EC volume and I want to ask
some
questions.
Assume we have two clients: c1,c2
and six servers with one 4+2 EC volume constructed as below:

gluster volume create vol1 disperse 6 redundancy 2
server{1..6}:/brick/b1
\

 server{1..6}:/brick/b2 \

 server{1..6}:/brick/b3 \

 server{1..6}:/brick/b4 \

 server{1..6}:/brick/b5 \

 server{1..6}:/brick/b6
vol1 is mounted on both clients as server1:/vol1

Here is first question: When I write file1 from client1 and file2
from
client2; which servers get the files? In my opinion server1 gets
file1
and server2 gets file2 and do EC calculations and distribute chunks
to
other servers. Am I right?

Can anyone explain detailed read/write path with pNFS and EC volumes?



Currently in pNFS cluster , request is send to the first DS available in
the
list.
So I am to planning to distribute different files among the available DS,
as
a first step
I had send out patch in gluster (http://review.gluster.org/#/c/13402/)
After this one got merged, there are certain changes in ganesha side too.
I had tested both changes in my setup and it was working as accepted.

Once again thanks for pointing out this issue

--
Regards,
Jiffin





I never tried pNFS with EC volume, will try the same by my own and
reply
to
your question as soon as possible.
--
Jiffin


Thanks,
Serkan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs-ganesha/pnfs read/write path on EC volume

2016-03-18 Thread Jiffin Tony Thottan



On 18/03/16 11:04, Serkan Çoban wrote:

I am using 2.3. But I can test 2.4 if there are centos packages..
Or I can try to build rpm packages from latest ganesha if it is easy to build..
Unfortunately I can't find any EPEL/CentOS[1] packages for 2.4; there are
only Fedora 24[2] packages.

CCing Kaleb, who knows about the next 2.3.x update.

The following are the steps used on ci.centos.org to build the RPMs (I hope
they work):


# install NFS-Ganesha build dependencies
yum -y install git bison flex cmake gcc-c++ libacl-devel krb5-devel 
dbus-devel libnfsidmap-devel libwbclient-devel libcap-devel 
libblkid-devel rpm-build redhat-rpm-config


# install the latest version of gluster
yum -y install centos-release-gluster
yum -y install glusterfs-api-devel

git init nfs-ganesha
cd nfs-ganesha
git fetch https://review.gerrithub.io/ffilz/nfs-ganesha next

git checkout -b next FETCH_HEAD

# update libntirpc
git submodule update --init || git submodule sync

mkdir build
cd build

cmake -DCMAKE_BUILD_TYPE=Maintainer ../src && make rpm

[1] http://mirror.centos.org/centos-7/7.2.1511/storage/x86_64/gluster-3.7/
[2] http://arm.koji.fedoraproject.org/koji/packageinfo?packageID=17239

--
Jiffin

On Fri, Mar 18, 2016 at 7:03 AM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:


On 17/03/16 23:17, Serkan Çoban wrote:

Hi Jiffin,
Will these patches land in 3.7.9?


Hi Serkan,

I moved all changes to ganesha [1] and got merged upstream (ganesha
V2.4-dev-9),
I missed to back port it to 2.3, so which ganesha build are u using?

[1] https://review.gerrithub.io/#/c/263180/

Jiffin


Serkan

On Tue, Feb 16, 2016 at 2:47 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:

Hi Serkan,

I had moved out previous gfapi-side to ganesha and include all those
change
in single patch https://review.gerrithub.io/#/c/263180/

I will try get it reviewed and merge the patch as soon as possible.

With Regards,
Jiffin

On 14/02/16 21:54, Serkan Çoban wrote:

Thanks for the answer,
AFAIK, when using pNFS, every different file read/write should go to
different server in order to utilize all servers in parallel.
I am waiting for the patches and future releases.Thanks for all your
efforts.

Serkan

On Wed, Feb 10, 2016 at 2:21 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:

Hi,

Sorry for the delayed delayed response


On 10/02/16 13:51, Pranith Kumar Karampuri wrote:



On 02/10/2016 01:15 PM, Serkan Çoban wrote:

Hi Jiffin,

Any update about the write path?

I saw him send some mails related to this, yesterday and day before.
You
will hear from him soon.

Pranith


Serkan

On Sun, Jan 31, 2016 at 5:00 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:


On 31/01/16 16:19, Serkan Çoban wrote:

Hi,
I am testing nfs-ganesha with pNFS on EC volume and I want to ask
some
questions.
Assume we have two clients: c1,c2
and six servers with one 4+2 EC volume constructed as below:

gluster volume create vol1 disperse 6 redundancy 2
server{1..6}:/brick/b1
\

  server{1..6}:/brick/b2 \

  server{1..6}:/brick/b3 \

  server{1..6}:/brick/b4 \

  server{1..6}:/brick/b5 \

  server{1..6}:/brick/b6
vol1 is mounted on both clients as server1:/vol1

Here is first question: When I write file1 from client1 and file2
from
client2; which servers get the files? In my opinion server1 gets
file1
and server2 gets file2 and do EC calculations and distribute chunks
to
other servers. Am I right?

Can anyone explain detailed read/write path with pNFS and EC
volumes?



Currently in pNFS cluster , request is send to the first DS available
in
the
list.
So I am to planning to distribute different files among the available
DS,
as
a first step
I had send out patch in gluster (http://review.gluster.org/#/c/13402/)
After this one got merged, there are certain changes in ganesha side
too.
I had tested both changes in my setup and it was working as accepted.

Once again thanks for pointing out this issue

--
Regards,
Jiffin





I never tried pNFS with EC volume, will try the same by my own and
reply
to
your question as soon as possible.
--
Jiffin


Thanks,
Serkan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Improving subdir export for NFS-Ganesha

2016-03-16 Thread Jiffin Tony Thottan



On 16/03/16 09:09, Atin Mukherjee wrote:


On 03/15/2016 06:39 PM, Jiffin Tony Thottan wrote:


On 15/03/16 12:23, Atin Mukherjee wrote:

On 03/15/2016 11:48 AM, Jiffin Tony Thottan wrote:

Hi all,

The subdir export is one of key features for NFS server. NFS-ganesha
have already supports subdir export,
   but it has lot of limitations when it is intregrated with gluster.

Current Implementation :
Following steps are required for a subdir export
* export volume using ganesha.enable option
* edit the export configuration file by adding subdir options
* do refresh-config in that node using ganesha-ha.sh
* limitation : multiple directories cannot be exported at a time via
script.
 If user to need to do that(it is possible), all the
steps should be done in manually
 which includes creating export conf file, use latest
export id, include it in ganesha conf etc.
 And also here it become mandatory to export root
before exporting subdir.

Suggested approach :

* Introduce new volume set command  "ganesha.subdir" which will handle
above mentioned issue cleanly
 for example, gluster volume set  ganesha.subdir
<path1,path2,path3 ...>
 if u want to unexport path2, use the same command  with
mentioning path2
 gluster volume set  ganesha.subdir <path1,path3 ...>.(Is
different option required ?)

How do you handle a case where you have to unexport all the paths?

How about the above question?


Sorry, I missed that in the previous mail.
Use: gluster volume set <volname> ganesha.subdir ""
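
To make the proposal concrete, the intended usage would be something like the
following (vol1 and the paths are only examples, and the option itself is
still a proposal at this point):

# export three subdirectories of the volume
gluster volume set vol1 ganesha.subdir /dir1,/dir2,/dir3

# unexport /dir2 by re-issuing the command without it
gluster volume set vol1 ganesha.subdir /dir1,/dir3

# unexport all subdirectories
gluster volume set vol1 ganesha.subdir ""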
--
Jiffin

 The root of the volume should be export only using ganesha.enable
options.
 This require a lot of additions in glusterd code base and minor
changes in snapshot functionality.

Could you detail out what all changes will be required in glusterd
codebase when volume set ganesha.subdir  is executed?
Based on that we can only take a call whether its feasible to take this
in 3.7.x or move it to 3.8.

This is just a rough estimation :
1.) glusterd/cli for introducing new option
2.) need to modify functions like ganesha_manage_export() to accommodate
new option
3.) changes related to ganesha scripts
4.) minor modification to snapshot
functionality(glusterd_copy_nfs_ganesha_file)  for ganesha
approximately I expect around 100-150 lines of code to be added

--
Jiffin

Can above mentioned improvement  targeted  for 3.7.x release (3.7.10 or
3.7.11) or should I need to move it for 3.8 release ?
Please provide your valuable feedback on the same.

Please Note : It is not related to subdir export for fuse mount.

Regards,
Jiffin




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Improving subdir export for NFS-Ganesha

2016-03-15 Thread Jiffin Tony Thottan



On 15/03/16 12:23, Atin Mukherjee wrote:


On 03/15/2016 11:48 AM, Jiffin Tony Thottan wrote:

Hi all,

The subdir export is one of key features for NFS server. NFS-ganesha
have already supports subdir export,
  but it has lot of limitations when it is intregrated with gluster.

Current Implementation :
Following steps are required for a subdir export
* export volume using ganesha.enable option
* edit the export configuration file by adding subdir options
* do refresh-config in that node using ganesha-ha.sh
* limitation : multiple directories cannot be exported at a time via
script.
If user to need to do that(it is possible), all the
steps should be done in manually
which includes creating export conf file, use latest
export id, include it in ganesha conf etc.
And also here it become mandatory to export root
before exporting subdir.

Suggested approach :

* Introduce new volume set command  "ganesha.subdir" which will handle
above mentioned issue cleanly
for example, gluster volume set  ganesha.subdir
<path1,path2,path3 ...>
if u want to unexport path2, use the same command  with mentioning path2
gluster volume set  ganesha.subdir <path1,path3 ...>.(Is
different option required ?)

How do you handle a case where you have to unexport all the paths?

The root of the volume should be export only using ganesha.enable
options.
This require a lot of additions in glusterd code base and minor
changes in snapshot functionality.

Could you detail out what all changes will be required in glusterd
codebase when volume set ganesha.subdir  is executed?
Based on that we can only take a call whether its feasible to take this
in 3.7.x or move it to 3.8.


This is just a rough estimation :
1.) glusterd/cli for introducing new option
2.) need to modify functions like ganesha_manage_export() to accommodate 
new option

3.) changes related to ganesha scripts
4.) minor modification to snapshot 
functionality(glusterd_copy_nfs_ganesha_file)  for ganesha

approximately I expect around 100-150 lines of code to be added

--
Jiffin

Can above mentioned improvement  targeted  for 3.7.x release (3.7.10 or
3.7.11) or should I need to move it for 3.8 release ?
Please provide your valuable feedback on the same.

Please Note : It is not related to subdir export for fuse mount.

Regards,
Jiffin




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Improving subdir export for NFS-Ganesha

2016-03-15 Thread Jiffin Tony Thottan

Hi all,

The subdir export is one of the key features of an NFS server. NFS-ganesha
already supports subdir export,

 but it has a lot of limitations when it is integrated with gluster.

Current Implementation :
Following steps are required for a subdir export
* export volume using ganesha.enable option
* edit the export configuration file by adding subdir options
* do refresh-config in that node using ganesha-ha.sh
* Limitation: multiple directories cannot be exported at a time via the
script.
   If the user needs to do that (it is possible), all the
steps have to be done manually,
   which includes creating the export conf file, using the latest
export id, including it in the ganesha conf, etc.
   And here it also becomes mandatory to export the root
before exporting a subdir.


Suggested approach :

* Introduce a new volume set command "ganesha.subdir" which will handle
the above mentioned issue cleanly.
   For example: gluster volume set <volname> ganesha.subdir <path1,path2,path3 ...>

   If you want to unexport path2, use the same command without mentioning path2:
   gluster volume set <volname> ganesha.subdir <path1,path3 ...>. (Is a
different option required?)
   The root of the volume should be exported only using the ganesha.enable
option.
   This requires a lot of additions in the glusterd code base and minor
changes in snapshot functionality.


Can the above mentioned improvement be targeted for a 3.7.x release (3.7.10 or
3.7.11), or should I move it to the 3.8 release?

Please provide your valuable feedback on the same.

Please Note : It is not related to subdir export for fuse mount.

Regards,
Jiffin


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster and SL storage

2016-03-11 Thread Jiffin Tony Thottan



On 11/03/16 17:50, Venkatesh Gopal wrote:
I have a Softlayer Endurance storage volume and have mounted it using 
NFS..



 mount -t nfs4 -o hard,intr 
hostname.service.softlayer.com:/IBM01SEV330022_1 /mnt/slvol


[root@mycentostester1 ~]# gluster volume create test-volume transport 
tcp mycentostester1:/mnt/slvol/gvol


volume create: test-volume: failed: Glusterfs is not supported on 
brick: mycentostester1:/mnt/slvol/gvol.

Setting extended attributes failed, reason: Operation not supported.

[root@mycentostester1 ~]# gluster volume create test-volume  
mycentostester1:/mnt/slvol/gvol


volume create: test-volume: failed: Glusterfs is not supported on 
brick: mycentostester1:/mnt/slvol/gvol.

Setting extended attributes failed, reason: Operation not supported.



So, can we not create a gluster volume on a NFS mount point directory?


Hi Venkatesh,

One of the requirements for a glusterfs volume is that the backend (brick
filesystem) needs to support extended attributes.

NFS shares do not support xattrs.
Is there any specific reason to use an NFS share as a brick (glusterfs server)?
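
A quick way to check whether a candidate brick path supports extended
attributes, using the mount point from your example:

# try to set and read back a user xattr on the NFS mount
touch /mnt/slvol/xattr-test
setfattr -n user.test -v works /mnt/slvol/xattr-test
getfattr -n user.test /mnt/slvol/xattr-test
# on an NFS mount this typically fails with 'Operation not supported'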

--
Jiffin



Venkatesh.

*___*
*Venkatesh Gopal*
*STSM, dashDB/Puffin Development, IBM Analytics*
*email  : gop...@us.ibm.com*
*Phone : 913-599-8721 (T/L 337-8721)*
*Mobile : 913-231-5907*
*___*



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] REMINDER: MInutes of Gluster Community Bug Triage meeting at 12:00 UTC on 1st March, 2016

2016-03-06 Thread Jiffin Tony Thottan



On 01/03/16 14:32, Jiffin Tony Thottan wrote:

Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.



Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-24/gluster_bug_triage.2015-11-24-12.00.html 

Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-24/gluster_bug_triage.2015-11-24-12.00.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-24/gluster_bug_triage.2015-11-24-12.00.log.html


Meeting summary
Meeting started by jiffin at 12:00:19 UTC. The full logs are available
at
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-01/gluster_bug_triage.2016-03-01-12.00.log.html
.



Meeting summary
---
* Roll Call  (jiffin, 12:00:27)

* Manikandan and Nandaja will update on bug automation  (jiffin,
  12:05:05)

* Scheduling moderators for Gluster Community Bug Triage meeting for a
  month  (jiffin, 12:06:32)
  * rafi will host bug triage on MArch 8th  (jiffin, 12:11:07)
  * ggarg will host on March 8  (jiffin, 12:15:12)
  * skoduri will host meeting on March 15th  (jiffin, 12:18:04)
  * rafi will host meeting on March 22nd  (jiffin, 12:19:57)
  * Manikandan will host meeting on March 29th  (jiffin, 12:21:26)

* Group Triage  (jiffin, 12:21:53)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage
(jiffin, 12:21:59)
  * LINK:
http://gluster.readthedocs.org/en/latest/Contributors-Guide/Bug-Triage/
contains more details about the triaging itself  (jiffin, 12:22:35)

* Open Floor  (jiffin, 12:45:47)
  * no more pending bugs  (jiffin, 12:47:52)

Meeting ended at 12:50:14 UTC.



People Present (lines said)
---
* jiffin (83)
* Manikandan (53)
* hgowtham (19)
* ggarg (18)
* zodbot (3)
* aravindavk (2)
* atinm (2)
* glusterbot (1)


Next week ggarg will host Gluster community  bug triage meeting.


Thank you
Jiffin
___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS Client issues with Gluster Server 3.6.9

2016-03-06 Thread Jiffin Tony Thottan



On 05/03/16 07:12, Mark Selby wrote:
I am trying to use GlusterFS as a general purpose NFS file server. I 
have tried using the FUSE client but the performance fall off vs NFS 
is quite large


Both the client and the server are Ubuntu 14.04.

I am using Gluster 3.6.9 because of the FUSE performance issues that 
have been reported with 3.7.8 (see 
https://bugzilla.redhat.com/show_bug.cgi?id=1309462)


I am having serious issues with a generic NFS client as shown by the 
issues below. Basically most FOPs are giving me a Remote I/O error.


I would not think I was 1st person to see these issues - but my Google 
Fu is not working.


Any and all help would be much appreciated

BTW - These operation against a plain Linux NFS server work fine.


root@dc1strg001x /var/log 448# gluster volume status
Status of volume: backups
Gluster process PortOnline  Pid
-- 


Brick dc1strg001x:/zfspool/glusterfs/backups/data 49152   Y 6462
Brick dc1strg002x:/zfspool/glusterfs/backups/data 49152   Y 6382
NFS Server on localhost 2049Y   6619
Self-heal Daemon on localhost N/A Y   6626
NFS Server on dc1strg002x 2049Y   6502
Self-heal Daemon on dc1strg002x N/A Y   6509


root@vc1test001 /root 735# mount -o vers=3 -t nfs dc1strg001x:/backups 
/mnt/backups_nfs


root@vc1test001 /mnt/backups_nfs 737# dd if=/dev/zero of=testfile 
bs=16k count=16384

16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.46237 s, 109 MB/s

root@vc1test001 /mnt/backups_nfs 738# rm testfile

root@vc1test001 /mnt/backups_nfs 739# dd if=/dev/zero of=testfile 
bs=16k count=16384

dd: failed to open 'testfile': Remote I/O error

root@vc1test001 /var/tmp 743# rsync -av testfile /mnt/backups_nfs/
sending incremental file list
testfile
rsync: mkstemp "/mnt/backups_nfs/.testfile.bzg47C" failed: Remote I/O 
error (121)


sent 1,074,004,056 bytes  received 121 bytes 165,231,411.85 bytes/sec
total size is 1,073,741,824  speedup is 1.00
rsync error: some files/attrs were not transferred (see previous 
errors) (code 23) at main.c(1183) [sender=3.1.0]




Can you please provide the volume configuration (gluster vol info <volname>)
and the log file for the NFS server which you mounted (/var/log/glusterfs)?
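
Concretely, something like the following would help (the volume name is taken
from your status output, and gluster NFS normally logs to
/var/log/glusterfs/nfs.log):

gluster volume info backups
tail -n 100 /var/log/glusterfs/nfs.log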

--
Jiffin

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is NFS available / enabled on purpose as default in the Ubuntu PPA?

2016-03-02 Thread Jiffin Tony Thottan



On 02/03/16 19:48, Fabian Wenk wrote:

Hello Joe

On 01.03.16 19:07, Joe Julian wrote:

On 03/01/2016 09:43 AM, Fabian Wenk wrote:


With some testing, I did realize, that I can mount the volume with NFS
from anywhere in my local network. According to the documentation [1],
the option nfs.rpc-auth-allow should be set to 'Reject All' as
default, but somehow it is not.

  [1]
https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/ 



Yep, that's a documentation bug. The source says, "By default, all
connections are allowed." - xlators/nfs/server/src/nfs.c#1848..1849


Not nice, but thank you for the clarification.


Hi Fabian,

You can use the following feature for gluster NFS, which is more advanced
than the plain nfs.rpc-auth-allow option:

http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Export%20And%20Netgroup%20Authentication/

And here too, by default, all clients are rejected.
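
If you just want to restrict the built-in gluster NFS server to your local
network, a minimal sketch (the volume name and subnet are only examples) is:

# allow NFS mounts only from 192.168.1.0/24 and reject everything else
gluster volume set myvolume nfs.rpc-auth-allow 192.168.1.*
gluster volume set myvolume nfs.rpc-auth-reject *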

--
Jiffin
So my workaround then is probably the best solution which needs to be 
done on each volume.



bye
Fabian
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 3 hours)

2016-03-01 Thread Jiffin Tony Thottan

Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thank you
Jiffin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs-ganesha/pnfs read/write path on EC volume

2016-02-16 Thread Jiffin Tony Thottan


Hi Serkan,

I had moved out previous gfapi-side to ganesha and include all those 
change in single patch https://review.gerrithub.io/#/c/263180/


I will try get it reviewed and merge the patch as soon as possible.

With Regards,
Jiffin
On 14/02/16 21:54, Serkan Çoban wrote:

Thanks for the answer,
AFAIK, when using pNFS, every different file read/write should go to
different server in order to utilize all servers in parallel.
I am waiting for the patches and future releases.Thanks for all your efforts.

Serkan

On Wed, Feb 10, 2016 at 2:21 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:

Hi,

Sorry for the delayed delayed response


On 10/02/16 13:51, Pranith Kumar Karampuri wrote:



On 02/10/2016 01:15 PM, Serkan Çoban wrote:

Hi Jiffin,

Any update about the write path?

I saw him send some mails related to this, yesterday and day before. You
will hear from him soon.

Pranith


Serkan

On Sun, Jan 31, 2016 at 5:00 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:


On 31/01/16 16:19, Serkan Çoban wrote:

Hi,
I am testing nfs-ganesha with pNFS on EC volume and I want to ask some
questions.
Assume we have two clients: c1,c2
and six servers with one 4+2 EC volume constructed as below:

gluster volume create vol1 disperse 6 redundancy 2
server{1..6}:/brick/b1
\

server{1..6}:/brick/b2 \

server{1..6}:/brick/b3 \

server{1..6}:/brick/b4 \

server{1..6}:/brick/b5 \

server{1..6}:/brick/b6
vol1 is mounted on both clients as server1:/vol1

Here is first question: When I write file1 from client1 and file2 from
client2; which servers get the files? In my opinion server1 gets file1
and server2 gets file2 and do EC calculations and distribute chunks to
other servers. Am I right?

Can anyone explain detailed read/write path with pNFS and EC volumes?



Currently in pNFS cluster , request is send to the first DS available in the
list.
So I am to planning to distribute different files among the available DS, as
a first step
I had send out patch in gluster (http://review.gluster.org/#/c/13402/)
After this one got merged, there are certain changes in ganesha side too.
I had tested both changes in my setup and it was working as accepted.

Once again thanks for pointing out this issue

--
Regards,
Jiffin





I never tried pNFS with EC volume, will try the same by my own and reply
to
your question as soon as possible.
--
Jiffin


Thanks,
Serkan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] nfs-ganesha/pnfs read/write path on EC volume

2016-02-10 Thread Jiffin Tony Thottan

Hi,

Sorry for the delayed response

On 10/02/16 13:51, Pranith Kumar Karampuri wrote:



On 02/10/2016 01:15 PM, Serkan Çoban wrote:

Hi Jiffin,

Any update about the write path?
I saw him send some mails related to this, yesterday and day before. 
You will hear from him soon.


Pranith


Serkan

On Sun, Jan 31, 2016 at 5:00 PM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:


On 31/01/16 16:19, Serkan Çoban wrote:

Hi,
I am testing nfs-ganesha with pNFS on EC volume and I want to ask some
questions.
Assume we have two clients: c1,c2
and six servers with one 4+2 EC volume constructed as below:

gluster volume create vol1 disperse 6 redundancy 2 
server{1..6}:/brick/b1

\

   server{1..6}:/brick/b2 \

   server{1..6}:/brick/b3 \

   server{1..6}:/brick/b4 \

   server{1..6}:/brick/b5 \

   server{1..6}:/brick/b6
vol1 is mounted on both clients as server1:/vol1

Here is first question: When I write file1 from client1 and file2 from
client2; which servers get the files? In my opinion server1 gets file1
and server2 gets file2 and do EC calculations and distribute chunks to
other servers. Am I right?

Can anyone explain detailed read/write path with pNFS and EC volumes?


Currently in a pNFS cluster, the request is sent to the first DS available
in the list.
So I am planning to distribute different files among the available DSes;
as a first step

I had sent out a patch in gluster (http://review.gluster.org/#/c/13402/).
After this one got merged, there are certain changes on the ganesha side too.
I had tested both changes in my setup and it was working as expected.

Once again, thanks for pointing out this issue.
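
For reference, the clients in such a setup mount with NFSv4.1 so that pNFS
layouts are used (server1:/vol1 and the mount point come from the example
above):

mount -t nfs -o vers=4.1 server1:/vol1 /mnt/vol1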

--
Regards,
Jiffin



I never tried pNFS with EC volume, will try the same by my own and 
reply to

your question as soon as possible.
--
Jiffin


Thanks,
Serkan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] posix_acl_default [Invalid argument] issue with distributed geo-rep

2016-01-31 Thread Jiffin Tony Thottan



On 31/01/16 23:25, ML mail wrote:

Hello,

I just set up distributed geo-replication to a slave on my 2 nodes' replicated 
volume and so far it works but I see every 60 seconds in the slave's 
geo-replication-slaves gluster log file the following message:


[2016-01-31 17:38:48.027792] I [dict.c:473:dict_get] 
(-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.6/xlator/system/posix-acl.so(posix_acl_setxattr_cbk+0x26)
 [0x7f2334c5c166] 
-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.6/xlator/system/posix-acl.so(handling_other_acl_related_xattr+0xb0)
 [0x7f2334c5c0f0] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get+0x93) 
[0x7f233c04b0c3] ) 0-dict: !this || key=system.posix_acl_default [Invalid argument]


This is not an error to be afraid of. The above log message can pop up on
every setxattr call.

You can track the fix at http://review.gluster.org/#/c/13325/
--
Jiffin


The exact log file name is the following: 
/var/log/glusterfs/geo-replication-slaves/d11ac2ca-439b-4b23-ba3a-18f3849f83ed:gluster%3A%2F%2F127.0.0.1%3Amyvolume-geo.gluster.log.

Because I have a lot of files this log file is already 16 MB big after not even 
24 hours of running geo-replication. Does anyone understand what issue is? As 
far as I understand it has to do with the file system's ACL. Let me detail my 
setup below:

OS: Debian GNU/Linux 8.3
Gluster version: 3.7.6
Filesystem on master nodes: ZFS (with "acltype" parameter set to "posixacl")
Filesystem on slave node: XFS


Volume info:

Type: Replicate
Volume ID: d11ac2ca-439b-4b23-ba3a-18f3849f83ed
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1a.domain.com:/data/myvolume/brick
Brick2: gfs1b.domain.com:/data/myvolume/brick
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
nfs.disable: on

Any ideas? I would be really interested to find a solution here and also make 
sure my data is fine.

Best regards
ML
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

