Re: [Gluster-devel] SELinux and GlusterFS

2020-09-28 Thread Jiffin Tony Thottan
Hey,

The SELinux feature is supported in GlusterFS, but for FUSE clients some patches are still missing on the kernel side, AFAIR. The feature works over NFSv4 via nfs-ganesha. I am not actively working on those projects at the moment, so I am adding Arjun, who worked on the support in nfs-ganesha, to the loop and cc'ing the dev lists of both projects.
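
To make that concrete: once the volume is exported through nfs-ganesha and mounted with NFSv4.2 (which can carry security labels), the per-file context should be readable from the client as the security.selinux extended attribute; that is the piece still missing on the FUSE path. A minimal sketch of checking a label, assuming a labelled mount at /mnt/gluster (the path is only a placeholder):

/* Minimal sketch (not from the original mail): read the SELinux label of a
 * file on a labelled NFSv4.2 mount of a gluster volume exported via
 * nfs-ganesha. The mount path below is a placeholder.
 * Build: cc -o getlabel getlabel.c
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(void)
{
    const char *path = "/mnt/gluster/file.txt";   /* placeholder path */
    char label[256];

    ssize_t len = getxattr(path, "security.selinux", label, sizeof(label) - 1);
    if (len < 0) {
        perror("getxattr(security.selinux)");     /* fails if labels are not supported */
        return 1;
    }
    label[len] = '\0';
    printf("%s: %s\n", path, label);
    return 0;
}

The same getxattr either returns the context or fails, which is a quick way to check whether a given mount actually carries SELinux labels.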

--
Jiffin

On Fri, 25 Sep 2020 at 3:07 AM, Karl MacMillan wrote:

> Hey,
>
> I saw your FOSDEM and other presentations about GlusterFS and SELinux -
> the last I saw was about a missing 3.10 merge. I'm looking for a distributed
> filesystem with SELinux support and was wondering what the current status
> is. Sorry to bother you with email - I just couldn't find anything
> definitive about where things are at the moment.
>
> If GlusterFS isn't ready, are you aware of another option? I guess maybe
> NFSv4? But that's not exactly the same kind of thing.
>
> Thanks - Karl
>
>
> --
Jiffin Tony Thottan
Ph:+918129412217
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Requesting reviews [Re: Release 7 Branch Created]

2019-07-23 Thread Jiffin Tony Thottan

It has passed all the regressions, and I have tested the packages on a 3-node setup.

--

Jiffin

On 15/07/19 7:44 PM, Atin Mukherjee wrote:

Please ensure:
1. The commit message explains the motive behind this change.
2. I always feel more confident kicking off a review if a patch has passed regression. Can you please ensure that the Verified flag is set?


On Mon, Jul 15, 2019 at 5:27 PM Jiffin Tony Thottan <jthot...@redhat.com> wrote:


Hi,

The "Add Ganesha HA bits back to glusterfs code repo"[1] is
targeted for
glusterfs-7. Requesting maintainers to review below two patches

[1]
https://review.gluster.org/#/q/topic:ref-663+(status:open+OR+status:merged)

Regards,

Jiffin

On 15/07/19 5:23 PM, Jiffin Thottan wrote:
>
> - Original Message -
> From: "Rinku Kothiya" mailto:rkoth...@redhat.com>>
> To: maintain...@gluster.org <mailto:maintain...@gluster.org>,
gluster-devel@gluster.org <mailto:gluster-devel@gluster.org>,
"Shyam Ranganathan" mailto:srang...@redhat.com>>
> Sent: Wednesday, July 3, 2019 10:30:58 AM
> Subject: [Gluster-devel] Release 7 Branch Created
>
> Hi Team,
>
> Release 7 branch has been created in upstream.
>
> ## Schedule
>
> Currently the plan is working backwards on the schedule; here's what we have:
> - Announcement: Week of Aug 4th, 2019
> - GA tagging: Aug-02-2019
> - RC1: On demand before GA
> - RC0: July-03-2019
> - Late features cut-off: Week of June-24th, 2019
> - Branching (feature cutoff date): June-17-2019
>
> Regards
> Rinku
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Requesting reviews [Re: Release 7 Branch Created]

2019-07-15 Thread Jiffin Tony Thottan

Hi,

The "Add Ganesha HA bits back to glusterfs code repo"[1] is targeted for 
glusterfs-7. Requesting maintainers to review below two patches


[1] 
https://review.gluster.org/#/q/topic:ref-663+(status:open+OR+status:merged)


Regards,

Jiffin

On 15/07/19 5:23 PM, Jiffin Thottan wrote:


- Original Message -
From: "Rinku Kothiya" 
To: maintain...@gluster.org, gluster-devel@gluster.org, "Shyam Ranganathan" 

Sent: Wednesday, July 3, 2019 10:30:58 AM
Subject: [Gluster-devel] Release 7 Branch Created

Hi Team,

Release 7 branch has been created in upstream.

## Schedule

Currently the plan is working backwards on the schedule; here's what we have:
- Announcement: Week of Aug 4th, 2019
- GA tagging: Aug-02-2019
- RC1: On demand before GA
- RC0: July-03-2019
- Late features cut-off: Week of June-24th, 2019
- Branching (feature cutoff date): June-17-2019

Regards
Rinku

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA clustersolution back to gluster code as gluster-7 feature

2019-05-06 Thread Jiffin Tony Thottan

Hi

On 04/05/19 12:04 PM, Strahil wrote:

Hi Jiffin,

No vendor will support your corosync/pacemaker stack if you do not have proper 
fencing.
As Gluster is already a cluster of its own, it makes sense to control 
everything from there.

Best Regards,



Yeah, I agree with your point. What I meant to say is that, by default, this feature won't provide any fencing mechanism; users need to configure fencing for the cluster manually. In future we can try to include a default fencing configuration for the ganesha cluster as part of the Ganesha HA configuration.

Regards,

Jiffin



Strahil Nikolov

On May 3, 2019 09:08, Jiffin Tony Thottan wrote:


On 30/04/19 6:59 PM, Strahil Nikolov wrote:

Hi,

I'm posting this again as it got bounced.
Keep in mind that corosync/pacemaker is hard for new admins/users to set up properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only hosting bricks, but other workloads as well.
Corosync/pacemaker require proper fencing to be set up, and most of the stonith resources 'shoot the other node in the head'.
I would be happy to see something easy to deploy (let's say 'cluster.enable-ha-ganesha true'), with gluster bringing up the floating IPs and taking care of the NFS locks, so that no disruption is felt by the clients.


It does take care of those, provided certain prerequisites are met, but note that fencing won't be configured for this setup. We may think about that in the future.

--

Jiffin


Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney  wrote:
 
+1!

I'm using nfs-ganesha in my next upgrade so my client systems can use NFS instead of fuse mounts. Having an integrated, designed-in process to coordinate multiple nodes into an HA cluster will be very welcome.

On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
wrote:
 
Hi all,


Some of you folks may be familiar with the HA solution provided for nfs-ganesha by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project "Storhaug". However, Storhaug has not progressed much over the last two years and its development is currently halted, hence the plan to restore the old HA ganesha solution to the gluster code repository, with some improvements, targeting the next gluster release, 7.

I have opened an issue [1] with details and posted an initial set of patches [2].

Please share your thoughts on the same.


Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)



--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA clustersolution back to gluster code as gluster-7 feature

2019-05-03 Thread Jiffin Tony Thottan


On 30/04/19 6:59 PM, Strahil Nikolov wrote:

Hi,

I'm posting this again as it got bounced.
Keep in mind that corosync/pacemaker is hard for new admins/users to set up properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only hosting bricks, but other workloads as well.
Corosync/pacemaker require proper fencing to be set up, and most of the stonith resources 'shoot the other node in the head'.
I would be happy to see something easy to deploy (let's say 'cluster.enable-ha-ganesha true'), with gluster bringing up the floating IPs and taking care of the NFS locks, so that no disruption is felt by the clients.



It does take care of those, provided certain prerequisites are met, but note that fencing won't be configured for this setup. We may think about that in the future.


--

Jiffin



Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney  wrote:
   
+1!

I'm using nfs-ganesha in my next upgrade so my client systems can use NFS instead of fuse mounts. Having an integrated, designed-in process to coordinate multiple nodes into an HA cluster will be very welcome.

On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
wrote:
   
Hi all,


Some of you folks may be familiar with the HA solution provided for nfs-ganesha by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project "Storhaug". However, Storhaug has not progressed much over the last two years and its development is currently halted, hence the plan to restore the old HA ganesha solution to the gluster code repository, with some improvements, targeting the next gluster release, 7.

I have opened an issue [1] with details and posted an initial set of patches [2].

Please share your thoughts on the same.


Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)



--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-05-03 Thread Jiffin Tony Thottan


On 30/04/19 6:41 PM, Renaud Fortier wrote:


IMO, you should keep storhaug and maintain it. At the beginning, we were with pacemaker and corosync. Then we moved to storhaug with the upgrade to gluster 4.1.x. Now you are talking about going back to how it was. Maybe it will be better with pacemaker and corosync, but the important thing is to have a solution that will be stable and maintained.




I agree it is very frustrating; there is no further development planned unless someone picks it up and works on its stabilization and improvement.

My plan is just to get back what gluster and nfs-ganesha had before.

--

Jiffin


thanks

Renaud

*De :*gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] *De la part de* Jim Kinney

*Envoyé :* 30 avril 2019 08:20
*À :* gluster-us...@gluster.org; Jiffin Tony Thottan 
; gluster-us...@gluster.org; Gluster Devel 
; gluster-maintain...@gluster.org; 
nfs-ganesha ; de...@lists.nfs-ganesha.org
*Objet :* Re: [Gluster-users] Proposing to previous ganesha HA cluster 
solution back to gluster code as gluster-7 feature


+1!
I'm using nfs-ganesha in my next upgrade so my client systems can use NFS instead of fuse mounts. Having an integrated, designed-in process to coordinate multiple nodes into an HA cluster will be very welcome.


On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan <jthot...@redhat.com> wrote:


Hi all,

Some of you folks may be familiar with the HA solution provided for nfs-ganesha by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project "Storhaug". However, Storhaug has not progressed much over the last two years and its development is currently halted, hence the plan to restore the old HA ganesha solution to the gluster code repository, with some improvements, targeting the next gluster release, 7.

I have opened an issue [1] with details and posted an initial set of patches [2].

Please share your thoughts on the same.

Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2]
https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)


--
Sent from my Android device with K-9 Mail. All tyopes are thumb 
related and reflect authenticity.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-04-30 Thread Jiffin Tony Thottan

Hi all,

Some of you folks may be familiar with the HA solution provided for nfs-ganesha by gluster using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project "Storhaug". However, Storhaug has not progressed much over the last two years and its development is currently halted, hence the plan to restore the old HA ganesha solution to the gluster code repository, with some improvements, targeting the next gluster release, 7.

I have opened an issue [1] with details and posted an initial set of patches [2].

Please share your thoughts on the same.

Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663



[2] 
https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Announcing Glusterfs release 3.12.15 (Long Term Maintenance)

2018-10-16 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.15 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.15/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 


[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.15/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Problems about cache virtual glusterfs ACLs for ganesha in md-cache

2018-10-10 Thread Jiffin Tony Thottan



On Thursday 11 October 2018 08:10 AM, Raghavendra Gowdappa wrote:



On Thu, Oct 11, 2018 at 7:47 AM Kinglong Mee wrote:


Cc nfs-ganesha,

Md-cache now has an option "cache-posix-acl" that controls caching of posix ACLs ("system.posix_acl_access"/"system.posix_acl_default") and virtual glusterfs ACLs ("glusterfs.posix.acl"/"glusterfs.posix.default_acl").


Not sure how virtual xattrs are consumed or who generates them. 
+Raghavendra Talur  +Thottan, Jiffin 
 - acl maintainers.


Currently the only consumer of this virtual xattr is nfs-ganesha. NFSv4 ACLs are sent from the client, and ganesha converts them to POSIX ACLs and sends them as the virtual xattr to the glusterfs bricks using the pub_glfs_h_acl_set/get APIs. AFAIR, in the Samba vfs module they convert Windows ACLs to POSIX ACLs and send them as actual getxattr/setxattr calls on the "system.posix_acl_*" xattrs.

--

Jiffin
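
For anyone following along, a minimal sketch of that handle-based call path from a plain gfapi consumer; the volume, server and path names are placeholders, the glfs_h_acl_* calls are only available when libgfapi is built with ACL support, and this is purely illustrative rather than ganesha's actual code:

/* Sketch: fetch the access ACL of a file through the handle-based gfapi
 * calls (the same API family ganesha uses). Placeholders: volume "testvol",
 * server "server1", path "/file.txt".
 * Build: cc -o aclget aclget.c -lgfapi -lacl
 */
#include <stdio.h>
#include <sys/acl.h>
#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");                      /* placeholder volume */
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);  /* placeholder server */
    if (glfs_init(fs) != 0) {
        perror("glfs_init");
        return 1;
    }

    struct stat st;
    struct glfs_object *obj = glfs_h_lookupat(fs, NULL, "/file.txt", &st, 0);
    if (obj) {
        acl_t acl = glfs_h_acl_get(fs, obj, ACL_TYPE_ACCESS);
        if (acl) {
            char *text = acl_to_text(acl, NULL);           /* render the POSIX ACL */
            printf("access ACL:\n%s\n", text ? text : "(none)");
            acl_free(text);
            acl_free(acl);
        }
        glfs_h_close(obj);
    }
    glfs_fini(fs);
    return 0;
}

The Samba path mentioned above is just the plain setxattr/getxattr route on the "system.posix_acl_*" names over a mounted volume.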




But _posix_xattr_get_set does not fill in the virtual glusterfs ACLs on lookup requests, so md-cache caches bad virtual glusterfs ACLs.

After I turn on the "cache-posix-acl" option to cache ACLs in md-cache, the nfs client gets many EIO errors.

https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/427305

There are two choices for caching virtual glusterfs ACLs in md-cache:
1. Cache them separately, like the posix ACLs (a new option, maybe "cache-glusterfs-acl", is added), and make sure _posix_xattr_get_set fills them in on lookup requests.

2. Do not cache them, only cache posix ACLs; if gfapi requests one, md-cache looks up the corresponding posix ACL in its cache, and if it exists, builds the virtual glusterfs ACL locally and returns it to gfapi; otherwise, it sends the request to glusterfsd.

Virtual glusterfs ACLs are another format of posix ACLs; they are larger than posix ACLs, and they always exist whether or not the real posix ACL exists.

So, I'd prefer #2.
Any comments are welcome.

thanks,
Kinglong Mee

___
Gluster-devel mailing list
Gluster-devel@gluster.org 
https://lists.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] [IMPORTANT] Announcing Gluster 3.12.14 and Gluster 4.1.4

2018-09-07 Thread Jiffin Tony Thottan

Hi,

The next set of minor updates, 3.12.14 and 4.1.4, is available earlier than expected.

These releases were made together mainly to address security vulnerabilities in Gluster [1].

The packages for Gluster 3.12.14 are available at [5,6,7] and the release notes at [8].

The packages for Gluster 4.1.4 are available at [9,10,11] and the release notes at [12].


Thanks,

Jiffin

[1] The list of security vulnerabilities:

   - https://nvd.nist.gov/vuln/detail/CVE-2018-10904
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10907
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10911
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10913
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10914
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10923
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10926
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10927
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10928


   - https://nvd.nist.gov/vuln/detail/CVE-2018-10929
   - https://nvd.nist.gov/vuln/detail/CVE-2018-10930

[5] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.14/
[6] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[7] https://build.opensuse.org/project/subprojects/home:glusterfs
[8] Release notes: https://gluster.readthedocs.io/en/latest/release-notes/3.12.14/


[9] https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.4/
[10] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-4.1
[11] https://build.opensuse.org/project/subprojects/home:glusterfs
[12] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/4.1.4/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Announcing Glusterfs release 3.12.12 (Long Term Maintenance)

2018-07-12 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.12 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.12/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 


[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.12/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Release 3.12.12: Scheduled for the 11th of July

2018-07-11 Thread Jiffin Tony Thottan

Hi Mabi,

I have checked with the AFR maintainer; all of the required changes are merged in 3.12.

Hence, moving forward with the 3.12.12 release.

Regards,

Jiffin


On Monday 09 July 2018 01:04 PM, mabi wrote:

Hi Jiffin,

Based on the issues I have been encountering on a nearly daily basis for the last 2-3 months (see the "New 3.12.7 possible split-brain on replica 3" thread in this ML), I would be really glad if the required fixes mentioned by Ravi could make it into the 3.12.12 release. Ravi mentioned the following:


afr: heal gfids when file is not present on all bricks
afr: don't update readables if inode refresh failed on all children
afr: fix bug-1363721.t failure
afr: add quorum checks in pre-op
afr: don't treat all cases all bricks being blamed as split-brain
afr: capture the correct errno in post-op quorum check
afr: add quorum checks in post-op

Right now I only see the first one pending in the review dashboard. It 
would be great if all of them could make it into this release.


Best regards,
Mabi



‐‐‐ Original Message ‐‐‐
On July 9, 2018 7:18 AM, Jiffin Tony Thottan  wrote:


Hi,

It's time to prepare the 3.12.12 release, which falls on the 10th of
each month, and hence would be 11-07-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.12? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.12

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 





___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Release 3.12.12: Scheduled for the 11th of July

2018-07-08 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.12 release, which falls on the 10th of
each month, and hence would be 11-07-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.12? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.12

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 



___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Announcing Glusterfs release 3.12.10 (Long Term Maintenance)

2018-06-15 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.10 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.10/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 


[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.10/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 3.12.10: Scheduled for the 13th of July

2018-06-12 Thread Jiffin Tony Thottan

typos


On Tuesday 12 June 2018 12:15 PM, Jiffin Tony Thottan wrote:

Hi,

It's time to prepare the 3.12.7 release, which falls on the 10th of


3.12.10


each month, and hence would be 08-03-2018 this time around.



13-06-2018


This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.10? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

Plus, I have cc'ed the owners of patches which could be candidates for 3.12 but failed regressions.

Please have a look into that.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.10

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Announcing Glusterfs release 3.12.8 (Long Term Maintenance)

2018-04-18 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.8 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.8/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.8/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Release 3.12.8: Scheduled for the 12th of April

2018-04-10 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.8 release, which falls on the 10th of
each month, and hence would be 12-04-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.8? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 post the 3.12 release and whether those fixes are already included in the 3.12 branch; the status on this is *green*, as all fixes ported to 3.10 have been ported to 3.12 as well.

@Mlind

IMO https://review.gluster.org/19659 looks like a minor feature to me. Can you please provide a justification for why it needs to be included in the 3.12 stable release?

And please rebase the change as well.

@Raghavendra

Smoke failed for https://review.gluster.org/#/c/19818/. Can you please check?


Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.8

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Release 3.12.7: Scheduled for the 8th of March

2018-03-05 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.7 release, which falls on the 10th of
each month, and hence would be 08-03-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.7? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 post the 3.12 release and whether those fixes are already included in the 3.12 branch; the status on this is *green*, as all fixes ported to 3.10 have been ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.7

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Announcing Glusterfs release 3.12.6 (Long Term Maintenance)

2018-02-19 Thread Jiffin Tony Thottan



On Tuesday 20 February 2018 09:37 AM, Jiffin Tony Thottan wrote:


The Gluster community is pleased to announce the release of Gluster 
3.12.6 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, as reported in the release notes:

1.) Expanding a gluster volume that is sharded may cause file corruption.

    Sharded volumes are typically used for VM images; if such volumes are expanded or possibly contracted (i.e. add/remove bricks and rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with this release. As further testing is still in progress, the issue is retained as a major issue.

    Status of this bug can be tracked here: #1465123



The above issue was actually fixed in 3.12.6. Sorry for mentioning it in the announcement mail.


--

Jiffin



Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.6/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.6/




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Announcing Glusterfs release 3.12.6 (Long Term Maintenance)

2018-02-19 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.6 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, as reported in the release notes:

1.) Expanding a gluster volume that is sharded may cause file corruption.

    Sharded volumes are typically used for VM images; if such volumes are expanded or possibly contracted (i.e. add/remove bricks and rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with this release. As further testing is still in progress, the issue is retained as a major issue.

    Status of this bug can be tracked here: #1465123

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.6/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.6/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] IMP: upgrade issue

2018-02-12 Thread Jiffin Tony Thottan
Since the change has been there from 3.10 onwards, only an upgrade from an EOLed version to a stable one will break, right?

I haven't noticed anyone in the community complaining about the issue till now.

--

Jiffin


On Tuesday 13 February 2018 08:21 AM, Hari Gowtham wrote:

I'm working on it.

On Tue, Feb 13, 2018 at 8:11 AM, Atin Mukherjee  wrote:

FYI.. We need to backport https://review.gluster.org/#/c/19552 (yet to be
merged in mainline) in all the active release branches to avoid users to get
into upgrade failures. The bug and the commit has the further details.





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Test case failure

2018-02-07 Thread Jiffin Tony Thottan

Hi Vaibhav,


On Wednesday 07 February 2018 02:28 PM, Vaibhav Vaingankar wrote:


Hi,

I am building GlusterFS from source on my machine. The build is successful, but one test case is failing while testing: "/tests/basic/mount-nfs-auth.t" fails repeatedly.

following are the details of environment setup I have done:

apt-get install make automake autoconf libtool flex bison
pkg-config libssl-dev libxml2-dev python-dev libaio-dev
libibverbs-dev librdmacm-dev libreadline-dev liblvm2-dev
libglib2.0-dev liburcu-dev libcmocka-dev libsqlite3-dev
libacl1-dev wget tar dbench git xfsprogs attr nfs-common
yajl-tools sqlite3 libxml2-utils thin-provisioning-tools bc 



Also, I have configured with "--enable-gnfs".

So is this known failure or is it a environment specific issue?

Please do let me know if I am missing something here.


Can you please provide the output of the test run? You will also find logs in the archived log "mount-nfs-auth.tar" under /var/log/glusterfs (I hope you are running a version above 3.8); please share that as well.


--

Jiffin



Thanks and regards,
Vaibhav Vaingankar


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Release 3.12.6: Scheduled for the 12th of February

2018-02-01 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.6 release, which falls on the 10th of
each month, and hence would be 12-02-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.6? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 post the 3.12 release and whether those fixes are already included in the 3.12 branch; the status on this is *green*, as all fixes ported to 3.10 have been ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.6

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 3.12.5: Scheduled for the 12th of January

2018-01-31 Thread Jiffin Tony Thottan
Glusterfs 3.12.5 was released on Jan 12th, 2018. Apologies for not sending the announcement mail on time.


Release notes for the release can be found at [4].

We still carry the following major issue, as reported in the release notes:

1.) Expanding a gluster volume that is sharded may cause file corruption.

    Sharded volumes are typically used for VM images; if such volumes are expanded or possibly contracted (i.e. add/remove bricks and rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with this release. As further testing is still in progress, the issue is retained as a major issue.

    Status of this bug can be tracked here: #1465123

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.5/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.5/



On Thursday 11 January 2018 11:32 AM, Jiffin Tony Thottan wrote:


Hi,

It's time to prepare the 3.12.5 release, which falls on the 10th of
each month, and hence would be 12-01-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.5? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this 
is *green*

as all fixes ported to 3.10, are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.5

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 4.0: Making it happen!

2018-01-18 Thread Jiffin Tony Thottan



On Wednesday 17 January 2018 04:55 PM, Jiffin Tony Thottan wrote:



On Tuesday 16 January 2018 08:57 PM, Shyam Ranganathan wrote:

On 01/10/2018 01:14 PM, Shyam Ranganathan wrote:

Hi,

4.0 branching date is slated on the 16th of Jan 2018 and release is
slated for the end of Feb (28th), 2018.

This is today! So read on...

Short update: I am going to wait a couple more days before branching, to
settle release content and exceptions. Branching is hence on Jan, 18th
(Thursday).


We are at the phase when we need to ensure our release scope is correct
and *must* release features are landing. Towards this we need the
following information for all contributors.

1) Features that are making it to the release by branching date

- There are currently 35 open github issues marked as 4.0 milestone [1]
- Need contributors to look at this list and let us know which will 
meet

the branching date

Other than the protocol changes (from Amar), I did not receive any
requests for features that are making it to the release. I have compiled
a list of features based on patches in gerrit that are open, to check
what features are viable to make it to 4.0. This can be found here [3].

NOTE: All features, other than the ones in [3] are being moved out of
the 4.0 milestone.


- Need contributors to let us know which may slip and hence needs a
backport exception to 4.0 branch (post branching).
- Need milestone corrections on features that are not making it to the
4.0 release

I need the following contributors to respond and state if the feature in
[3] should still be tracked against 4.0 and how much time is possibly
needed to make it happen.

- Poornima, Amar, Jiffin, Du, Susant, Sanoj, Vijay


Hi,

The two gfapi-related changes [1,2] have an ack from Poornima, and Niels mentioned that he will do the review by EOD.


[1] https://review.gluster.org/#/c/18784/
[2] https://review.gluster.org/#/c/18785/




Niels has a few comments on the above patches. I need a one-week extension (to 26th Jan 2018).

--
Jiffin


Regards,
Jiffin




NOTE: Slips are accepted if they fall 1-1.5 weeks post branching, not
post that, and called out before branching!

2) Reviews needing priority

- There could be features that are up for review, and considering we
have about 6-7 days before branching, we need a list of these commits,
that you want review attention on.
- This will be added to this [2] dashboard, easing contributor 
access to

top priority reviews before branching

As of now, I am adding a few from the list in [3] for further review
attention as I see things evolving, more will be added as the point
above is answered by the respective contributors.


3) Review help!

- This link [2] contains reviews that need attention, as they are
targeted for 4.0. Request maintainers and contributors to pay close
attention to this list on a daily basis and help out with reviews.

Thanks,
Shyam

[1] github issues marked for 4.0:
https://github.com/gluster/glusterfs/milestone/3

[2] Review focus for features planned to land in 4.0:
https://review.gluster.org/#/q/owner:srangana%2540redhat.com+is:starred
[3] Release 4.0 features with pending code reviews: 
http://bit.ly/2rbjcl8



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 4.0: Making it happen!

2018-01-17 Thread Jiffin Tony Thottan



On Tuesday 16 January 2018 08:57 PM, Shyam Ranganathan wrote:

On 01/10/2018 01:14 PM, Shyam Ranganathan wrote:

Hi,

4.0 branching date is slated on the 16th of Jan 2018 and release is
slated for the end of Feb (28th), 2018.

This is today! So read on...

Short update: I am going to wait a couple more days before branching, to
settle release content and exceptions. Branching is hence on Jan, 18th
(Thursday).


We are at the phase when we need to ensure our release scope is correct
and *must* release features are landing. Towards this we need the
following information for all contributors.

1) Features that are making it to the release by branching date

- There are currently 35 open github issues marked as 4.0 milestone [1]
- Need contributors to look at this list and let us know which will meet
the branching date

Other than the protocol changes (from Amar), I did not receive any
requests for features that are making it to the release. I have compiled
a list of features based on patches in gerrit that are open, to check
what features are viable to make it to 4.0. This can be found here [3].

NOTE: All features, other than the ones in [3] are being moved out of
the 4.0 milestone.


- Need contributors to let us know which may slip and hence needs a
backport exception to 4.0 branch (post branching).
- Need milestone corrections on features that are not making it to the
4.0 release

I need the following contributors to respond and state if the feature in
[3] should still be tracked against 4.0 and how much time is possibly
needed to make it happen.

- Poornima, Amar, Jiffin, Du, Susant, Sanoj, Vijay


Hi,

The two gfapi-related changes [1,2] have an ack from Poornima, and Niels mentioned that he will do the review by EOD.


[1] https://review.gluster.org/#/c/18784/
[2] https://review.gluster.org/#/c/18785/

Regards,
Jiffin




NOTE: Slips are accepted if they fall 1-1.5 weeks post branching, not
post that, and called out before branching!

2) Reviews needing priority

- There could be features that are up for review, and considering we
have about 6-7 days before branching, we need a list of these commits,
that you want review attention on.
- This will be added to this [2] dashboard, easing contributor access to
top priority reviews before branching

As of now, I am adding a few from the list in [3] for further review
attention as I see things evolving, more will be added as the point
above is answered by the respective contributors.


3) Review help!

- This link [2] contains reviews that need attention, as they are
targeted for 4.0. Request maintainers and contributors to pay close
attention to this list on a daily basis and help out with reviews.

Thanks,
Shyam

[1] github issues marked for 4.0:
https://github.com/gluster/glusterfs/milestone/3

[2] Review focus for features planned to land in 4.0:
https://review.gluster.org/#/q/owner:srangana%2540redhat.com+is:starred

[3] Release 4.0 features with pending code reviews: http://bit.ly/2rbjcl8


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Release 3.12.5: Scheduled for the 12th of January

2018-01-10 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.5 release, which falls on the 10th of
each month, and hence would be 12-01-2018 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.5? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 post the 3.12 release and whether those fixes are already included in the 3.12 branch; the status on this is *green*, as all fixes ported to 3.10 have been ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.5

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Announcing Glusterfs release 3.13.1 (Short Term Maintenance)

2017-12-21 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.13.1 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, as reported in the release notes:

1.) Expanding a gluster volume that is sharded may cause file corruption.

    Sharded volumes are typically used for VM images; if such volumes are expanded or possibly contracted (i.e. add/remove bricks and rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1515434) has a fix with this release. As further testing is still in progress, the issue is retained as a major issue.

    Status of this bug can be tracked here: #1515434

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.13/3.13.1/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.13
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.13.1/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Announcing Glusterfs release 3.12.4 (Long Term Maintenance)

2017-12-18 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.4 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, as reported in the release notes:

1.) Expanding a gluster volume that is sharded may cause file corruption.

    Sharded volumes are typically used for VM images; if such volumes are expanded or possibly contracted (i.e. add/remove bricks and rebalance), there are reports of VM images getting corrupted.

    The last known cause for corruption (Bug #1465123) has a fix with this release. As further testing is still in progress, the issue is retained as a major issue.

    Status of this bug can be tracked here: #1465123

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.4/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.4/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Release 3.12.4 : Scheduled for the 12th of December

2017-12-11 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.4 release, which falls on the 10th of
each month, and hence would be 12-12-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.4? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 post the 3.12 release and whether those fixes are already included in the 3.12 branch; the status on this is *green*, as all fixes ported to 3.10 have been ported to 3.12 as well.


Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.4

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 3.12.3 : Scheduled for the 10th of November

2017-11-09 Thread Jiffin Tony Thottan

Hi,

I am planning to do the 3.12.3 release today at 10:00 pm IST (4:30 pm GMT).

The following bugs have been removed from the tracker list:

Bug 1501235 - [SNAPSHOT] Unable to mount a snapshot on client -- based on the discussion on gerrit, it can be targeted for 3.13 instead of the 3.12.x release.

Bug 1507006 - Read-only option is ignored and volume mounted in r/w mode -- assigned to no one, no progress in the master bug as well; will be tracked as part of 3.12.4.

Regards,

Jiffin


On 06/11/17 11:52, Jiffin Tony Thottan wrote:


Hi,

It's time to prepare the 3.12.3 release, which falls on the 10th of
each month, and hence would be 10-11-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.3? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have made checks on what went into 3.10 post 3.12 release and if
these fixes are already included in 3.12 branch, then status on this 
is *green*

as all fixes ported to 3.10, are ported to 3.12 as well.

There are two patches under this category

a.) https://review.gluster.org/#/c/18422/ -- Kaleb's patch has a -1 from me and Niels

b.) https://review.gluster.org/#/c/18459/ -- @Gunther, can you please trigger regression for this patch (give Verified +1 from gerrit)?



Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.3

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Release 3.12.3 : Scheduled for the 10th of November

2017-11-05 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.3 release, which falls on the 10th of
each month, and hence would be 10-11-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.3? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 post the 3.12 release and whether those fixes are already included in the 3.12 branch; the status on this is *green*, as all fixes ported to 3.10 have been ported to 3.12 as well.

There are two patches under this category

a.) https://review.gluster.org/#/c/18422/ -- Kaleb's patch has a -1 from me and Niels

b.) https://review.gluster.org/#/c/18459/ -- @Gunther, can you please trigger regression for this patch (give Verified +1 from gerrit)?



Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.3

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)

2017-10-13 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.2 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issues, reported in the release notes:


1.) Expanding a gluster volume that is sharded may cause file corruption

Sharded volumes are typically used for VM images; if such volumes are
expanded or possibly contracted (i.e. add/remove bricks and rebalance),
there are reports of VM images getting corrupted.

The last known cause for corruption (Bug #1465123) has a fix with this
release. As further testing is still in progress, the issue is retained
as a major issue.

Status of this bug can be tracked here, #1465123


2.) Gluster volume restarts fail if the sub-directory export feature is
in use. Status of this issue can be tracked here, #1501315


3.) Mounting a gluster snapshot will fail when attempting a FUSE-based
mount of the snapshot. So for current users, it is recommended to only
access snapshots via the ".snaps" directory on a mounted gluster volume.
Status of this issue can be tracked here, #1501378


Thanks,
 Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.2/ 


[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs

[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.2/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] [Gluster-users] Release 3.12.2 : Scheduled for the 10th of October

2017-10-12 Thread Jiffin Tony Thottan



On 12/10/17 16:05, Amar Tumballi wrote:



On Thu, Oct 12, 2017 at 3:43 PM, Mohammed Rafi K C 
<rkavu...@redhat.com <mailto:rkavu...@redhat.com>> wrote:


Hi Jiffin/Shyam,


Snapshot volume has been broken in 3.12. We just got the bug; I have
sent a patch [1]. Let me know your thoughts.



Similar with subdir mount's authentication. [2]

[2] : https://review.gluster.org/#/c/18489/

[1] : https://review.gluster.org/18506
<https://review.gluster.org/18506>


Hi,

Both issues look like regressions. Patch [2] got merged on master, but
[1] is still pending.

@Rafi: Can you get the reviews done ASAP and merge it on master?
I hope both can make it into 3.12 before the deadline. If not, please
let me know.


Thanks,
Jiffin



On 10/12/2017 12:32 PM, Jiffin Tony Thottan wrote:
> Hi,
>
> I am planning to do 3.12.2 release today 11:00 pm IST (5:30 pm GMT).
>
> Following bugs is removed from tracker list
>
> Bug 1493422 - AFR : [RFE] Improvements needed in "gluster volume
heal
> info" commands -- feature request will be target for 3.13
>
> Bug 1497989 - Gluster 3.12.1 Packages require manual systemctl
daemon
> reload after install -- "-1" from Kaleb, no progress from Oct 4th,
>
> will be tracked as part of 3.12.3
>
    > Regards,
>
> Jiffin
>
>
>
>
> On 06/10/17 12:36, Jiffin Tony Thottan wrote:
>> Hi,
>>
>> It's time to prepare the 3.12.2 release, which falls on the 10th of
>> each month, and hence would be 10-10-2017 this time around.
>>
>> This mail is to call out the following,
>>
>> 1) Are there any pending *blocker* bugs that need to be tracked for
>> 3.12.2? If so mark them against the provided tracker [1] as
blockers
>> for the release, or at the very least post them as a response
to this
>> mail
>>
>> 2) Pending reviews in the 3.12 dashboard will be part of the
release,
>> *iff* they pass regressions and have the review votes, so use the
>> dashboard [2] to check on the status of your patches to 3.12
and get
>> these going
>>
>> 3) I have made checks on what went into 3.10 post 3.12 release
and if
>> these fixes are already included in 3.12 branch, then status on
this
>> is *green*
>> as all fixes ported to 3.10, are ported to 3.12 as well.
>>
>> Thanks,
>> Jiffin
>>
>> [1] Release bug tracker:
>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.2
<https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.2>
>>
>> [2] 3.10 review dashboard:
>>

https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard

<https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard>
>>
>>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org <mailto:gluster-us...@gluster.org>
> http://lists.gluster.org/mailman/listinfo/gluster-users
<http://lists.gluster.org/mailman/listinfo/gluster-users>

___
maintainers mailing list
maintain...@gluster.org <mailto:maintain...@gluster.org>
http://lists.gluster.org/mailman/listinfo/maintainers
<http://lists.gluster.org/mailman/listinfo/maintainers>




--
Amar Tumballi (amarts)


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Release 3.12.2 : Scheduled for the 10th of October

2017-10-12 Thread Jiffin Tony Thottan

Hi,

I am planning to do the 3.12.2 release today at 11:00 pm IST (5:30 pm GMT).

The following bugs are removed from the tracker list:

Bug 1493422 - AFR : [RFE] Improvements needed in "gluster volume heal
info" commands -- the feature request will be targeted for 3.13


Bug 1497989 - Gluster 3.12.1 Packages require manual systemctl daemon 
reload after install -- "-1" from Kaleb, no progress from Oct 4th,


will be tracked as part of 3.12.3

Regards,

Jiffin




On 06/10/17 12:36, Jiffin Tony Thottan wrote:

Hi,

It's time to prepare the 3.12.2 release, which falls on the 10th of
each month, and hence would be 10-10-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.2? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 after the 3.12 release and whether
those fixes are already included in the 3.12 branch; the status on this
is *green*, as all fixes ported to 3.10 are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.2

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard 





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Release 3.12.2 : Scheduled for the 10th of October

2017-10-06 Thread Jiffin Tony Thottan

Hi,

It's time to prepare the 3.12.2 release, which falls on the 10th of
each month, and hence would be 10-10-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.12.2? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.12 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.12 and get
these going

3) I have checked what went into 3.10 after the 3.12 release and whether
those fixes are already included in the 3.12 branch; the status on this
is *green*, as all fixes ported to 3.10 are ported to 3.12 as well.

Thanks,
Jiffin

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.2

[2] 3.12 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Announcing Glusterfs release 3.12.1 (Long Term Maintenance)

2017-09-14 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.1 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry a major issue that is reported in the release notes as
follows:

- Expanding a gluster volume that is sharded may cause file corruption

Sharded volumes are typically used for VM images; if such volumes are
expanded or possibly contracted (i.e. add/remove bricks and rebalance),
there are reports of VM images getting corrupted.

The last known cause for corruption (Bug #1465123) has a fix with this
release. As further testing is still in progress, the issue is retained
as a major issue.

Status of this bug can be tracked here, #1465123


Thanks,
Gluster community

[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.1/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs

[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.1/


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Release 3.12: Scope and calendar!

2017-06-05 Thread Jiffin Tony Thottan



On 01/06/17 22:47, Shyam wrote:

Hi,

Here are some top reminders for the 3.12 release:

1) When 3.12 is released 3.8 will be EOL'd, hence users are encouraged 
to prepare for the same as per the calendar posted here.


2) 3.12 is a long term maintenance (LTM) release, and potentially the 
last in the 3.x line of Gluster!


3) From this release onward, the feature freeze date is moved ~45 days 
in advance, before the release. Hence, for this one release you will 
have lesser time to get your features into the release.


Release calendar:

- Feature freeze, or branching date: July 17th, 2017
   - All feature post this date need exceptions granted to make it 
into the 3.12 release


- Release date: August 30th, 2017

Release owners:

- Shyam
-  Any volunteers?



I am interested
--
Jiffin


Features and major changes process in a nutshell:
1) Open a github issue

2) Refer the issue # in the commit messages of all changes against the 
feature (specs, code, tests, docs, release notes) (refer to the issue 
as "updates gluster/glusterfs#N" where N is the issue)


3) We will ease out release-notes updates from this release onward. 
Still thinking how to get that done, but the intention is that a 
contributor can update release notes before/on/after completion of the 
feature and not worry about branching dates etc. IOW, you can control 
when you are done, than the release dates controlling the same for you.


Thanks,
Shyam
___
Gluster-users mailing list
gluster-us...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [Brick-Multiplexing] Failure to create .trashcan in all bricks per node

2017-05-02 Thread Jiffin Tony Thottan
The following bugs were reported against the trash translator in a
brick-multiplexing-enabled environment:


1447389 
1447390 
1447392 

In all the above cases the trash directory, namely .trashcan, was being
created only on one brick per node.
The trash directory is usually created within notify() inside the trash
translator on receiving the CHILD_UP event from the posix translator
[trash.c:2367]. This CHILD_UP event is sent by the posix translator on
receiving PARENT_UP.


When brick multiplexing is enabled, it seems that notify() is invoked
only on the first brick, which follows the normal path, but not on the
other bricks. On further debugging, we could see that
glusterfs_graph_attach() does the graph preparation and initialization
but, unlike the normal startup path, lacks the xlator_notify() and
parent-up calls.
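A quick way to observe the symptom (a rough sketch only; the volume name,
brick paths and option value below are assumptions for illustration, not
taken from the bug reports):

# enable brick multiplexing and trash on a test volume
gluster volume set all cluster.brick-multiplex on
gluster volume set testvol features.trash on
gluster volume start testvol

# on each node, check the backend brick directories
ls -ld /bricks/testvol-brick*/.trashcan
# only the first brick attached to the multiplexed brick process
# ends up with a .trashcan directory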


Can you please shed some light on how we can move forward with these bugs?

Also thanks Atin and Nithya for help in debugging above issues.

Regards,
Jiffin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Minutes of Gluster Community Bug Triage meeting

2017-03-07 Thread Jiffin Tony Thottan

Hi,

Thanks for everyone's participation


Meeting summary
---
* agenda:https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting
  (jiffin, 12:00:30)
* Roll call  (jiffin, 12:00:39)

* Next weeks meeting host  (jiffin, 12:06:15)
  * ACTION: hgowtham will host on March  7th  (jiffin, 12:07:21)
  * ACTION: ndevos need to decide on how to provide/use debug builds
(jiffin, 12:08:15)
  * ACTION: jiffin  needs to send the changes to check-bugs.py (jiffin,
12:08:22)

* Group Triage  (jiffin, 12:08:28)
  * you can find the bugs to triage here in
http://bit.ly/gluster-bugs-to-triage  (jiffin, 12:08:34)
  *
https://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/
(jiffin, 12:08:40)

* Open Floor  (jiffin, 12:19:27)

Meeting ended at 12:20:19 UTC.




Action Items

* hgowtham will host on March  7th
* ndevos need to decide on how to provide/use debug builds
* jiffin  needs to send the changes to check-bugs.py




Action Items, by person
---
* hgowtham
  * hgowtham will host on March  7th
* jiffin
  * jiffin  needs to send the changes to check-bugs.py
* **UNASSIGNED**
  * ndevos need to decide on how to provide/use debug builds




People Present (lines said)
---
* jiffin (25)
* hgowtham (5)
* zodbot (3)
* skoduri (2)

See everyone at same time on March 7th 2017

Regards,

Jiffin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 3.10: Pending reviews

2017-01-17 Thread Jiffin Tony Thottan

Hi,


On 17/01/17 00:38, Shyam wrote:

Hi,

Release 3.10 branching is slated for tomorrow 17th Jan. This means 
features that slip the branching date need to be a part of the next 
release (if they are ready by then), i.e 3.11.


Of course there are going to be features that are on the edge of 
getting done, so consider this mail a push for completion of reviews 
of such features, so that they can be a part of the 3.10 release.


If there are some critical features that the owners think can make it 
(as backports say) within a week of branching, shout out now, so that 
we can compile and asses an exception list for the same.




The review comments in patch http://review.gluster.org/#/c/12256/ can be
addressed within a day or two.
I would really love to make this change part of 3.10, since a lot of
users and developers have requested it.

--
Jiffin


Thanks,
Shyam

Review dash: http://bit.ly/2iXor01

On 01/10/2017 05:01 AM, Shyam wrote:

Hi,

Here [1] is a quick dashboard that details the list of pending reviews
for features planned for the 3.10 release (see scope here [2]).

Request feature owners to check and see if all their reviews are
appearing here, and let me know if not.

Further, the larger request is to the community, to focus on these
reviews first, so that we can meet the branching date with the required
set of features as a part of the release. We have 7 days before we 
branch.


Thanks,
Shyam

[1] Short URL, may change if the list is incomplete: 
http://bit.ly/2iXor01


[2] List of features, see "Release 3.10" lane:
https://github.com/gluster/glusterfs/projects/1
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
maintainers mailing list
maintain...@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Release 3.10 feature proposal : Setting SELinux Context for entries inside Gluster Volumes

2016-12-09 Thread Jiffin Tony Thottan

Hi all,

This was a proposed feature for 3.9. So more details can be found in the 
below mail thread :


https://www.gluster.org/pipermail/gluster-users/2016-March/025919.html

Feature BZ : https://bugzilla.redhat.com/show_bug.cgi?id=1318100

Design Doc : http://review.gluster.org/#/c/13789/

Regards,

Jiffin & Niels

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] NFS-Ganesha last week update

2016-10-26 Thread Jiffin Tony Thottan

Brief update on progress we made on NFS-Ganesha

Last week NFS-Ganesha with FSAL_GLUSTER was tested at the Bake-a-thon,
Boston 2016, and issues were found in ACL, the Solaris client and pNFS.
Four different test environments were used: default configuration,
Kerberos configured, ACLs enabled and pNFS configured. All of them were
installed with nfs-ganesha 2.4.0 and glusterfs 3.9rc0. Patches were sent
to fix all the reported issues, and we are planning to tag nfs-ganesha
2.4.1 by the end of this week.


Patches worked on

I.) Gluster side

http://review.gluster.org/#/c/15618/ -- issue related to open call in 
pynfs test


http://review.gluster.org/#/c/15680/ -- glfs_free () introduction

http://review.gluster.org/#/c/15640/ -- redesign of gfapi related to upcall

http://review.gluster.org/#/c/15617/ -- introduce dbus command for 
UpdateExports in refresh config


II.) NFS-Ganesha side

https://review.gerrithub.io/#/c/299035/ -- Invalidate cache entry on 
pNFS/DS Writes


https://review.gerrithub.io/#/c/298874/ -- fsal_open2: only fail 
directories on createmode != FSAL_NO_CREATE


https://review.gerrithub.io/#/c/298859/ -- Fix READDIR cookie traversal 
during deletes


https://review.gerrithub.io/#/c/298883/ -- stack DS operations to MDCACHE

https://review.gerrithub.io/#/c/298963/ -- related to the layoutcommit call in pNFS


https://review.gerrithub.io/#/c/299022/ -- related to NFSv3 subdir mount

https://review.gerrithub.io/#/c/299006/ -- related to mask used in ACL calls

--

Jiffin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Being Gluster NFS off

2016-10-10 Thread Jiffin Tony Thottan

Hi all,

I am trying to list out the glusterd issues with the 3.8 feature
"Gluster NFS being off by default".

As per the current implementation:

1.) On a freshly installed setup with 3.8/3.9, if you create a volume,
then Gluster NFS won't come up by default, and in the vol info we can
see "nfs.disable on".

2.) For existing volumes (created in 3.7 or below), there are two
possibilities:

a.) If there are only volumes with the default configuration, Gluster
NFS won't come up, and in the vol info "nfs.disable on" won't be
displayed. In the volume status command the pid of Gluster NFS will be
N/A.

b.) If there is a volume with explicit "nfs.disable off" set, then after
upgrade Gluster NFS will come up and export all the existing volumes,
and the vol info will have a similar value as in a.)
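As a minimal illustration of what an admin can check and set after an
upgrade (a sketch only; "myvol" is a placeholder volume name):

gluster volume info myvol | grep nfs.disable
# to keep Gluster NFS exporting the volume, make the intent explicit:
gluster volume set myvol nfs.disable off
gluster volume status myvol nfs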


Currently three bugs [1,2,3] have been opened to address these issues.

As per the 3.8 release note, Gluster NFS should be up for all existing
volumes with the default configuration. We are planning to change this
behavior from 3.9 onwards, and Atin sent out a patch [4]. With his
patch, after upgrade all the existing volumes with the default
configuration will have the nfs.disable value set to on explicitly in
the vol info. So Gluster NFS won't export those volumes at all, and
gluster v status does not display the status of the Gluster NFS server.

This patch also solves bugs 2 and 3.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1383006 - gluster nfs 
not coming for existing volumes on 3.8


[2] https://bugzilla.redhat.com/show_bug.cgi?id=1383005 - getting n/a 
entry in volume status command


[3] https://bugzilla.redhat.com/show_bug.cgi?id=1379223 - "nfs.disable:
on" is not showing in vol info by default for the 3.7.x volumes after
updating to 3.9.0


[4] http://review.gluster.org/#/c/15568/

Regards,
Jiffin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Trash test aborted netbsd regression

2016-09-09 Thread Jiffin Tony Thottan

Hi Atin,

From the console log:

01:22:55 rm: /mnt/glusterfs/0/abc/internal_op: Operation not permitted
01:22:55 rm: /mnt/glusterfs/0/abc: Directory not empty
01:22:55 mv: rename /mnt/glusterfs/0/abc to /mnt/glusterfs/0/trash: 
Operation not permitted

05:09:06 Build timed out (after 300 minutes). Marking the build as aborted.

rename is the last test case in trash.t and it got executed successfully.

snippet from trash.t

TEST [ -e $M0/abc ]
mv $M0/abc $M0/trash
TEST [ -e $M0/abc ]

cleanup

So it may have hung either in TEST [ -e $M0/abc ] or in cleanup.

IMHO the possibility of TEST [ -e $M0/abc ] timing out is much less when
compared to cleanup.


Regards,

Jiffin


On 09/09/16 18:34, Atin Mukherjee wrote:

http://build.gluster.org/job/netbsd7-regression/634/

(I had observed same failure earlier as well but didnt have link)

--
--Atin


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting 31/Aug/2016 - Minutes

2016-09-07 Thread Jiffin Tony Thottan

Hi all,

Thanks for everyone's participation in making it a success.

The minutes and logs for todays meeting are available from the links below,
 Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-31/weekly_community_meeting_31aug2015.2016-08-31-12.01.html
 Minutes 
(text):https://meetbot.fedoraproject.org/gluster-meeting/2016-08-31/weekly_community_meeting_31aug2015.2016-08-31-12.01.txt
 Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-31/weekly_community_meeting_31aug2015.2016-08-31-12.01.log.html


kshlm will host next week's meeting. See you all again at same time on 
7th September 2016 at #gluster-meeting.


Regards,

Jiffin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-30 Thread Jiffin Tony Thottan
y 31st August.

When introduced at a later date, the RPM won't have version 3.9 and
also wouldn't be dependent on 3.9 and will have any version (>=3.7) of
python-gluster package as a requirement.

[1]: http://review.gluster.org/#/c/15204/


Thanks,
Niels



[1]: https://pypi.python.org/pypi/gfapi
[2]:
http://nongnu.13855.n7.nabble.com/Packaging-libgfapi-python-td214308.html


11) Management REST APIs
Feature owners: Aravinda VK

12) Events APIs
Feature owners: Aravinda VK

13) CLI to get state representation of a cluster from the local
glusterd
pov
Feature owners: Samikshan Bairagya

14) Posix-locks Reclaim support
Feature owners: Soumya Koduri

Sorry this feature will not make it 3.9. Hopefully will get it in the
next release.


15) Deprecate striped volumes
Feature owners: Vijay Bellur, Niels de Vos

16) Improvements in Gluster NFS-Ganesha integration
Feature owners: Jiffin Tony Thottan, Soumya Koduri

This one is already merged.

Thanks,
Soumya


*The following need to be added to the roadmap:*

Features that made it to master already but were not palnned:
1) Multi threaded self-heal in EC
Feature owner: Pranith (Did this because serkan asked for it. He has
9PB
volume, self-healing takes a long time :-/)

2) Lock revocation (Facebook patch)
Feature owner: Richard Wareing

Features that look like will make it to 3.9.0:
1) Hardware extension support for EC
Feature owner: Xavi

2) Reset brick support for replica volumes:
Feature owner: Anuradha

3) Md-cache perf improvements in smb:
Feature owner: Poornima

--
Pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] CFP Gluster Developer Summit

2016-08-19 Thread Jiffin Tony Thottan



On 17/08/16 19:26, Kaleb S. KEITHLEY wrote:

I propose to present on one or more of the following topics:

* NFS-Ganesha Architecture, Roadmap, and Status


Sorry for the late notice. I am willing to be a co-presenter for the 
above topic.

--
Jiffin


* Architecture of the High Availability Solution for Ganesha and Samba
  - detailed walk through and demo of current implementation
  - difference between the current and storhaug implementations
* High Level Overview of autoconf/automake/libtool configuration
  (I gave a presentation in BLR in 2015, so this is perhaps less
interesting?)
* Packaging Howto — RPMs and .debs
  (maybe a breakout session or a BOF. Would like to (re)enlist volunteers
to help build packages.)




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Improvements in Glusterd NFS-Ganesha integration for GlusterFS3.9

2016-08-08 Thread Jiffin Tony Thottan

Hi all,

Currently all the configuration related to NFS-Ganesha is stored
individually on each node belonging to the ganesha cluster at
/etc/ganesha. The following are the files present in it:
- ganesha.conf - configuration file for the ganesha process
- ganesha-ha.conf - configuration file for the high availability cluster
- files under the export directory - export configuration files for
  gluster volumes
- .export_added - to track the number of volumes that got exported

glusterd does not have specific control over these files; in other
words, there is no specific way to synchronize them. This can result in
different values for the above files on different nodes. For example,
consider the following node-down scenario:
 * Two volumes volA and volB got exported one after another while one
   node of the ganesha cluster (let's call it tmp) was down.
 * Now in the current cluster volA will be exported with Id = 2, and
   volB with Id = 3.
 * When tmp comes up, there is a chance that it ends up with volA having
   Id 3 and volB having Id 2.
 * This gives undesired behavior during failover and failback with the
   node tmp.

More such scenarios are described in the bug [1]. A proposed solution to
overcome such situations is to store the above-mentioned configuration
files in shared storage. Then they can be shared by every node in the
ganesha cluster and all such mess can be avoided. A more detailed
description can be found in the feature page [2].

So here, as a prerequisite, the user needs to create a folder
nfs-ganesha in the shared storage and save ganesha.conf and
ganesha-ha.conf in it (see the sketch below). When the CLI "gluster
nfs-ganesha enable" is executed, glusterd creates a symlink in
/etc/ganesha for ganesha.conf, then starts the ganesha process and sets
up HA. During disable it tears down the HA, stops the ganesha process
and then removes the entry from /etc/ganesha.

For existing users, scripts will be provided for a smooth migration.
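A minimal sketch of the proposed prerequisite and the enable/disable
flow (the shared-storage mount point used below is an assumption, not
spelled out in this mail):

mkdir /var/run/gluster/shared_storage/nfs-ganesha
cp /etc/ganesha/ganesha.conf /etc/ganesha/ganesha-ha.conf \
   /var/run/gluster/shared_storage/nfs-ganesha/
# glusterd then symlinks /etc/ganesha/ganesha.conf to the shared copy,
# starts the ganesha process and sets up HA:
gluster nfs-ganesha enable
# teardown of HA, stopping ganesha and removing the /etc/ganesha entry:
gluster nfs-ganesha disable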

Please share your thoughts on the same

[1] Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=1355956
[2] Feature page : http://review.gluster.org/#/c/15105/
[3] Patches posted upstream for share storage migration
* http://review.gluster.org/14906
* http://review.gluster.org/14908
* http://review.gluster.org/14909
[4] Patches posted/merged upstream as part of clean up
* http://review.gluster.org/15055
* http://review.gluster.org/14871
* http://review.gluster.org/14812
* http://review.gluster.org/14907

Regards,
Jiffin


Regards,
Jiffin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Mark all the xlator fops 'static '

2016-07-12 Thread Jiffin Tony Thottan



On 12/07/16 16:00, Kaleb KEITHLEY wrote:

On 07/12/2016 03:00 AM, Jiffin Tony Thottan wrote:


On 31/07/15 19:29, Kaleb S. KEITHLEY wrote:

On 07/30/2015 05:16 PM, Niels de Vos wrote:

On Thu, Jul 30, 2015 at 08:27:15PM +0530, Soumya Koduri wrote:

Hi,

With the applications using and loading different libraries, the
function
symbols with the same name may get resolved incorrectly depending on
the
order in which those libraries get dynamically loaded.

Recently we have seen an issue with 'snapview-client' xlator lookup
fop -
'svc_lookup' which matched with one of the routines provided by
libntirpc,
used by NFS-Ganesha. More details are in [1], [2].

Indeed, the problem seems to be caused in an execution flow like this:

1. nfs-ganesha main binary starts
2. the dynamic linker loads libntirpc (and others)
3. the dynamic linker retrieves symbols from the libntirpc (and others)
4. 'svc_lookup' is amoung the symbols added to a lookup table (or such)
5. during execution, ganesha loads plugins with dlopen()
6. the fsalgluster.so plugin is linked against libgfapi and gfapi gets
 loaded
7. libgfapi retrieves the .vol file and loads the xlators, including
 snapview-client
8. snapview-client provices a 'svc_lookup' symbol, same name as
 libntirpc provides, complete different functionality

So far so good. But I would have expected the compiler to have populated
the function pointers in snapview-client's fops table at compile time;
the dynamic loader should not have been needed to resolve
snapview-client's svc_lookup, because it was (should have been) already
resolved at compile time.

And in fact it is, but, there are semantics for global (.globl) symbols
and run-time linkage that are biting us.



Hi all,

I am hitting a similar type of collision on ganesha 2.4. In ganesha 2.4,
we introduced a stackable mdcache on top of every FSAL. Its lookup
function (mdc_lookup) has a similar signature to the gluster md-cache
lookup fop. In my case ganesha always picks up mdc_lookup from its own
layer, not from the gfapi graph. When I disabled md-cache it worked
perfectly. As Soumya suggested before, do we need to change every xlator
fop to static?


The xlator fops are already effectively static, at least in 3.8 and later.

Starting in 3.8 the xlators are linked with an export map that only
exposes init(), fini(), fops, cbks, options, notify(), mem_acct_init(),
reconfigure(), and dumpops. (A few xlators export other symbols too.)

If this is biting us in 3.7 then we need to make the mdcache fops static.

This isn't C++, so all it takes for a symbol name collision is for the
functions to have the same name, i.e. mdc_lookup() in this case. :-/


Thanks Kaleb for the information. I was using gluster 3.7 in my setup.
--
Jiffin


--

Kaleb



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Mark all the xlator fops 'static '

2016-07-12 Thread Jiffin Tony Thottan



On 31/07/15 19:29, Kaleb S. KEITHLEY wrote:

On 07/30/2015 05:16 PM, Niels de Vos wrote:

On Thu, Jul 30, 2015 at 08:27:15PM +0530, Soumya Koduri wrote:

Hi,

With the applications using and loading different libraries, the function
symbols with the same name may get resolved incorrectly depending on the
order in which those libraries get dynamically loaded.

Recently we have seen an issue with 'snapview-client' xlator lookup fop -
'svc_lookup' which matched with one of the routines provided by libntirpc,
used by NFS-Ganesha. More details are in [1], [2].

Indeed, the problem seems to be caused in an execution flow like this:

1. nfs-ganesha main binary starts
2. the dynamic linker loads libntirpc (and others)
3. the dynamic linker retrieves symbols from the libntirpc (and others)
4. 'svc_lookup' is amoung the symbols added to a lookup table (or such)
5. during execution, ganesha loads plugins with dlopen()
6. the fsalgluster.so plugin is linked against libgfapi and gfapi gets
loaded
7. libgfapi retrieves the .vol file and loads the xlators, including
snapview-client
8. snapview-client provices a 'svc_lookup' symbol, same name as
libntirpc provides, complete different functionality

So far so good. But I would have expected the compiler to have populated
the function pointers in snapview-client's fops table at compile time;
the dynamic loader should not have been needed to resolve
snapview-client's svc_lookup, because it was (should have been) already
resolved at compile time.

And in fact it is, but, there are semantics for global (.globl) symbols
and run-time linkage that are biting us.




Hi all,

I am hitting a similar type of collision on ganesha 2.4. In ganesha 2.4,
we introduced a stackable mdcache on top of every FSAL. Its lookup
function (mdc_lookup) has a similar signature to the gluster md-cache
lookup fop. In my case ganesha always picks up mdc_lookup from its own
layer, not from the gfapi graph. When I disabled md-cache it worked
perfectly. As Soumya suggested before, do we need to change every xlator
fop to static?
need to change every xlator fop to static?

Regards,
Jiffin




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Query!

2016-06-17 Thread Jiffin Tony Thottan



On 17/06/16 18:01, ABHISHEK PALIWAL wrote:

Hi,

I am using Gluster 3.7.6 and performing plug-in/plug-out of the board,
but I am getting the following brick logs after plugging in the board
again:


[2016-06-17 07:14:36.122421] W [trash.c:1858:trash_mkdir] 
0-c_glusterfs-trash: mkdir issued on /.trashcan/, which is not permitted
[2016-06-17 07:14:36.122487] E [MSGID: 115056] 
[server-rpc-fops.c:509:server_mkdir_cbk] 0-c_glusterfs-server: 9705: 
MKDIR /.trashcan (----0001/.trashcan) ==> 
(Operation not permitted) [Operation not permitted]
[2016-06-17 07:14:36.139773] W [trash.c:1858:trash_mkdir] 
0-c_glusterfs-trash: mkdir issued on /.trashcan/, which is not permitted
[2016-06-17 07:14:36.139861] E [MSGID: 115056] 
[server-rpc-fops.c:509:server_mkdir_cbk] 0-c_glusterfs-server: 9722: 
MKDIR /.trashcan (----0001/.trashcan) ==> 
(Operation not permitted) [Operation not permitted]



Could anyone tell me the reason behind this failure, i.e. when and why
these logs occur?


This error can be seen only if you try to create .trashcan from the mount.
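For example, issuing the mkdir from a client mount reproduces the same
message in the brick log (a sketch; the mount point is an assumption):

mkdir /mnt/glusterfs/.trashcan
# fails with "Operation not permitted", and the brick logs the
# trash_mkdir warning shown above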
--
Jiffin



I have already posted the same query previously but did not get any response.

--




Regards
Abhishek Paliwal


___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Encountered "Directory not empty" error during remove a directory in .trashcan

2016-06-17 Thread Jiffin Tony Thottan



On 17/06/16 11:19, Sakshi Bansal wrote:

Hi,

We need some more information regarding the test case:
1) Are you running rm -rf in parallel, or an rm -rf and an ls from multiple mount
points?
2) On the bricks, do you see the directories holding files or directories?
3) Can you also check the brick log file for errors and provide the volume
configuration?

- Original Message -
From: "Deng ShaoHui" 
To: gluster-devel@gluster.org
Sent: Friday, June 17, 2016 8:08:12 AM
Subject: [Gluster-devel] Encountered "Directory not empty" error during remove 
a directory in .trashcan

Hi:

Recently I encountered an error msg "Directory not empty" during I execute "rm 
-rf" command in .trashcan.
I used glusterfs 3.7.6.

Here is my reproducer:
1. I created a volume titled "eee", then I turned the "features.trash" on.

2. Mount it on /volume/eee

3. Extract a package, let's say "php-5.6.19.tar.gz", and then delete it
immediately.
[root@cosmo eee]# ls -a
.  ..  php-5.6.19  php-5.6.19.tar.gz  .trashcan
[root@cosmo eee]# rm -rf php-5.6.19
[root@cosmo eee]# ls -a
.  ..  php-5.6.19.tar.gz  .trashcan

4. I could find it in .trashcan path.
[root@cosmo .trashcan]# ls
internal_op  php-5.6.19

5. I intend to delete them in .trashcan.
[root@cosmo .trashcan]# rm -rf php-5.6.19/
rm: cannot remove `php-5.6.19/Zend/tests/generators': Directory not empty
rm: cannot remove `php-5.6.19/Zend/tests/constants/fixtures/folder4': Directory 
not empty
rm: cannot remove `php-5.6.19/Zend/tests/constants/fixtures/folder3': Directory 
not empty
rm: cannot remove `php-5.6.19/Zend/tests/traits': Directory not empty
rm: cannot remove `php-5.6.19/ext/odbc': Directory not empty
rm: cannot remove `php-5.6.19/ext/gettext/tests/locale/en': Directory not empty
rm: cannot remove `php-5.6.19/ext/simplexml': Directory not empty
rm: cannot remove `php-5.6.19/ext/curl/tests': Directory not empty

6. I checked every directory in this mount point, and it seems like they are all
empty. But when I went into the brick directories used by this volume, I found
that these directories are not empty and still hold some files.

Has anyone hit this issue before? Or did I make a mistake in some step?
Thank you.

BR



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] IMPORTANT: Patches that need attention for 3.8

2016-06-08 Thread Jiffin Tony Thottan



On 09/06/16 09:04, Raghavendra Gowdappa wrote:


- Original Message -

From: "Poornima Gurusiddaiah" 
To: "Gluster Devel" , "Raghavendra Gowdappa" 
, "Atin Mukherjee"
, "Niels de Vos" , "Shyam" , 
"Rajesh Joseph"
, "Raghavendra Talur" 
Sent: Wednesday, June 8, 2016 6:34:34 PM
Subject: IMPORTANT: Patches that need attention for 3.8

Hi,

Here is the list of patches that need to go for 3.8. I request the
maintainers of each component mentioned here to review/merge the same at the
earliest:

Protocol/RPC:
http://review.gluster.org/#/c/14647/
http://review.gluster.org/#/c/14648/

Is there a deadline you are targeting these for? I can plan the reviews based 
on that.


The deadline for 3.8 GA is 14th June, 2016, so they should be merged on
the 3.8 branch before that.

--
Jiffin


Glusterd:
http://review.gluster.org/#/c/14626/

Lease:
http://review.gluster.org/#/c/14568/

Gfapi:
http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1319992

Regards,
Poornima


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC ~(in 45 minutes)

2016-06-08 Thread Jiffin Tony Thottan

Meeting summary
---
* Roll call  (jiffin, 12:02:49)

* kkeithley Saravanakmr will set up Coverity, clang, etc on public
  facing machine and run it regularly  (jiffin, 12:05:07)
  * ACTION: kkeithley Saravanakmr will set up Coverity, clang, etc on
public facing machine and run it regularly  (jiffin, 12:07:03)
  * ACTION: ndevos need to decide on how to provide/use debug builds
(jiffin, 12:07:35)
  * ACTION: ndevos to propose some test-cases for minimal libgfapi test
(jiffin, 12:07:44)

* Manikandan and gem to followup with kshlm/misc to get access to
  gluster-infra  (jiffin, 12:07:55)
  * ACTION: Manikandan and gem to followup with kshlm/misc/nigelb to get
access to gluster-infra  (jiffin, 12:09:50)

* ? decide how component maintainers/developers use the BZ queries or
  RSS-feeds for the Triaged bugs  (jiffin, 12:10:59)
  * ACTION: Saravanakmr will host bug triage meeting on June 14th 2016
(jiffin, 12:17:51)
  * ACTION: Manikandan will host bug triage meeting on June 21st 2016
(jiffin, 12:17:59)
  * ACTION: ndevos will host bug triage meeting on June 28th 2016
(jiffin, 12:18:08)

* Group Triage  (jiffin, 12:18:23)

* Open Floor  (jiffin, 12:39:07)

Meeting ended at 12:41:56 UTC.




Action Items

* kkeithley Saravanakmr will set up Coverity, clang, etc on public
  facing machine and run it regularly
* ndevos need to decide on how to provide/use debug builds
* ndevos to propose some test-cases for minimal libgfapi test
* Manikandan and gem to followup with kshlm/misc/nigelb to get access to
  gluster-infra
* Saravanakmr will host bug triage meeting on June 14th 2016
* Manikandan will host bug triage meeting on June 21st 2016
* ndevos will host bug triage meeting on June 28th 2016




Action Items, by person
---
* gem
  * Manikandan and gem to followup with kshlm/misc/nigelb to get access
to gluster-infra
* kkeithley
  * kkeithley Saravanakmr will set up Coverity, clang, etc on public
facing machine and run it regularly
* Saravanakmr
  * kkeithley Saravanakmr will set up Coverity, clang, etc on public
facing machine and run it regularly
  * Saravanakmr will host bug triage meeting on June 14th 2016
* **UNASSIGNED**
  * ndevos need to decide on how to provide/use debug builds
  * ndevos to propose some test-cases for minimal libgfapi test
  * Manikandan will host bug triage meeting on June 21st 2016
  * ndevos will host bug triage meeting on June 28th 2016




People Present (lines said)
---
* jiffin (50)
* kkeithley (9)
* hgowtham (6)
* rafi (4)
* zodbot (3)
* Saravanakmr (3)
* gem (3)
* skoduri (1)


On 07/06/16 16:50, Jiffin Tony Thottan wrote:

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
   (in your terminal, run: date -d "12:00 UTC")
- agenda:https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 45 minutes)

2016-06-07 Thread Jiffin Tony Thottan

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC ~(in 1.5 hours)

2016-06-01 Thread Jiffin Tony Thottan
facing machine and run it regularly
2. Manikandan
1. Manikandan and gem to followup with kshlm/misc to get access to
   gluster-infra
2. Manikandan will host bug triage meeting on June 21st 2016
3. ndevos
1. ndevos need to decide on how to provide/use debug builds
2. ndevos to propose some test-cases for minimal libgfapi test
3. ndevos will host bug triage meeting on June 28th 2016
4. Saravanakmr
1. kkeithley Saravanakmr will set up Coverity, clang, etc on public
   facing machine and run it regularly
2. Saravanakmr will host bug triage meeting on June 14th 2016
5. *UNASSIGNED*
1. Jiffin will host bug triage meeting on June 7th 2016
2. ? decide how component maintainers/developers use the BZ queries
   or RSS-feeds for the Triaged bugs



 People present (lines said)

1. jiffin (84)
2. ndevos (35)
3. kkeithley (19)
4. Manikandan (11)
5. Saravanakmr (8)
6. nigelb (5)
7. zodbot (3)
8. hgowtham (3)
9. msvbhat (2)
10. rafi (2)
11. partner (1)


Minutes of meeting
 zodbot: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-31/gluster_bug_triage.2016-05-31-12.00.html
 zodbot: Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-31/gluster_bug_triage.2016-05-31-12.00.txt
 zodbot: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-31/gluster_bug_triage.2016-05-31-12.00.log.html


Regards,
Jiffin

On 31/05/16 15:55, Jiffin Tony Thottan wrote:

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 1.5 hours)

2016-05-31 Thread Jiffin Tony Thottan

Hi,

This meeting is scheduled for anyone, who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Idea: Alternate Release process

2016-05-17 Thread Jiffin Tony Thottan
+1 Proposed alternative 2 - One LTS every year and non LTS stable 
release once in every 3 months



On 13/05/16 13:46, Aravinda wrote:

Hi,

Based on the discussion in last community meeting and previous 
discussions,


1. Too frequent releases are difficult to manage (without a dedicated
release manager).
2. Users want to see features early for testing or POC.
3. Backporting patches to more than two release branches is a pain.

Enclosed visualizations to understand existing release and support 
cycle and proposed alternatives.


- Each grid interval is 6 months
- Green rectangle shows supported release or LTS
- Black dots are minor releases till it is supported(once a month)
- Orange rectangle is non LTS release with minor releases(Support ends 
when next version released)


Enclosed following images
1. Existing Release cycle and support plan(6 months release cycle, 3 
releases supported all the time)
2. Proposed alternative 1 - One LTS every year and non LTS stable 
release once in every 2 months
3. Proposed alternative 2 - One LTS every year and non LTS stable 
release once in every 3 months
4. Proposed alternative 3 - One LTS every year and non LTS stable 
release once in every 4 months
5. Proposed alternative 4 - One LTS every year and non LTS stable 
release once in every 6 months (Similar to existing but only alternate 
one will become LTS)


Please do vote for the proposed alternatives about release intervals 
and LTS releases. You can also vote for the existing plan.


Do let me know if I missed anything.
regards
Aravinda
On 05/11/2016 12:01 AM, Aravinda wrote:


I couldn't find any solution for the backward incompatible changes. 
As you mentioned this model will not work for LTS.


How about adopting this only for non LTS releases? We will not have 
backward incompatibility problem since we need not release minor 
updates to non LTS releases.


regards
Aravinda
On 05/05/2016 04:46 PM, Aravinda wrote:


regards
Aravinda

On 05/05/2016 03:54 PM, Kaushal M wrote:

On Thu, May 5, 2016 at 11:48 AM, Aravinda  wrote:

Hi,

Sharing an idea to manage multiple releases without maintaining
multiple release branches and backports.

This idea is heavily inspired by the Rust release model(you may feel
exactly same except the LTS part). I think Chrome/Firefox also 
follows

the same model.

http://blog.rust-lang.org/2014/10/30/Stability.html

Feature Flag:
--
Compile-time variable to prevent compiling feature-related code when
disabled. (For example, ./configure --disable-geo-replication
or ./configure --disable-xml, etc.)

Plan
-
- Nightly build with all the features enabled(./build --nightly)

- All new patches will land in Master, if the patch belongs to a
   existing feature then it should be written behind that feature 
flag.


- If a feature is still work in progress then it will be only 
enabled in

   nightly build and not enabled in beta or stable builds.
   Once the maintainer thinks the feature is ready for testing 
then that

   feature will be enabled in beta build.

- Every 6 weeks, beta branch will be created by enabling all the
   features which maintainers thinks it is stable and previous beta
   branch will be promoted as stable.
   All the previous beta features will be enabled in stable unless it
   is marked as unstable during beta testing.

- LTS builds are same as stable builds but without enabling all the
   features. If we decide last stable build will become LTS release,
   then the feature list from last stable build will be saved as
   `features-release-.yaml`, For example:
   features-release-3.9.yaml`
   Same feature list will be used while building minor releases 
for the
   LTS. For example, `./build --stable --features 
features-release-3.8.yaml`


- Three branches, nightly/master, testing/beta, stable

To summarize,
- One stable release once in 6 weeks
- One Beta release once in 6 weeks
- Nightly builds every day
- LTS release once in 6 months or 1 year, Minor releases once in 6 
weeks.


Advantageous:
-
1. No more backports required to different release branches.(only
exceptional backports, discussed below)
2. Non feature Bugfix will never get missed in releases.
3. Release process can be automated.
4. Bugzilla process can be simplified.

Challenges:

1. Enforcing Feature flag for every patch
2. Tests also should be behind feature flag
3. New release process

Backports, Bug Fixes and Features:
--
- Release bug fix - Patch only to Master, which will be available in
   next beta/stable build.
- Urgent bug fix - Patch to Master and Backport to beta and stable
   branch, and early release stable and beta build.
- Beta bug fix - Patch to Master and Backport to Beta branch if 
urgent.
- Security fix - Patch to Master, Beta and last stable branch and 
build

   all LTS releases.
- Features - Patch only to Master, which will be available in
   stable/beta builds once feature becomes stable.


[Gluster-devel] [REMINDER] Adding release notes and DiSTAF for 3.8 features

2016-05-12 Thread Jiffin Tony Thottan

Hi all,

There are around 20 features targeted for the 3.8 release. All the
gluster-related code changes got merged by the first week of May (rc1 is
not yet done). There is still a long way to go, with approximately 3
weeks remaining. A public pad [1] has been created for adding drafted
release notes. Niels provided two sample release notes, for "SEEK
operations" and "Disable Gluster NFS by default". So I request the
remaining feature owners to have a look and update the same.

Also, it has become high time to add DiSTAF test cases. MS has already
mentioned that he is ready to provide assistance for any queries related
to the DiSTAF test suite.


Niels has already removed the features which are not planned from the
3.8 roadmap [2], but it still looks quite outdated. So please update the
roadmap with the progress you have achieved.

[1] https://public.pad.fsfe.org/p/glusterfs-3.8-release-notes
[2] https://www.gluster.org/community/roadmap/3.8/

Regards,
Niels & Jiffin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [IMPORTANT] Adding release notes for 3.8 features

2016-04-29 Thread Jiffin Tony Thottan

 Hi all,

The branching for 3.8 will happen on April 30th, 2016. Since we are
approaching the last stage of the 3.8 release, a public pad [1] has been
created for adding release notes. It should mention major changes that
may impact the overall working of a feature. For example, we are
planning to deprecate gluster nfs from 3.8, i.e. when a volume gets
started, the nfs server won't start by default. The user needs to turn
off the "nfs.disable" option to bring up gluster nfs.

So I kindly request all the feature owners to update the release notes
about their features.


Also please update your progress on 3.8 features in the roadmap [2]

[1] https://public.pad.fsfe.org/p/glusterfs-3.8-release-notes
[2] https://www.gluster.org/community/roadmap/3.8/

Thanks,
Niels & Jiffin



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC on 26th April 2016

2016-04-27 Thread Jiffin Tony Thottan

Hi all,

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.log.html



Meeting summary
---
* agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (jiffin,
  12:11:39)
* Roll call  (jiffin, 12:12:07)

* msvbhat  will look into lalatenduM's automated Coverity setup in
  Jenkins   which need assistance from an admin with more permissions
  (jiffin, 12:18:13)
  * ACTION: msvbhat  will look into lalatenduM's automated Coverity
setup in   Jenkins   which need assistance from an admin with more
permissions  (jiffin, 12:21:04)

* ndevos need to decide on how to provide/use debug builds (jiffin,
  12:21:18)
  * ACTION: Manikandan to followup with kashlm to get access to
gluster-infra  (jiffin, 12:24:18)
  * ACTION: Manikandan and Nandaja will update on bug automation
(jiffin, 12:24:30)

* msvbhat  provide a simple step/walk-through on how to provide
  testcases for the nightly rpm tests  (jiffin, 12:25:09)
  * ACTION: msvbhat  provide a simple step/walk-through on how to
provide testcases for the nightly rpm tests  (jiffin, 12:27:00)

* rafi needs to followup on #bug 1323895  (jiffin, 12:27:15)

* ndevos need to decide on how to provide/use debug builds (jiffin,
  12:30:44)
  * ACTION: ndevos need to decide on how to provide/use debug builds
(jiffin, 12:32:09)
  * ACTION: ndevos to propose some test-cases for minimal libgfapi test
(jiffin, 12:32:21)
  * ACTION: ndevos need to discuss about writing a script to update bug
assignee from gerrit patch  (jiffin, 12:32:31)

* Group triage  (jiffin, 12:33:07)

* openfloor  (jiffin, 12:52:52)

* gluster bug triage meeting schedule May 2016  (jiffin, 12:55:33)
  * ACTION: hgowtham will host meeting on 03/05/2016  (jiffin, 12:56:18)
  * ACTION: Saravanakmr will host meeting on 24/05/2016  (jiffin,
12:56:49)
  * ACTION: kkeithley_ will host meeting on 10/05/2016  (jiffin,
13:00:13)
  * ACTION: jiffin will host meeting on 17/05/2016  (jiffin, 13:00:28)

Meeting ended at 13:01:34 UTC.




Action Items

* msvbhat  will look into lalatenduM's automated Coverity setup in
  Jenkins   which need assistance from an admin with more permissions
* Manikandan to followup with kashlm to get access to gluster-infra
* Manikandan and Nandaja will update on bug automation
* msvbhat  provide a simple step/walk-through on how to provide
  testcases for the nightly rpm tests
* ndevos need to decide on how to provide/use debug builds
* ndevos to propose some test-cases for minimal libgfapi test
* ndevos need to discuss about writing a script to update bug assignee
  from gerrit patch
* hgowtham will host meeting on 03/05/2016
* Saravanakmr will host meeting on 24/05/2016
* kkeithley_ will host meeting on 10/05/2016
* jiffin will host meeting on 17/05/2016

People Present (lines said)
---
* jiffin (87)
* rafi1 (21)
* ndevos (10)
* hgowtham (8)
* kkeithley_ (6)
* Saravanakmr (6)
* Manikandan (5)
* zodbot (3)
* post-factum (2)
* lalatenduM (1)
* glusterbot (1)


Cheers,

Jiffin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-26 Thread Jiffin Tony Thottan



On 26/04/16 15:28, ABHISHEK PALIWAL wrote:

Hi Jiffin,

Do you have any clue on this? I am seeing some logs related to ACL in
the command output and some .so files in the glusterfs/tmp-a2.log file,
but there is no failure there.



Hi Abhishek,

Can you attach the log files (/var/log/glusterfs/tmp-a2.log)?
Also, you can try out ganesha, which can export gluster volumes as well
as other exports using a single server.
Right now ganesha only supports NFSv4 ACLs (not POSIX ACLs), and ganesha
is also better supported with gluster volumes when compared with knfs.
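If you want to stay with NFSv3 and POSIX ACLs, the in-built gluster NFS
route asked about earlier in this thread would look roughly like this (a
sketch built from the commands already posted in this thread; the volume
name and addresses are taken from the original mail):

gluster volume set c_glusterfs nfs.disable off
gluster volume set c_glusterfs nfs.acl on
# on the client, mount the volume via gluster NFS instead of knfs:
mount -t nfs -o acl,vers=3 10.32.0.48:/c_glusterfs /tmp/e
setfacl -m u:application:rw /tmp/e/usr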


--
Jiffin


Regards,
Abhishek

On Tue, Apr 26, 2016 at 1:17 PM, ABHISHEK PALIWAL 
<abhishpali...@gmail.com <mailto:abhishpali...@gmail.com>> wrote:




On Tue, Apr 26, 2016 at 12:54 PM, Jiffin Tony Thottan
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:



On 26/04/16 12:22, ABHISHEK PALIWAL wrote:



On Tue, Apr 26, 2016 at 12:18 PM, Jiffin Tony Thottan
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:

On 26/04/16 12:11, ABHISHEK PALIWAL wrote:

Hi,
I want to enable ACL support on gluster volume using the
kernel NFS ACL support so I have followed below steps
after creation of gluster volume:


Is there any specific reason to use knfs instead of the built-in
gluster nfs server?

Yes, because we have other NFS mounted volume as well in system.


Did you mean to say that knfs is running on each gluster node
(i.e., on the bricks)?

Yes.






1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
2.update the /etc/exports file
/tmp/a2
10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
3.exportfs –ra
4.gluster volume set c_glusterfs nfs.acl off
5.gluster volume set c_glusterfs nfs.disable on
we have disabled above two options because we are using
Kernel NFS ACL support and that is already enabled.
on other board mounting it using
mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
setfacl -m u:application:rw /tmp/e/usr
setfacl: /tmp/e/usr: Operation not supported


Can you please check the clients for the hints ?

What I need to check here?


can u check /var/log/glusterfs/tmp-a2.log?


There is no failure on the server side in the /var/log/glusterfs/tmp-a2.log
file, but gluster is not running on the board where I am getting this
failure, so it is not possible to check /var/log/glusterfs/tmp-a2.log
there.






and application is the system user like below
application:x:102:0::/home/application:/bin/sh

I don't know why I am getting this failure when I have enabled all
the ACL support in each step.

Please let me know how can I enable this.

Regards,
Abhishek



--
Jiffin



___
Gluster-devel mailing list
Gluster-devel@gluster.org <mailto:Gluster-devel@gluster.org>
http://www.gluster.org/mailman/listinfo/gluster-devel





-- 





Regards
Abhishek Paliwal





-- 





Regards
Abhishek Paliwal




--




Regards
Abhishek Paliwal


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-26 Thread Jiffin Tony Thottan



On 26/04/16 12:22, ABHISHEK PALIWAL wrote:



On Tue, Apr 26, 2016 at 12:18 PM, Jiffin Tony Thottan 
<jthot...@redhat.com <mailto:jthot...@redhat.com>> wrote:


On 26/04/16 12:11, ABHISHEK PALIWAL wrote:

Hi,
I want to enable ACL support on gluster volume using the kernel
NFS ACL support so I have followed below steps after creation of
gluster volume:


Is there any specific reason to use knfs instead of the built-in gluster
nfs server?

Yes, because we have other NFS mounted volume as well in system.


Did you mean to say that knfs is running on each gluster node (i.e., on the 
bricks)?






1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
2.update the /etc/exports file
/tmp/a2 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
3.exportfs –ra
4.gluster volume set c_glusterfs nfs.acl off
5.gluster volume set c_glusterfs nfs.disable on
we have disabled above two options because we are using Kernel
NFS ACL support and that is already enabled.
on other board mounting it using
mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
setfacl -m u:application:rw /tmp/e/usr
setfacl: /tmp/e/usr: Operation not supported


Can you please check the clients for the hints ?

What I need to check here?


can u check /var/log/glusterfs/tmp-a2.log?





and application is the system user like below
application:x:102:0::/home/application:/bin/sh

I don't know why I am getting this failure when I have enabled all the ACL
support in each step.

Please let me know how can I enable this.

Regards,
Abhishek



--
Jiffin



___
Gluster-devel mailing list
Gluster-devel@gluster.org <mailto:Gluster-devel@gluster.org>
http://www.gluster.org/mailman/listinfo/gluster-devel





--




Regards
Abhishek Paliwal


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-26 Thread Jiffin Tony Thottan



On 26/04/16 12:18, Jiffin Tony Thottan wrote:

On 26/04/16 12:11, ABHISHEK PALIWAL wrote:

Hi,
I want to enable ACL support on gluster volume using the kernel NFS 
ACL support so I have followed below steps after creation of gluster 
volume:


Is there any specific reason to use knfs instead of the built-in gluster nfs 
server?



1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
2.update the /etc/exports file
/tmp/a2 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
3.exportfs –ra
4.gluster volume set c_glusterfs nfs.acl off
5.gluster volume set c_glusterfs nfs.disable on
we have disabled above two options because we are using Kernel NFS 
ACL support and that is already enabled.

on other board mounting it using
mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
setfacl -m u:application:rw /tmp/e/usr
setfacl: /tmp/e/usr: Operation not supported


Can you please check the clients for the hints ?


What I intended to say is: can you please check the client logs and, if 
possible, also take a packet trace from the server machine?





and application is the system user like below
application:x:102:0::/home/application:/bin/sh

I don't know why I am getting this failure when I have enabled all the ACL 
support in each step.


Please let me know how can I enable this.

Regards,
Abhishek



--
Jiffin



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-26 Thread Jiffin Tony Thottan

On 26/04/16 12:11, ABHISHEK PALIWAL wrote:

Hi,
I want to enable ACL support on gluster volume using the kernel NFS 
ACL support so I have followed below steps after creation of gluster 
volume:


Is there any specific reason to use knfs instead of the built-in gluster nfs 
server?



1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
2.update the /etc/exports file
/tmp/a2 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
3.exportfs –ra
4.gluster volume set c_glusterfs nfs.acl off
5.gluster volume set c_glusterfs nfs.disable on
we have disabled above two options because we are using Kernel NFS ACL 
support and that is already enabled.

on other board mounting it using
mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
setfacl -m u:application:rw /tmp/e/usr
setfacl: /tmp/e/usr: Operation not supported


Can you please check the clients for the hints ?
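
For reference, a few quick client-side checks that usually help here (a
sketch, assuming a Linux NFS client; adjust paths to match your setup):

    # confirm what the client actually negotiated and that 'acl' survived as a mount option
    grep /tmp/e /proc/mounts
    nfsstat -m

    # make sure the NFSv3 ACL side protocol (RPC program 100227) is registered on the server
    rpcinfo -p 10.32.0.48 | grep 100227

    # retry the same operation on the server itself, directly on the re-exported path
    setfacl -m u:application:rw /tmp/a2/usr && getfacl /tmp/a2/usr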


and application is the system user like below
application:x:102:0::/home/application:/bin/sh

I don't know why I am getting this failure when I have enabled all the ACL 
support in each step.


Please let me know how can I enable this.

Regards,
Abhishek



--
Jiffin



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] pNFS server for FreeBSD using GlusterFS

2016-04-25 Thread Jiffin Tony Thottan

CCing ganesha list

On 22/04/16 04:18, Rick Macklem wrote:

Jiffin Tony Thottan wrote:


On 21/04/16 04:43, Rick Macklem wrote:

Hi,

Just to let you know, I did find the email responses to my
queries some months ago helpful and I now have a pNFS server
for FreeBSD using the GlusterFS port at the alpha test stage.
So far I have not made any changes to GlusterFS except the little
poll() patch that was already discussed on this list last December.

Anyhow, if anyone is interested in taking a look at this,
I have a primitive document at:
http://people.freebsd.org/~rmacklem/pnfs-setup.txt
that will hopefully give you a starting point.

Thanks to everyone that helped via email a few months ago, rick

Hi Rick,

Awesome work, man. You have cracked the flex files layout for gluster volumes.

I am still wondering why you picked knfs instead of nfs-ganesha?

I don't believe that ganesha will be ported to FreeBSD any time soon. If it


I believe the support is already there. CCing ganesha list to confirm 
the same.



is ported, that would be an alternative for FreeBSD users to consider.
(I work on the kernel nfsd as a hobby, so I probably wouldn't do this myself.)


There will be a
lot of context switches
between kernel space and user space, which may affect the metadata
performance.

Yes, I do see a lot of context switches.

rick


I still remember the discussion [1] in which I suggested using the
ganesha server as the MDS.
Also, a gluster volume usually won't be exported using knfs.

--
Jiffin


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] pNFS server for FreeBSD using GlusterFS

2016-04-20 Thread Jiffin Tony Thottan



On 21/04/16 07:53, Jiffin Tony Thottan wrote:



On 21/04/16 04:43, Rick Macklem wrote:

Hi,

Just to let you know, I did find the email responses to my
queries some months ago helpful and I now have a pNFS server
for FreeBSD using the GlusterFS port at the alpha test stage.
So far I have not made any changes to GlusterFS except the little
poll() patch that was already discussed on this list last December.

Anyhow, if anyone is interested in taking a look at this,
I have a primitive document at:
   http://people.freebsd.org/~rmacklem/pnfs-setup.txt
that will hopefully give you a starting point.

Thanks to everyone that helped via email a few months ago, rick


Hi Rick,

Awesome work, man. You have cracked the flex files layout for gluster 
volumes.


I am still wondering why you picked knfs instead of nfs-ganesha? There 
will be a lot of context switches
between kernel space and user space, which may affect the metadata 
performance.
I still remember the discussion [1] in which I suggested using the 
ganesha server as the MDS.

Also, a gluster volume usually won't be exported using knfs.



Sorry I missed link in my previous mail

[1] http://www.gluster.org/pipermail/gluster-devel/2015-June/045433.html


--
Jiffin


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] pNFS server for FreeBSD using GlusterFS

2016-04-20 Thread Jiffin Tony Thottan



On 21/04/16 04:43, Rick Macklem wrote:

Hi,

Just to let you know, I did find the email responses to my
queries some months ago helpful and I now have a pNFS server
for FreeBSD using the GlusterFS port at the alpha test stage.
So far I have not made any changes to GlusterFS except the little
poll() patch that was already discussed on this list last December.

Anyhow, if anyone is interested in taking a look at this,
I have a primitive document at:
   http://people.freebsd.org/~rmacklem/pnfs-setup.txt
that will hopefully give you a starting point.

Thanks to everyone that helped via email a few months ago, rick


Hi Rick,

Awesome work, man. You have cracked the flex files layout for gluster volumes.

I am still wondering why you picked knfs instead of nfs-ganesha? There will be 
a lot of context switches
between kernel space and user space, which may affect the metadata 
performance.
I still remember the discussion [1] in which I suggested using the 
ganesha server as the MDS.

Also, a gluster volume usually won't be exported using knfs.

--
Jiffin


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [REMINDER] Adding DiSTAF test cases for 3.8 Feature

2016-04-14 Thread Jiffin Tony Thottan

Hi all,

As per the 3.8 feature matrix, feature owners should add distributed 
testing with DiSTAF.
The DiSTAF test framework [1] has been merged on the master branch. Thanks 
for all the effort
put in by MS and team to make it possible. I request all the 
feature owners to
look through the user documentation [2,3] and add their test cases on 
feature completion.
Let's keep up the pace and meet all the requirements for the features before 
April 30th.


[1] http://review.gluster.org/#/c/13853/
[2] https://github.com/gluster/distaf/blob/master/README.md
[3] https://github.com/gluster/distaf/blob/master/docs/HOWTO.md

Thanks,
Niels and Jiffin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Improving subdir export for NFS-Ganesha

2016-03-15 Thread Jiffin Tony Thottan



On 15/03/16 12:23, Atin Mukherjee wrote:


On 03/15/2016 11:48 AM, Jiffin Tony Thottan wrote:

Hi all,

Subdir export is one of the key features of an NFS server. NFS-Ganesha
already supports subdir export,
  but it has a lot of limitations when it is integrated with gluster.

Current Implementation :
Following steps are required for a subdir export
* export volume using ganesha.enable option
* edit the export configuration file by adding subdir options
* do refresh-config in that node using ganesha-ha.sh
  * limitation : multiple directories cannot be exported at a time via the
script.
            If a user needs to do that (it is possible), all the steps have to
            be done manually, which includes creating the export conf file,
            using the latest export id, including it in the ganesha conf, etc.
            It also becomes mandatory to export the root before exporting a
            subdir (a rough sketch of this manual flow follows below).
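
To make the manual flow concrete, here is a rough sketch of what exporting one
subdirectory looks like today (export id, volume name, paths and the
ganesha-ha.sh location are placeholders; treat the block as illustrative, not
copy-paste ready):

    cat > /etc/ganesha/exports/export.testvol-dir1.conf <<'EOF'
    EXPORT {
        Export_Id = 3;               # must not clash with an existing export id
        Path = "/testvol/dir1";
        Pseudo = "/testvol/dir1";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";
            Volume = "testvol";
            Volpath = "/dir1";       # the subdirectory being exported
        }
    }
    EOF
    # include the new file from ganesha.conf and reload the exports on this node
    echo '%include "/etc/ganesha/exports/export.testvol-dir1.conf"' >> /etc/ganesha/ganesha.conf
    /usr/libexec/ganesha/ganesha-ha.sh --refresh-config /etc/ganesha testvol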

Suggested approach :

  * Introduce a new volume set command "ganesha.subdir" which will handle the
above mentioned issues cleanly.
For example: gluster volume set <volname> ganesha.subdir
<path1,path2,path3 ...>
If you want to unexport path2, use the same command, this time leaving out path2:
gluster volume set <volname> ganesha.subdir <path1,path3 ...>. (Is a
different option required?)

How do you handle a case where you have to unexport all the paths?

The root of the volume should be exported only using the ganesha.enable
option.
This requires a lot of additions in the glusterd code base and minor
changes in the snapshot functionality.

Could you detail out what all changes will be required in glusterd
codebase when volume set ganesha.subdir  is executed?
Based on that we can only take a call whether its feasible to take this
in 3.7.x or move it to 3.8.


This is just a rough estimation :
1.) glusterd/cli for introducing new option
2.) need to modify functions like ganesha_manage_export() to accommodate 
new option

3.) changes related to ganesha scripts
4.) minor modification to snapshot 
functionality(glusterd_copy_nfs_ganesha_file)  for ganesha

approximately I expect around 100-150 lines of code to be added

--
Jiffin

Can the above mentioned improvement be targeted for a 3.7.x release (3.7.10 or
3.7.11), or should I move it to the 3.8 release?
Please provide your valuable feedback on the same.

Please Note : It is not related to subdir export for fuse mount.

Regards,
Jiffin




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] REMINDER: MInutes of Gluster Community Bug Triage meeting at 12:00 UTC on 1st March, 2016

2016-03-06 Thread Jiffin Tony Thottan



On 01/03/16 14:32, Jiffin Tony Thottan wrote:

Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.



Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-24/gluster_bug_triage.2015-11-24-12.00.html 

Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-24/gluster_bug_triage.2015-11-24-12.00.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-24/gluster_bug_triage.2015-11-24-12.00.log.html


Meeting summary
Meeting started by jiffin at 12:00:19 UTC. The full logs are available
at
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-01/gluster_bug_triage.2016-03-01-12.00.log.html
.



Meeting summary
---
* Roll Call  (jiffin, 12:00:27)

* Manikandan and Nandaja will update on bug automation  (jiffin,
  12:05:05)

* Scheduling moderators for Gluster Community Bug Triage meeting for a
  month  (jiffin, 12:06:32)
  * rafi will host bug triage on MArch 8th  (jiffin, 12:11:07)
  * ggarg will host on March 8  (jiffin, 12:15:12)
  * skoduri will host meeting on March 15th  (jiffin, 12:18:04)
  * rafi will host meeting on March 22nd  (jiffin, 12:19:57)
  * Manikandan will host meeting on March 29th  (jiffin, 12:21:26)

* Group Triage  (jiffin, 12:21:53)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage
(jiffin, 12:21:59)
  * LINK:
http://gluster.readthedocs.org/en/latest/Contributors-Guide/Bug-Triage/
contains more details about the triaging itself  (jiffin, 12:22:35)

* Open Floor  (jiffin, 12:45:47)
  * no more pending bugs  (jiffin, 12:47:52)

Meeting ended at 12:50:14 UTC.



People Present (lines said)
---
* jiffin (83)
* Manikandan (53)
* hgowtham (19)
* ggarg (18)
* zodbot (3)
* aravindavk (2)
* atinm (2)
* glusterbot (1)


Next week ggarg will host Gluster community  bug triage meeting.


Thank you
Jiffin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 3 hours)

2016-03-01 Thread Jiffin Tony Thottan

Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thank you
Jiffin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Core from gNFS process

2016-01-14 Thread Jiffin Tony Thottan



On 14/01/16 14:28, Jiffin Tony Thottan wrote:

Hi,

The core is generated when the encryption xlator is enabled:

[2016-01-14 08:13:15.740835] E 
[crypt.c:4298:master_set_master_vol_key] 0-test1-crypt: FATAL: missing 
master key
[2016-01-14 08:13:15.740859] E [MSGID: 101019] 
[xlator.c:429:xlator_init] 0-test1-crypt: Initialization of volume 
'test1-crypt' failed, review your volfile again
[2016-01-14 08:13:15.740890] E [MSGID: 101066] 
[graph.c:324:glusterfs_graph_init] 0-test1-crypt: initializing 
translator failed
[2016-01-14 08:13:15.740904] E [MSGID: 101176] 
[graph.c:670:glusterfs_graph_activate] 0-graph: init failed
[2016-01-14 08:13:15.741676] W [glusterfsd.c:1231:cleanup_and_exit] 
(-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x307) [0x40d287] 
-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x117) [0x4086c7] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x4d) [0x407e1d] ) 0-: 
received signum (0), shutting down





Forgot to mention this in the last mail: the crypt xlator needs a master key 
to be configured before the translator is enabled, and that is what causes the issue.
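
For anyone hitting the same trace, a minimal sketch of the order that avoids
it (the option names are from the experimental disk-encryption feature docs as
I remember them, so please double-check them with `gluster volume set help`;
the key path is a placeholder):

    # generate a master key and place it on every node that will mount the
    # volume (including the gNFS node, since the crypt xlator runs client-side)
    openssl rand -hex 32 > /etc/glusterfs/test1-master.key
    chmod 600 /etc/glusterfs/test1-master.key

    # point the volume at the key *before* turning encryption on
    gluster volume set test1 encryption.master-key /etc/glusterfs/test1-master.key
    gluster volume set test1 features.encryption on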

--
Jiffin

With regards,
Jiffin


On 14/01/16 12:28, Raghavendra Talur wrote:

Hi Jiffin and Soumya,

Ravishankar told me about core generated by gNFS process during 
./tests/bugs/snapshot/bug-1140162-file-snapshot-features-encrypt-opts-validation.t. 



Here is console output:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17525/console 



And here is the backtrace for convenience

(gdb) thread apply all bt

Thread 9 (LWP 12499):
#0  0x7f622f4fda0e in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
./lib64/libpthread.so.0

#1  0x7f6230258a61 in syncenv_task (proc=0x7f621c0332f0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/syncop.c:603

#2  0x7f6230258d08 in syncenv_processor (thdata=0x7f621c0332f0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/syncop.c:695

#3  0x7f622f4f9a51 in start_thread () from ./lib64/libpthread.so.0
#4  0x7f622ee6393d in clone () from ./lib64/libc.so.6

Thread 8 (LWP 12497):
#0  0x7f622edc2e2c in vfprintf () from ./lib64/libc.so.6
#1  0x7f622edea752 in vsnprintf () from ./lib64/libc.so.6
#2  0x7f6230243f70 in gf_vasprintf (string_ptr=0x7f6220a66ba8,
format=0x7f62302aeacd "[%s] %s [%s:%d:%s] %d-%s: ", 
arg=0x7f6220a66a70)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/mem-pool.c:219

#3  0x7f62302440ad in gf_asprintf (string_ptr=0x7f6220a66ba8,
format=0x7f62302aeacd "[%s] %s [%s:%d:%s] %d-%s: ")
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/mem-pool.c:239
#4  0x7f623021d387 in _gf_log (domain=0x7f621c00cde0 
"d_exit+0x87) [0x407cdf]",
file=0x7f622272b468 <error: Cannot access memory at address 0x7f622272b468>,
function=0x7f622272d130 <error: Cannot access memory at address 0x7f622272d130>, line=2895,
level=GF_LOG_INFO, fmt=0x7f622272c690 <error: Cannot access memory at address 0x7f622272c690>)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/logging.c:2216

#5  0x7f6222725d99 in ?? ()
#6  0x0005 in ?? ()
#7  0x in ?? ()

Thread 7 (LWP 12460):
#0  0x7f622f4fda0e in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
./lib64/libpthread.so.0

#1  0x7f6230258a61 in syncenv_task (proc=0x241b210)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/syncop.c:603

#2  0x7f6230258d08 in syncenv_processor (thdata=0x241b210)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/syncop.c:695

#3  0x7f622f4f9a51 in start_thread () from ./lib64/libpthread.so.0
#4  0x7f622ee6393d in clone () from ./lib64/libc.so.6

Thread 6 (LWP 12476):
#0  0x7f622f5002e4 in __lll_lock_wait () from 
./lib64/libpthread.so.0

#1  0x7f622f4fb588 in _L_lock_854 () from ./lib64/libpthread.so.0
#2  0x7f622f4fb457 in pthread_mutex_lock () from 
./lib64/libpthread.so.0
#3  0x7f623021ca6c in _gf_msg (domain=0x4117ef <error: Cannot access memory at address 0x4117ef>,

---Type <return> to continue, or q <return> to quit---
file=0x411468 ,
function=0x4125d0 <__FUNCTION__.18918> memory at address 0x4125d0>, line=1231,

level=GF_LOG_WARNING, errnum=0, trace=1, msgid=100032,
fmt=0x411bb0 )
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/logging.c:2055

#4  0x00407cdf in cleanup_and_exit (signum=0)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd.c:1231
#5  0x00409ee4 in glusterfs_process_volfp (ctx=0x23f6010, 
fp=0x7f621c001400)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd.c:2202
#6  0x0040e71d in mgmt_getspec_cbk (req=0x7f621c001a4c, 
iov=0x7f621c001a8c, count=1,

myframe=0x7f621c00135c)
at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/

Re: [Gluster-devel] freebsd smoke failure

2016-01-11 Thread Jiffin Tony Thottan



On 12/01/16 11:19, Atin Mukherjee wrote:

I've been observing freebsd smoke failure for all the patches for last
few days with the following error:

mkdir: /usr/local/lib/python2.7/site-packages/gluster: Permission denied
mkdir: /usr/local/lib/python2.7/site-packages/gluster: Permission denied

Can any one from infra team can help here?

Niels sent a patch for the same: http://review.gluster.org/#/c/13208/
--
Jiffin



~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC (~in 45 minutes)

2015-11-30 Thread Jiffin Tony Thottan



On 24/11/15 16:45, Jiffin Tony Thottan wrote:

Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
 (https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda:https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Sorry for the delay.

Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-24/gluster_bug_triage.2015-11-24-12.00.html 

Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-24/gluster_bug_triage.2015-11-24-12.00.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-24/gluster_bug_triage.2015-11-24-12.00.log.html


Meeting summary

agenda: https://public.pad.fsfe.org/p/gluster-bug-triage 
(jiffin, 12:00:28)


Roll Call (jiffin, 12:00:36)
Group Triage (jiffin, 12:05:51)
https://public.pad.fsfe.org/p/gluster-bugs-to-triage (jiffin, 
12:06:00)
There are 7 new bugs + 10 backlog bugs(decided per last 
meeting) (jiffin, 12:06:57)
http://www.gluster.org/community/documentation/index.php/Features/worm 
(Humble, 12:18:25)
kkeithley_ will send a mail to gluster-dev regarding target 
release for older bugs (jiffin, 12:40:20)
ACTION: kkeithley_ will send a mail to gluster-dev regarding 
target release for older bugs (jiffin, 12:41:01)


Open Floor (jiffin, 12:41:16)
a. Nandaja and Manikandan will keep updating about Automated 
bug work flow in gluster-dev ML (Manikandan, 12:45:16)
b. Manikandan will host next Gluster Community Bug Triage 
meeting (Manikandan, 12:47:33)





Meeting ended at 12:50:53 UTC (full logs).

Action items

kkeithley_ will send a mail to gluster-dev regarding target release 
for older bugs




Action items, by person

kkeithley_
kkeithley_ will send a mail to gluster-dev regarding target 
release for older bugs




People present (lines said)

1.  jiffin (44)
2.  Humble (30)
3.  Manikandan (19)
4.  kkeithley_ (11)
5.  hgowtham (6)
6.  Saravana_ (3)
7.  ashiq (3)
8.  zodbot (3)
9.  overclk (1)
10.  sac (1)
11.  gem (1)


Thanks,
Jiffin




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 45 minutes)

2015-11-24 Thread Jiffin Tony Thottan

Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda:https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Jiffin



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Implementing Flat Hierarchy for trashed files

2015-08-18 Thread Jiffin Tony Thottan

Comments inline.

On 18/08/15 09:54, Niels de Vos wrote:

On Mon, Aug 17, 2015 at 06:20:50PM +0530, Anoop C S wrote:

Hi all,

As we move forward, in order to fix the limitations with current trash
translator we are planning to replace the existing criteria for trashed
files inside trash directory with a general flat hierarchy as described
in the following sections. Please have your thoughts on following
design considerations.

Current implementation
==
* Trash translator resides on glusterfs server stack just above posix.
* Trash directory (.trashcan) is created during volume start and is
   visible under root of the volume.
* Each trashed file is moved (renamed) to trash directory with an
   appended time stamp in the file name.
* Exact directory hierarchy (w.r.t the root of volume) is maintained
   inside trash directory whenever a file is deleted/truncated from a
   directory

Outstanding issues
==
* Since renaming occurs at the server side, client-side is unaware of
   trash doing rename or create operations.
* As a result files/directories may not be visible from mount point.

This might be something upcall could help with. If the trash xlator is
placed above upcall, any clients interested in the .trashcan directory
(or subdirs) could get an in/revalidation request.


* Files/Directories created from from trash translator will not have
   gfid associated with it until lookup is performed.

When a client receives an invalidation of the parent directory (from
upcall), a LOOKUP will follow on the next request.


If I understand it correctly, the solution becomes more complex if we integrate 
both the translator and upcall together:
1.) An upcall notification can be sent to a client only if it has accessed 
.trashcan.
2.) There would need to be a translator on the client side to initiate a lookup 
after receiving the upcall notification.
3.) Performance hit. Say file `foo` is present in a/b/c/. We need to 
create the path a/b/c/ inside the trash directory.
So ideally the trash xlator will first create directory 'a', then send an 
upcall notification to all of the clients, and the clients will then initiate 
a lookup on 'a'
and perform gfid healing on that directory. After that it will create `b` 
and repeat the same procedure.

Proposed Flat hierarchy
===

I'm missing a bit of info here, what limitations need to be addressed?


All the above mentioned outstanding issues can be addressed by the flat 
hierarchy.

* Instead of creating the whole directory under trash, we will rename
   the file and place it directly under trash directory (of course with
   appended time stamp).
* Directory hierarchy can be stored via either of the following two
   approaches:
(a) File name will contain the whole path with time stamp
appended
(b) Store whole hierarchy as an xattr

If this is needed, definitely go with (b). Filenames have a limit, and
the full path (directories + filename + timestamp) could surely hit
that.


Thanks for the suggestion.
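
As a rough illustration of option (b) (the xattr names below are purely
hypothetical placeholders for whatever key the implementation finally uses):

    # a deleted file lands flat inside .trashcan with a timestamp appended ...
    ls /mnt/vol/.trashcan
    # foo_2015-08-18_093012

    # ... and its original location travels with it as an xattr
    getfattr -n trusted.glusterfs.trash.original-path \
        /mnt/vol/.trashcan/foo_2015-08-18_093012
    # trusted.glusterfs.trash.original-path="/a/b/c/foo"

    # restore could then be triggered by the explicit setfattr call proposed
    # further down in this thread (again, the key name is hypothetical)
    setfattr -n trusted.glusterfs.trash.restore -v yes \
        /mnt/vol/.trashcan/foo_2015-08-18_093012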


Other enhancements
==

Have these been filed as bugs/RFEs? If not, please do so and include a
good description of the work that is needed. Maybe others in the Gluster
community are interested in providing patches, and details on what to do
is very helpful.


Sure. We will file separate RFEs as soon as possible and send them in a 
different mail.



Thanks,
Niels


* Create the trash directory only when trash xlator is enabled.
* Operations such as unlink, rename etc will be prevented on trash
  directory only when trash xlator is enabled.
* A new trash helper translator on client side (loaded only when trash
  is enabled) to resolve split brain issues with truncation of files.
* Restore files from trash with the help of an explicit setfattr call.

Thanks  Regards,
-Anoop C S
-Jiffin Tony Thottan
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



--
Jiffin



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD regression failures

2015-08-17 Thread Jiffin Tony Thottan



On 17/08/15 12:29, Vijaikumar M wrote:



On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:

Hi,

The NetBSD regression tests are continuously failing with errors in 
the following tests:


./tests/basic/mount-nfs-auth.t

I will look into this issue.
--
Jiffin

./tests/basic/quota-anon-fd-nfs.t
quota-anon-fd-nfs.t has known issues with NFS client caching, so it is 
marked as a bad test; the final result will be marked as success even if this 
test fails.






Is there any recent change that is triggering this behaviour? Also, 
currently only one machine is running NetBSD tests. Can someone with 
access to Jenkins bring up a few more slaves to run NetBSD 
regressions in parallel?


Regards,
Avra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Future of access-control translator ?

2015-06-10 Thread Jiffin Tony Thottan

Hi,

In the current implementation of the access-control translator, it takes 
care of the following:

a.) conversion of the acl xattr to/from the gluster-supported posix-acl format
(at the backend the acl is stored as an xattr known as system.posix_acl* on Linux)
b.) caching that posix-acl in its context.
c.) enforcing permissions based on the cached entries.
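
For context, this is what (a) works with on a Linux brick (a quick sketch; the
mount point and brick path are placeholders):

    # set a POSIX ACL through a mount that loads the acl xlator
    setfacl -m u:alice:rw /mnt/vol/file

    # on the brick, the ACL is persisted as the standard POSIX ACL xattr
    getfattr -m . -d -e hex /bricks/b1/file | grep posix_acl
    # system.posix_acl_access=0x0200...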

This translator is loaded on the server side by default, and on the 
client side if the acl mount option is specified.


A new portable acl conversion was introduced in posix by [1] to fix the 
limitations in (a); refer to mail thread [2]
for further details. Enforcement can be handled by the posix translator (in 
that case, caching would be redundant,

because the same permissions are checked twice).

Therefore, should we remove the access-control translator entirely from the 
volume graph, or
retain the translator for (b) and (c) by modifying them to work with the 
standard acl format?


Please provide your thoughts on the same.

[1] : http://review.gluster.org/#/c/9627/
[2] : http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/9036

Thanks  and Regards,
Jiffin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] using GlusterFS to build an NFSv4.1 pNFS server

2015-06-03 Thread Jiffin Tony Thottan



On 03/06/15 04:41, Niels de Vos wrote:

On Tue, Jun 02, 2015 at 06:18:54PM -0400, Rick Macklem wrote:

Jiffin Tony Thottan wrote:

Hi Rick,

There is already support for pNFS in gluster volumes using
nfs-ganesha :
http://gluster.readthedocs.org/en/latest/Features/mount_gluster_volume_using_pnfs/
It supports normal FILE_LAYOUT architecture.

Yes, I am aware of this (although I'll admit I noticed it in the docs after I
posted the email).

Just fyi, if I wanted to set up a (near) production NFSv4.1/pNFS server, this 
would be
fine, but that's not me;-)
I'm interested in extending the NFSv4.1 server I've already written to do
pNFS. Why? Well, mostly because it interests me. (I've never been paid any $$
to do any of the FreeBSD NFS work I've done, so I pretty much do it as a hobby.)



If the result never works or never performs well enough to be useful for
production environments then...oh well, it was an interesting experiment.

Definitely sounds interesting! I don't have much to do with FreeBSD, but
I'm certainly happy to help on the Gluster side if you have any
questions.


+1. I can also help you with pNFS-related queries.


If it ever is useful for (near) production environments, I suspect it would be
users that have set up a FreeBSD NFS server and it is outgrowing what a single
server can handle. In other words, they would come from the FreeBSD NFS server
side and not the GlusterFS side.

Other comments are inline

On 02/06/15 05:18, Rick Macklem wrote:

Hi,

Btw, I do most of the FreeBSD NFSv4 work.
I am interested in trying to use GlusterFS
to build a FreeBSD NFSv4.1 pNFS server.
My hope is that, by directing the NFSv4.1 client
to the host where the file resides, the client will
be able to do I/O on it efficiently via the NFSv3
server. (The new layout type called flex files allows
an NFSv3 server to be a storage/data server for pNFS.)

It would be good to use gluster-nfs as a data server (which is more
tightly coupled with the bricks).
CCing Anand, who has a better idea about the flex files layout architecture.


Flex file is pretty straightforward. It simply allows the NFSv3 server
to be what they call a storage server. All that it does is use a fake
uid/gid that is allowed rw/ro access to the file. (This implies that
the client is responsible for deciding if a user is allowed access to
the file. Not a big deal for AUTH_SYS, since the server trusts the
client's choice of uid/gid anyhow.)
-- As such, the NFSv3 server needs to have a small change applied to
 it to allow access via this fake uid/gid.

This sounds simple enough to do. File a feature request and describe how
you can use this. Patches are welcome too, of course, but we can likely
code something up quickly.

 https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFScomponent=nfs


Basically, the NFSv4.1 server needs to know what the NFSv3 server's
host IP address is and what FH to use for the file on it. (I do see
the code in the NFS xlator for generating an FH, but haven't looked
much yet.) As noted below in the original post.

The FH in Gluster/NFS is based on the volume-id and the GFID. Both are
UUIDs. The volume-id is a unique identifier for the volume, and the GFID
is like a volume-wide inode-nr (volumes consist out of multiple bricks
with their own filesystems, a storage server can host multiple bricks).
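
Both identifiers are visible as xattrs on the backend, in case anyone wants to
poke at them (a sketch; the brick path and file are placeholders):

    # the volume-id lives on the brick root, the GFID on every file/directory
    getfattr -n trusted.glusterfs.volume-id -e hex /bricks/b1
    getfattr -n trusted.gfid -e hex /bricks/b1/dir/file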


It is not required to create the FH in the MDS (which might not be consistent 
on the other gluster-nfs servers).
Instead, create a ds_wire (for me it was a combination of the GFID and the IP 
of the server) and the handle will be created at each

data server, based on the ds_wire, for the I/Os.


There is no way to know which brick should handle a FH. Looking for the
GFID on all the bricks that participate in the volume is a rather
expensive operation (many LOOKUPs). You will always need to find the
location of the file with a request through FUSE.


To do this, I need to be able to poke the
glusterfs server and get the following information:
- The NFSv3 file handle and the IP address for
the host(s) the file lives on.
-- Using this, I am planning on creating a layout
that tells the NFSv4.1 client to use NFSv3 to
do I/O on the file. (What NFSv4.1 calls a storage
server, although some RFCs might call it a data
server.)
- I hope to use the fuse interface for the NFSv4.1 metadata
server.

I don't know how feasible it is to implement a metadata server
using
a FUSE interface.


I guess I'll find out;-). The FreeBSD NFSv4.1 server is kernel based
and exports any local file system that has a VFS/VOP interface. So,
hopefully FUSE won't provide too many surprises.
I am curious to see how well it performs.

I have no idea how FreeBSD handles FUSE, but I'm sure you won't have an
issue with figuring that out. You should be able to get the details
about the location of the file through GETXATTR calls. In NFS-Ganesha,
these two functions parse the output:
  - get_pathinfo_host
  - glfs_get_ds_addr

 These can be found here
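
The raw data those two helpers parse is the pathinfo xattr that gluster
exposes on its mounts; a quick way to look at it (a sketch, the file path is a
placeholder and the output shown is only approximate):

    getfattr -n trusted.glusterfs.pathinfo /mnt/vol/dir/file
    # trusted.glusterfs.pathinfo="(<DISTRIBUTE:vol-dht> <POSIX(/bricks/b1):server1:/bricks/b1/dir/file>)"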

Re: [Gluster-devel] Spurious regression: Checking on 3 test cases

2015-05-26 Thread Jiffin Tony Thottan



On 26/05/15 20:44, Niels de Vos wrote:

On Tue, May 26, 2015 at 10:17:05AM -0400, Jiffin Thottan wrote:


- Original Message -
From: Vijay Bellur vbel...@redhat.com
To: Krishnan Parthasarathi kpart...@redhat.com, Shyam 
srang...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Friday, 22 May, 2015 9:49:15 AM
Subject: Re: [Gluster-devel] Spurious regression: Checking on 3 test cases

On 05/22/2015 07:13 AM, Krishnan Parthasarathi wrote:

Are the following tests in any spurious regression failure lists? or,
noticed by others?

...

2) ./tests/basic/mount-nfs-auth.t
 Run:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/9406/consoleFull


Right now there is no easy fix for the issue. It may require reconstructing
the entire netgroup/export structures used for this feature.

Indeed, my suspicion is that the current structures for
netgroups/exports and the auth_caching is not completely thread safe.
The structures use a dict for gathering entries, and hash some of the
contents of the entry as a key. This makes it quick to check if the
entry has been cached.

There also is a refresh thread that can read the exports and netgroups
file from disk, and creates entries in the dict for the respective
functionality.

The problem (likely) occurs when this happens:

 Step | NFS-client                     | refresh thread
------+--------------------------------+----------------------------
      |                                |
  1   | mount request                  |
      | fetch entry from caching dict  |
      |                                |
  2   |                                | timeout expired
      |                                | read file from disk
      |                                | create and fill a new dict
      |                                | replace dict with new one
      |                                |
  3   |                                | free old dict and entries
      |                                |
  4   | try to use the cache entry     |
      | SEGFAULT                       |
      |                                |

Step 1 and 2 can happen at the same time, but 3 has to come after 1 and
before 4. Step 4 is a very minimal usage of the cache entry, this makes
hitting this problem very rare.

Because the netgroups/exports cache entries are kept in a dict, there is
a lot of type-casting going on. It is not trivial to understand how this
works. My preference would be to modify the structures so that we can do
without the dicts, but that is not straight forward either.

I would like to have a refcount for the entries that are fetched from
the dict. But that means that after type-casting the contents from the
dict, there still is a window where the cache entry can get free'd (or
--refcount'd).

The next best thing to do, is adding a lock on the cache structure
itself. Everywhere the cache is accessed, taking a read-lock would be
needed. Adding an entry to the cache would require a write-lock. Looks
like a decent amount of work, and needs careful checking to not miss any
occurrences. This is currently what I think is the most suitable
approach.


+1 for the detailed explanation.

Testing can probably be done by adding a delay in the mount path. Either
a gdb script or systemtap that delays the execution of the thread that
handles the mount.

Other ideas are very much welcome :)
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance failure wrt trashcan

2015-05-14 Thread Jiffin Tony Thottan



On 14/05/15 12:30, SATHEESARAN wrote:

Hi All,

I was using glusterfs-3.7 beta2 build ( 
glusterfs-3.7.0beta2-0.0.el6.x86_64 )

I have seen rebalance failure in one of the node.

[2015-05-14 12:17:03.695156] E 
[dht-rebalance.c:2368:gf_defrag_settle_hash] 0-vmstore-dht: fix layout 
on /.trashcan/internal_op failed
[2015-05-14 12:17:03.695636] E [MSGID: 109016] 
[dht-rebalance.c:2528:gf_defrag_fix_layout] 0-vmstore-dht: Fix layout 
failed for /.trashcan




The dht_layout is not populated for the trashcan and the contents inside the 
trashcan, because they are created on the server side (brick). We suspect this 
is the reason for this error.
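
A quick way to confirm that on a brick (a sketch; the brick path is a
placeholder and the output is only illustrative):

    # an ordinary directory carries a DHT layout xattr on each brick ...
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b1/somedir

    # ... while .trashcan/internal_op, created purely on the server side, may not
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b1/.trashcan/internal_op
    # trusted.glusterfs.dht: No such attribute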


I had sent a patch to skip rebalance for the trash directory and its 
contents: http://review.gluster.org/#/c/9865/



Does it have any impact ?



CCing dht folks here.


-- sas
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance failure wrt trashcan

2015-05-14 Thread Jiffin Tony Thottan



On 14/05/15 13:01, Nithya Balachandran wrote:

- Jiffin Tony Thottan jthot...@redhat.com wrote:


On 14/05/15 12:30, SATHEESARAN wrote:

Hi All,

I was using glusterfs-3.7 beta2 build (
glusterfs-3.7.0beta2-0.0.el6.x86_64 )
I have seen rebalance failure in one of the node.

[2015-05-14 12:17:03.695156] E
[dht-rebalance.c:2368:gf_defrag_settle_hash] 0-vmstore-dht: fix layout
on /.trashcan/internal_op failed
[2015-05-14 12:17:03.695636] E [MSGID: 109016]
[dht-rebalance.c:2528:gf_defrag_fix_layout] 0-vmstore-dht: Fix layout
failed for /.trashcan


dht_layout is not populated for trashcan and contents inside trashcan,
because they are created at server side (brick).We suspect this as
reason for this error.



I don't think that should be an issue. When is the .trashcan directory created?

Regards,
Nithya



When the volume is started for the first time.

Regards,
Jiffin

I had send a patch to skip rebalance for trash directory and its
contents : http://review.gluster.org/#/c/9865/


Does it have any impact ?


CCing dht folks here.


-- sas
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] New NetBSD regressions

2015-05-10 Thread Jiffin Tony Thottan



On 10/05/15 23:59, Emmanuel Dreyfus wrote:

I ran NetBSD regressions on master and I get many new failures. For
whoever is interested:

./tests/basic/afr/sparse-file-self-heal.t
   Failed test:  25
./tests/basic/geo-replication/marker-xattrs.t
   Failed test:  32
./tests/basic/mount-nfs-auth.t
   Failed test:  45


Is it spurious, or does it fail in every run?
Regards,
Jiffin

./tests/basic/tier/bug-1214222-directories_miising_after_attach_tier.t
   Failed tests:  16-17
./tests/basic/tier/tier.t
   Failed tests:  25-26, 33
./tests/bitrot/br-stub.t
   = hang

This one already known for being a failure:
./tests/basic/quota-anon-fd-nfs.t
   Failed tests:  22, 24, 26, 28, 30, 32, 34, 36




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] new test failure in tests/basic/mount-nfs-auth.t

2015-05-06 Thread Jiffin Tony Thottan



On 06/05/15 07:03, Pranith Kumar Karampuri wrote:

Niels,
 Any ideas?

http://build.gluster.org/job/rackspace-regression-2GB-triggered/8462/consoleFull

mount.nfs: access denied by server while mounting 
slave46.cloud.gluster.org:/patchy
mount.nfs: access denied by server while mounting 
slave46.cloud.gluster.org:/patchy
mount.nfs: access denied by server while mounting 
slave46.cloud.gluster.org:/patchy
dd: closing output file `/mnt/nfs/0/test-big-write': Input/output error
[20:48:27] ./tests/basic/mount-nfs-auth.t ..
not ok 33

Pranith



This is a strange issue which has not been noticed till now.

There are no notable errors in nfs.log when this feature is on.

I think that when the test fails, we should be able to fetch the contents of 
the /var/lib/glusterd/nfs/{exports,netgroups} files to get a clearer picture.
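
Something along these lines could be dropped into the .t near the failing
mount to capture that state (a sketch only; $H0/$V0/$N0 follow the test
framework's usual variable names and the log destination is arbitrary):

    # dump the generated exports/netgroups whenever the authorised mount fails
    if ! mount -t nfs -o vers=3,nolock $H0:/$V0 $N0; then
        {
            echo "### exports file:";   cat /var/lib/glusterd/nfs/exports
            echo "### netgroups file:"; cat /var/lib/glusterd/nfs/netgroups
        } >> /var/log/glusterfs/mount-nfs-auth-debug.log
    fi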


Thanks,
Jiffin



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD regression status upate

2015-04-29 Thread Jiffin Tony Thottan



On 30/04/15 09:18, Pranith Kumar Karampuri wrote:


On 04/30/2015 08:44 AM, Emmanuel Dreyfus wrote:

Hi

Here is NetBSD regression status update for broken tests:

- tests/basic/afr/split-brain-resolution.t
Anuradha Talur is working on it, the change being still under review
http://review.gluster.org/10134

- tests/basic/ec/
This works but with rare spurious faiures. Nobody works on it.
This is not specific to NetBSD, This also happens on Linux. I am 
looking into them one at a time(At the moment ec-3-1.t). I will post 
the updates.
On a related note, I see glupy is failing spuriously as well: 
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4080/consoleFull, 
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4007/consoleFull 



Know anything about it?

Pranith

  - tests/basic/quota-anon-fd-nfs.t
Jiffin Tony Thottan is working on it



It is a misunderstanding, I am not working on this issue. Currently my 
test script regarding anonymous fd in libgfapi fails in glfs_fini() 
on NetBSD. I am checking on that issue.

- tests/basic/mgmt_v3-locks.t
This was fixed, changes are awaiting to be merged:
http://review.gluster.org/10425
http://review.gluster.org/10426
  - tests/basic/tier/tier.t
With the help of Dan Lambright, two bugs were fixed (change merged). A
third one awaits review for master (release-3.7 not yet submitted)
http://review.gluster.org/10411

NB: This change was merged on release-3.7 but not on master:
http://review.gluster.org/10407

- tests/bugs
Mostly uncharted terrirory, we will not work on it for release-3.7

- tests/geo-rep
I started investigating and awaits input from Kotresh Hiremath
Ravishankar.

- tests/features/trash.t
Anoop C S, Jiffin Tony Thottan and I fixed it, changes are merged.




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression failure: NFS segfault

2015-04-15 Thread Jiffin Tony Thottan



On 14/04/15 14:15, Venky Shankar wrote:


On 04/14/2015 02:09 PM, Niels de Vos wrote:

On Tue, Apr 14, 2015 at 11:30:30AM +0530, Venky Shankar wrote:

Got this backtrace in gNFS in one of the regression run:

(gdb) bt
#0  0x7f170f0fc380 in pthread_spin_lock () from /lib64/libpthread.so.0
#1  0x7f170fb85993 in dict_get (this=0x5d292c282e392d30,
key=0x7f16e8008330 \220\247)
 at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/dict.c:390
#2  0x7f1701d86340 in exp_dir_get_netgroup (expdir=0x7f16e80084d0,
netgroup=0x7f16e8008330 \220\247)
 at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/nfs/server/src/exports.c:1213
#3  0x7f1701d8724e in __export_dir_lookup_netgroup (dict=0x7f170d5c3fbc,
key=0x7f16e8008330 \220\247, val=0x7f170d3e2690,
 data=0x7f1703514cb0)
 at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/nfs/server/src/mount3-auth.c:442
#4  0x7f170fb87321 in dict_foreach_match (dict=0x7f170d5c3fbc,
match=0x7f170fb871b4 dict_foreach+5, match_data=0x0,
 action=0x7f1701d8711d __export_dir_lookup_netgroup+16,
action_data=0x7f1703514cb0)
 at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/dict.c:1179
#5  0x7f170fb87214 in dict_foreach_match (dict=0x7f1703514cb0, match=0,
match_data=0x7f170fb87214, action=0x7f1703514c70,
 action_data=0x7f170fb871b4) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/dict.c:1166
#6  0x7f1701d87653 in _mnt3_auth_check_host_in_netgroup
(auth_params=0x7f16fc03bfc0, fh=0x7f16faa66250,
 host=0x7f16fc17dfc0 104.130.192.98, dir=0x0, item=0x7f1703514d60)

Don't know if this is something that's already seen/reported.

Core is saved in slave23:/home/jenkins/dbg
# gdb /build/install/sbin/glusterfs build/install/cores/core.9468

Can you point us to the console log of this regression run so that we
can download the core archive too?


http://build.gluster.org/job/rackspace-regression-2GB-triggered/6672/consoleFull


Thanks,
Jiffin and Niels



We have sent a patch for the same, so that future coredumps can be 
avoided: http://review.gluster.org/#/c/10250/


Thanks,
Jiffin and Niels



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Fwd: Change in ffilz/nfs-ganesha[next]: pNFS code drop enablement and checkpatch warnings fixed

2015-03-29 Thread Jiffin Tony Thottan



On 27/03/15 11:48, Benjamin Kingston wrote:
will enabling pnfs just be like the VFS FSAL with pnfs = true? 
otherwise I'll wait for your docs




It is not required. By default, FSAL_GLUSTER will use pNFS for NFSv4.1.

The only thing you need to be careful about is that the current architecture 
supports a single MDS and multiple DSes.


Also, you are required to run the nfs-ganesha daemon (it will act as a DS) on 
every node which contains a brick in the trusted pool.


The MDS can be any node inside the trusted pool or outside the trusted pool.

For nfs-ganesha, the latest source code (v2.2-rc6) needs to be used.
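
For completeness, the client side of such a setup is just an NFSv4.1 mount
against whichever node you picked as the MDS (a sketch; server name, volume
and mount point are placeholders):

    # NFSv4.1 is what enables the pNFS code paths on the kernel client
    mount -t nfs -o minorversion=1 mds-node:/testvol /mnt/pnfs
    # on newer mount.nfs the same thing can be written as: -o vers=4.1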

Thanks,
Jiffin
On Tue, Mar 24, 2015 at 1:25 AM, Jiffin Tony Thottan 
jthot...@redhat.com mailto:jthot...@redhat.com wrote:




On 24/03/15 12:37, Lalatendu Mohanty wrote:

On 03/23/2015 12:49 PM, Anand Subramanian wrote:

FYI.

GlusterFS vols can now be accessed via NFSv4.1 pNFS protocol
(mount -t nfs -o minorversion=1 ...) from nfs-ganesha 2.2-rc5
onwards.

Note: one fix is to go into libgfapi to fix up using anonymous
fd's in ds_write/make_ds_handle() (Avati's sugeestion that
really helps here).
Once Jiffin or myself get that fix in, a good large file
performance can be seen with pNFS vs V4.

All thanks and credit to Jiffin for his terrific effort in
coding things up quickly and for fixing bugs.

Anand


Great news!

I did a quick check in the docs directory i.e.
https://github.com/gluster/glusterfs/tree/master/doc to see if we
have any documentation about nfs-ganesha or pNFS and glusterfs
integration, but did not find any.

I think without howtos around this will hamper the adoption of
this feature among users. So if we can get some documentation for
this, it will be awesome.


Thanks,
Lala

Documentation for glusterfs-nfs-ganesha integration is already
present  :

https://forge.gluster.org/nfs-ganesha-and-glusterfs-integration

http://blog.gluster.org/2014/09/glusterfs-and-nfs-ganesha-integration/

For pNFS, I will send documentation as soon as possible.

Thanks,
Jiffin




 Forwarded Message 
Subject:Change in ffilz/nfs-ganesha[next]: pNFS code drop
enablement and checkpatch warnings fixed
Date:   Sat, 21 Mar 2015 01:04:30 +0100
From:   GerritHub supp...@gerritforge.com
mailto:supp...@gerritforge.com
Reply-To:   ffilz...@mindspring.com mailto:ffilz...@mindspring.com
To: Anand Subramanian ana...@redhat.com
mailto:ana...@redhat.com
CC: onnfrhvruutnzhnaq.-g...@noclue.notk.org
mailto:onnfrhvruutnzhnaq.-g...@noclue.notk.org



 From Frank Filzffilz...@mindspring.com  mailto:ffilz...@mindspring.com:

Frank Filz has submitted this change and it was merged.

Change subject: pNFS code drop enablement and checkpatch warnings fixed
..


pNFS code drop enablement and checkpatch warnings fixed

Change-Id: Ia8c58dd6d6326f692681f76b96f29c630db21a92
Signed-off-by: Anand Subramanianana...@redhat.com  
mailto:ana...@redhat.com
---
A src/FSAL/FSAL_GLUSTER/ds.c
M src/FSAL/FSAL_GLUSTER/export.c
M src/FSAL/FSAL_GLUSTER/gluster_internal.h
M src/FSAL/FSAL_GLUSTER/handle.c
M src/FSAL/FSAL_GLUSTER/main.c
A src/FSAL/FSAL_GLUSTER/mds.c
6 files changed, 993 insertions(+), 0 deletions(-)



-- 
To view, visithttps://review.gerrithub.io/221683

To unsubscribe, visithttps://review.gerrithub.io/settings

Gerrit-MessageType: merged
Gerrit-Change-Id: Ia8c58dd6d6326f692681f76b96f29c630db21a92
Gerrit-PatchSet: 1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Owner: Anand Subramanianana...@redhat.com  
mailto:ana...@redhat.com
Gerrit-Reviewer: Frank Filzffilz...@mindspring.com  
mailto:ffilz...@mindspring.com
Gerrit-Reviewer:onnfrhvruutnzhnaq.-g...@noclue.notk.org  
mailto:onnfrhvruutnzhnaq.-g...@noclue.notk.org




___
Gluster-devel mailing list
Gluster-devel@gluster.org  mailto:Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel





___
Gluster-devel mailing list
Gluster-devel@gluster.org mailto:Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Fwd: Change in ffilz/nfs-ganesha[next]: pNFS code drop enablement and checkpatch warnings fixed

2015-03-23 Thread Jiffin Tony Thottan

Yup

On 23/03/15 17:01, Humble Devassy Chirammal wrote:

Isnt this one http://review.gluster.org/#/c/9971 ?

--Humble


On Mon, Mar 23, 2015 at 3:11 PM, Niels de Vos nde...@redhat.com 
mailto:nde...@redhat.com wrote:


On Mon, Mar 23, 2015 at 12:49:56PM +0530, Anand Subramanian wrote:
 FYI.

 GlusterFS vols can now be accessed via NFSv4.1 pNFS protocol
(mount -t nfs
 -o minorversion=1 ...) from nfs-ganesha 2.2-rc5 onwards.

 Note: one fix is to go into libgfapi to fix up using anonymous
fd's in
 ds_write/make_ds_handle() (Avati's sugeestion that really helps
here).
 Once Jiffin or myself get that fix in, a good large file
performance can be
 seen with pNFS vs V4.

I could not find the needed change for libgfapi. Could you post
the link
to the review?

Thanks,
Niels


 All thanks and credit to Jiffin for his terrific effort in
coding things up
 quickly and for fixing bugs.

 Anand


  Forwarded Message 
 Subject:  Change in ffilz/nfs-ganesha[next]: pNFS code drop
enablement and
 checkpatch warnings fixed
 Date: Sat, 21 Mar 2015 01:04:30 +0100
 From: GerritHub supp...@gerritforge.com
mailto:supp...@gerritforge.com
 Reply-To: ffilz...@mindspring.com mailto:ffilz...@mindspring.com
 To:   Anand Subramanian ana...@redhat.com
mailto:ana...@redhat.com
 CC: onnfrhvruutnzhnaq.-g...@noclue.notk.org
mailto:onnfrhvruutnzhnaq.-g...@noclue.notk.org



 From Frank Filz ffilz...@mindspring.com
mailto:ffilz...@mindspring.com:

 Frank Filz has submitted this change and it was merged.

 Change subject: pNFS code drop enablement and checkpatch
warnings fixed

..


 pNFS code drop enablement and checkpatch warnings fixed

 Change-Id: Ia8c58dd6d6326f692681f76b96f29c630db21a92
 Signed-off-by: Anand Subramanian ana...@redhat.com
mailto:ana...@redhat.com
 ---
 A src/FSAL/FSAL_GLUSTER/ds.c
 M src/FSAL/FSAL_GLUSTER/export.c
 M src/FSAL/FSAL_GLUSTER/gluster_internal.h
 M src/FSAL/FSAL_GLUSTER/handle.c
 M src/FSAL/FSAL_GLUSTER/main.c
 A src/FSAL/FSAL_GLUSTER/mds.c
 6 files changed, 993 insertions(+), 0 deletions(-)



 --
 To view, visit https://review.gerrithub.io/221683
 To unsubscribe, visit https://review.gerrithub.io/settings

 Gerrit-MessageType: merged
 Gerrit-Change-Id: Ia8c58dd6d6326f692681f76b96f29c630db21a92
 Gerrit-PatchSet: 1
 Gerrit-Project: ffilz/nfs-ganesha
 Gerrit-Branch: next
 Gerrit-Owner: Anand Subramanian ana...@redhat.com
mailto:ana...@redhat.com
 Gerrit-Reviewer: Frank Filz ffilz...@mindspring.com
mailto:ffilz...@mindspring.com
 Gerrit-Reviewer: onnfrhvruutnzhnaq.-g...@noclue.notk.org
mailto:onnfrhvruutnzhnaq.-g...@noclue.notk.org




 ___
 Gluster-users mailing list
 gluster-us...@gluster.org mailto:gluster-us...@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
gluster-us...@gluster.org mailto:gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel