Re: [Gluster-devel] [release-3.8] Need update on the status of "gdeploy packaged for Fedora and EPEL"

2016-05-11 Thread Sachidananda URS
Hi Niels,

On Wed, May 11, 2016 at 11:47 PM, Niels de Vos  wrote:

> Hi guys,
>
> could you reply to this email with a status update of "gdeploy packaged
> for Fedora and EPEL" that is listed on the roadmap?
>
>   https://www.gluster.org/community/roadmap/3.8/
>
> Have you started the process of becoming a package maintainer for
> Fedora? If the package is available for Fedora we can easily include it
> in the CentOS Storage SIG. Several Gluster developers have recently
> gained the packagers permission in the Fedora project, if you need
> assistance or would like someone else to take it on, let us know as soon
> as possible.
>
>
We haven't started the process yet; we have been held up by a couple of
other assignments. If anyone is already a maintainer and willing to pick
this up, I will be glad to help. Otherwise, I will start the process in a
couple of weeks, if that is OK.

-sac
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [smoke failure] Permission denied error while install-pyglupyPYTHON

2016-05-11 Thread Saravanakumar Arumugam

Hi,
I am facing the same error. Can you help?
https://build.gluster.org/job/smoke/27687/console
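
For context, a minimal sketch (plain Python, illustrative only; the DESTDIR
path below is hypothetical) of where the failing install path comes from and
one way a non-root run could relocate it:

#!/usr/bin/env python
# Illustrative only: install-pyglupyPYTHON needs write access to the system
# site-packages directory; a DESTDIR-relocated install avoids that.
import os
import sysconfig

# Directory that Python module installs typically target,
# e.g. /usr/lib/python2.6/site-packages on the build slaves.
purelib = sysconfig.get_path("purelib")
target = os.path.join(purelib, "gluster")  # install-pyglupyPYTHON creates this

if os.access(purelib, os.W_OK):
    print("make install can create %s directly" % target)
else:
    # Without root, relocating the whole install under a scratch directory
    # (hypothetical path) keeps the mkdir somewhere writable:
    destdir = os.path.expanduser("~/glusterfs-destdir")
    print("no write access to %s" % purelib)
    print("suggested: make install DESTDIR=%s" % destdir)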

Thanks,
Saravana

On 05/12/2016 10:44 AM, Raghavendra Gowdappa wrote:

https://build.gluster.org/job/smoke/27674/console

06:09:06 /bin/mkdir: cannot create directory 
`/usr/lib/python2.6/site-packages/gluster': Permission denied
06:09:06 make[6]: *** [install-pyglupyPYTHON] Error 1
06:09:06 make[5]: *** [install-am] Error 2
06:09:06 make[4]: *** [install-recursive] Error 1
06:09:06 make[3]: *** [install-recursive] Error 1
06:09:06 make[2]: *** [install-recursive] Error 1
06:09:06 make[1]: *** [install-recursive] Error 1
06:09:06 make: *** [install-recursive] Error 1
06:09:06 Build step 'Execute shell' marked build as failure
06:09:06 Finished: FAILURE

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [smoke failure] Permission denied error while install-pyglupyPYTHON

2016-05-11 Thread Raghavendra Gowdappa
+gluster-infra

- Original Message -
> From: "Raghavendra Gowdappa" 
> To: "Gluster Devel" 
> Sent: Thursday, May 12, 2016 10:44:07 AM
> Subject: [Gluster-devel] [smoke failure] Permission denied error while
> install-pyglupyPYTHON
> 
> https://build.gluster.org/job/smoke/27674/console
> 
> 06:09:06 /bin/mkdir: cannot create directory
> `/usr/lib/python2.6/site-packages/gluster': Permission denied
> 06:09:06 make[6]: *** [install-pyglupyPYTHON] Error 1
> 06:09:06 make[5]: *** [install-am] Error 2
> 06:09:06 make[4]: *** [install-recursive] Error 1
> 06:09:06 make[3]: *** [install-recursive] Error 1
> 06:09:06 make[2]: *** [install-recursive] Error 1
> 06:09:06 make[1]: *** [install-recursive] Error 1
> 06:09:06 make: *** [install-recursive] Error 1
> 06:09:06 Build step 'Execute shell' marked build as failure
> 06:09:06 Finished: FAILURE
> 
> regards,
> Raghavendra
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [smoke failure] Permission denied error while install-pyglupyPYTHON

2016-05-11 Thread Raghavendra Gowdappa
https://build.gluster.org/job/smoke/27674/console

06:09:06 /bin/mkdir: cannot create directory 
`/usr/lib/python2.6/site-packages/gluster': Permission denied
06:09:06 make[6]: *** [install-pyglupyPYTHON] Error 1
06:09:06 make[5]: *** [install-am] Error 2
06:09:06 make[4]: *** [install-recursive] Error 1
06:09:06 make[3]: *** [install-recursive] Error 1
06:09:06 make[2]: *** [install-recursive] Error 1
06:09:06 make[1]: *** [install-recursive] Error 1
06:09:06 make: *** [install-recursive] Error 1
06:09:06 Build step 'Execute shell' marked build as failure
06:09:06 Finished: FAILURE

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [release-3.8] Need update on the status of "Glusterfind and Bareos Integration"

2016-05-11 Thread Milind Changire

Looks like all relevant patches to glusterfind and libgfapi have made
it to the release-3.8 branch. Once the official release has been done
it can be communicated to Bareos and they can resume testing against
the release.

Milind

On 05/11/2016 11:46 PM, Niels de Vos wrote:

Hi Milind,

could you reply to this email with a status update of "Glusterfind and
Bareos Integration" that is listed on the roadmap?

   https://www.gluster.org/community/roadmap/3.8/

The last status that is listed is "Implementation ready, needs
communication and testing by Bareos developers". Please pass on any of
the missing details so that they can get added to the roadmap and
release notes so that users (or the Bareos devs?) can start testing.

Thanks,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [release-3.8] Need update on the status of "Versioned Documentation"

2016-05-11 Thread Amye Scavarda
On Thu, May 12, 2016 at 9:53 AM, Prashanth Pai  wrote:

>
> >
> > Hi guys,
> >
> > could you reply to this email with a status update of "Versioned
> > Documentation" that is listed on the roadmap?
> >
> >   https://www.gluster.org/community/roadmap/3.8/
> >
> > It is unclear to me how we are addressing the documentation for different
> > versions. If there is no progress on this feature, we'll need to move it
> > out to 3.9/4.0.
>
> RTD supports rendering docs from different git branches. To date, we
> haven't received any PRs for 3.8-specific feature documentation. I suggest
> we consider branching out when that happens. Many features in 3.8 are
> internal to GlusterFS and do not change much user-facing behavior. Branching
> out right now would be cumbersome for contributors, as they would have to
> send PRs to multiple branches on GitHub.
>
> Amye, Humble: What do you guys think?
>

Right now, even cloning the main docs branch is a huge pain due to the size
of the repo.
I think that branching will not solve this problem, and might make it
worse.

Instead, and I offer this with hesitation, we can start an etherpad as a
working doc for the specific 3.8 features that do change user behavior --
but we'll also need to know what features those are.
Once that's complete, we can commit it back into the RTD repo and link it
in the release notes.
It's not ideal, I admit.

Humble?
-- amye


>
> >
> > Thanks,
> > Niels
> >
>



-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [release-3.8] Need update on the status of "Versioned Documentation"

2016-05-11 Thread Prashanth Pai

> 
> Hi guys,
> 
> could you reply to this email with a status update of "Versioned
> Documentation" that is listed on the roadmap?
> 
>   https://www.gluster.org/community/roadmap/3.8/
> 
> It is unclear to me how we are addressing the documentation for different
> versions. If there is no progress on this feature, we'll need to move it
> out to 3.9/4.0.

RTD supports rendering docs from different git branches. To date, we
haven't received any PRs for 3.8-specific feature documentation. I suggest we
consider branching out when that happens. Many features in 3.8 are internal
to GlusterFS and do not change much user-facing behavior. Branching out right
now would be cumbersome for contributors, as they would have to send PRs to
multiple branches on GitHub.

Amye, Humble: What do you guys think?

> 
> Thanks,
> Niels
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Need inputs in multi-threaded self-heal option name change

2016-05-11 Thread Pranith Kumar Karampuri
That sounds better. I will wait till evening for more suggestions and
change the name :-).

Pranith

On Thu, May 12, 2016 at 8:38 AM, Paul Cuzner  wrote:

> cluster.shd-max-heals  ... would work for me :)
>
> On Thu, May 12, 2016 at 3:04 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi
>>    For multi-threaded self-heal, we have introduced a new option called
>> cluster.shd-max-threads, which is confusing people who think that many new
>> threads are going to be launched to perform heals, whereas all it does is
>> increase the number of heals run in parallel by the syncop framework. So
>> I am thinking a better name could be 'cluster.shd-num-parallel-heals', which
>> is a bit lengthy. Wondering if anyone has better suggestions.
>>
>> Pranith
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Need inputs in multi-threaded self-heal option name change

2016-05-11 Thread Paul Cuzner
cluster.shd-max-heals  ... would work for me :)

On Thu, May 12, 2016 at 3:04 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> hi
>    For multi-threaded self-heal, we have introduced a new option called
> cluster.shd-max-threads, which is confusing people who think that many new
> threads are going to be launched to perform heals, whereas all it does is
> increase the number of heals run in parallel by the syncop framework. So
> I am thinking a better name could be 'cluster.shd-num-parallel-heals', which
> is a bit lengthy. Wondering if anyone has better suggestions.
>
> Pranith
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Need inputs in multi-threaded self-heal option name change

2016-05-11 Thread Pranith Kumar Karampuri
hi
   For multi-threaded self-heal, we have introduced a new option called
cluster.shd-max-threads, which is confusing people who think that many new
threads are going to be launched to perform heals, whereas all it does is
increase the number of heals run in parallel by the syncop framework. So
I am thinking a better name could be 'cluster.shd-num-parallel-heals', which
is a bit lengthy. Wondering if anyone has better suggestions.
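
To illustrate the distinction behind the renaming, a loose analogy in plain
Python (not the actual shd/syncop code; all names are made up): the tunable
caps how many heals proceed in parallel, independent of how many threads exist.

#!/usr/bin/env python
# Loose analogy only (not the real self-heal daemon): a fixed number of "heal
# slots" bounds how many heals are in flight, which is what the option tunes;
# it does not mean that many extra OS threads get spawned per heal.
import threading
import time

MAX_PARALLEL_HEALS = 4                       # stand-in for the tunable being renamed
heal_slots = threading.BoundedSemaphore(MAX_PARALLEL_HEALS)

def heal(entry):
    with heal_slots:                         # at most MAX_PARALLEL_HEALS heals at once
        time.sleep(0.01)                     # stand-in for the actual heal work
        print("healed %s" % entry)

workers = [threading.Thread(target=heal, args=("gfid-%d" % i,)) for i in range(16)]
for w in workers:
    w.start()
for w in workers:
    w.join()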

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Idea: Alternate Release process

2016-05-11 Thread Oleksandr Natalenko
My 2 cents on timings etc.

Rationale:

1. deliver new features to users as fast as possible to get the feedback;
2. leave an option of using an LTS branch for those who do not want to update
too often.

Definition:

* "stable release" — .0 tag that receives critical bugfixes and security 
updates for 16 weeks;
* "LTS" — .0 tag that receives critical bugfixes and security updates for 1 
year;

New release happens every 8 weeks. Those 8 weeks include:

* merge window for 3 weeks, during this time all ready features get merged 
into master;
* feature freeze on -rc1 tagging;
* 5 weeks of testing, bugfixing and preparing new features;
* tagging .0 stable release.

Example (imaginary versions and dates):

March 1 — 5.0 release, merge window opens
March 22 — 6.0-rc1 release, merge window closes, feature freeze, new -rc each 
week
May 1 — 6.0 release, merge window opens, 5.0 still gets fixes
May 22 — 7.0-rc1 release
July 1 — 7.0 release, merge window opens, no more fixes for 5.0, 6.0 still 
gets fixes
...
September 1 — 8.0 release, LTS, EOT is Sep 1, next year.
...

Backward compatibility should be guaranteed during the time between two 
consecutive LTSes by extensive use of op-version. The user should be able to 
upgrade from one LTS to another, preferably with no downtime. 
LTS+1 is not guaranteed to be backward compatible with LTS-1.

Pros:

* frequent releases with new features that do not break backward 
compatibility;
* max 2 stable branches supported simultaneously;
* guaranteed LTS branch with guaranteed upgrade to new LTS.

Cons:

* no idea what to do with things that break backward compatibility and that 
couldn't be implemented within op-version constraints (except postponing them 
for too long).
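
To make the proposed cadence easier to follow, a small sketch (plain Python;
the start date and version numbers are the imaginary ones from the example
above) that lays out the 8-week cycle:

#!/usr/bin/env python
# Sketch of the proposed cycle: a 3-week merge window, -rc1 at feature freeze,
# roughly 5 weeks of stabilization, and the .0 release opening the next window.
from datetime import date, timedelta

start = date(2016, 3, 1)                 # imaginary "5.0 released, window opens"
cycle = timedelta(weeks=8)
merge_window = timedelta(weeks=3)

for i, version in enumerate(["6.0", "7.0", "8.0"]):
    window_open = start + i * cycle      # previous .0 release date
    rc1 = window_open + merge_window     # feature freeze, weekly -rc tags after this
    release = window_open + cycle        # .0 tag; the older stable drops off one cycle later
    print("%s: window opens %s, rc1 on %s, release on %s"
          % (version, window_open, rc1, release))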
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] [release-3.8] Need update on the status of "Gluster Tools packaged for Fedora and EPEL"

2016-05-11 Thread Niels de Vos
Hi Aravinda,

could you reply to this email with a status update of "Gluster Tools
packaged for Fedora and EPEL" that is listed on the roadmap?

  https://www.gluster.org/community/roadmap/3.8/

Have you started the process of becoming a package maintainer for
Fedora? If the package is available for Fedora we can easily include it
in the CentOS Storage SIG. Several Gluster developers have recently
gained the packagers permission in the Fedora project, if you need
assistance or would like someone else to take it on, let us know as soon
as possible.

Thanks,
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] [release-3.8] Need update on the status of "gdeploy packaged for Fedora and EPEL"

2016-05-11 Thread Niels de Vos
Hi guys,

could you reply to this email with a status update of "gdeploy packaged
for Fedora and EPEL" that is listed on the roadmap?

  https://www.gluster.org/community/roadmap/3.8/

Have you started the process of becoming a package maintainer for
Fedora? If the package is available for Fedora we can easily include it
in the CentOS Storage SIG. Several Gluster developers have recently
gained the packagers permission in the Fedora project, if you need
assistance or would like someone else to take it on, let us know as soon
as possible.

Thanks,
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] [release-3.8] Need update on the status of "Glusterfind and Bareos Integration"

2016-05-11 Thread Niels de Vos
Hi Milind,

could you reply to this email with a status update of "Glusterfind and
Bareos Integration" that is listed on the roadmap?

  https://www.gluster.org/community/roadmap/3.8/

The last status that is listed is "Implementation ready, needs
communication and testing by Bareos developers". Please pass on any of
the missing details so that they can get added to the roadmap and
release notes so that users (or the Bareos devs?) can start testing.

Thanks,
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] [release-3.8] Need update on the status of "Versioned Documentation"

2016-05-11 Thread Niels de Vos
Hi guys,

could you reply to this email with a status update of "Versioned
Documentation" that is listed on the roadmap?

  https://www.gluster.org/community/roadmap/3.8/

It is unclear to me how we are addressing the documentation for different
versions. If there is no progress on this feature, we'll need to move it
out to 3.9/4.0.

Thanks,
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] [release-3.8] Need update on the status of "Tiering Performance Enhancements"

2016-05-11 Thread Niels de Vos
Hi guys,

could you reply to this email with a status update of "Tiering
Performance Enhancements" that is listed on the roadmap?

  https://www.gluster.org/community/roadmap/3.8/

The bugs listed on the roadmap are for a downstream product, and they
can not be used to track the status of the patches that went in. Could
you please do the following two things:

 1. replace the BZs in the roadmap with Gluster Community ones
 2. add the missing information on the roadmap

If these patches for the performance enhancement are not in the
release-3.8 branch yet, we'll move the feature to 3.9/4.0. So we still
need you to update the roadmap with the correct details :-)

Thanks,
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Soumya Koduri



On 05/11/2016 10:17 PM, Soumya Koduri wrote:



On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote:



- Original Message -

From: "Raghavendra Gowdappa" 
To: "Soumya Koduri" 
Cc: "Gluster Devel" 
Sent: Wednesday, May 11, 2016 4:28:28 PM
Subject: Re: [Gluster-devel] gfapi,readdirplus and forced lookup
after inode_link



- Original Message -

From: "Soumya Koduri" 
To: "Mohammed Rafi K C" , "Raghavendra Gowdappa"
, "Niels de Vos"
, "Raghavendra Talur" , "Poornima
Gurusiddaiah" 
Cc: "+rhs-zteam" , "Rajesh Joseph"
, "jtho >> Jiffin Thottan"

Sent: Wednesday, May 11, 2016 3:55:05 PM
Subject: Re: gfapi, readdirplus and forced lookup after inode_link



On 05/11/2016 12:41 PM, Mohammed Rafi K C wrote:



On 05/11/2016 12:28 PM, Soumya Koduri wrote:

Hi Raghavendra,



On 05/11/2016 12:01 PM, Raghavendra Gowdappa wrote:

Hi all,

There are certain code-paths where the layers managing inodes
(gfapi,
fuse, nfsv3 etc) need to do a lookup even though the inode is found
in inode-table. readdirplus is one such codepath (but not only one).
The reason for doing this is that
1. not all xlators have enough information in readdirp_cbk to make
inode usable (for eg., dht cannot build layout for directory
inodes).
2. There are operations (like dht directory self-healing) which are
needed for maintaining internal consistency and these operations
cannot be done in readdirp.

This forcing of lookup on a linked inode is normally achieved in two
ways:
1. lower layers (like dht) setting entry->inode to NULL (without
entry->inode, interface layers cannot link the inode).


Rafi (CC'ed) had made changes to fix readdirp specific issue
(required
for tiered volumes) as part of
http://review.gluster.org/#/c/14109/ to
do explicit lookup if either entry->inode is set to NULL or inode_ctx
is NULL in gfapi. And I think he had made similar changes for
gluster-NFS as well to provide support for tiered volumes.  I am not
sure if it is handled in common resolver code-path. Have to look at
the code. Rafi shall be able to confirm it.


The changes I made in the three access layers are for inodes which were
linked from lower layers. This means the inodes linked from a lower layer
won't have the inode ctx set in upper xlators, i.e., during resolution we
will send an explicit lookup.

With these changes, during resolve, if inode_ctx is not set then we will
send a lookup; also, if the set_need_lookup flag is set in inode_ctx, we
will send a lookup.


That's correct. I think gfapi and fuse-bridge are handling this
properly i.e., sending a lookup before resuming fop if:
1. No context of xlator (fuse/gfapi) is present in inode.
Or
2. Context is set and it says resolution is necessary.

Note that case 1 is necessary as inode_linking is done in dht during
directory healing. So, other fops might encounter an inode on which
resolution is still in progress and not complete yet. As inode-context
is set in fuse-bridge/gfapi only after a successful lookup, absence of
context can be used as a hint for resolution being in progress.

I am not sure NFSv3 server is doing this.


Case (1) is definitely handled in gluster-NFS. In fact, it looks like all
the required changes were made in these layers (fuse, gNFS, gfapi) as part
of a single BZ# 1297311 [1] to handle the cases you mentioned above.
However, at least when comparing the patches [2], [3] & [4], I do not see
the need_lookup changes in gluster-NFS.

Rafi, do you recall why that is so? Was it intentional?


I think the reason is that in gluster-NFS, nfs_fix_generation(), where the 
inode_ctx is set, is called only in selected places. So, at least in 
readdirp_cbk(), we seem to be just linking inodes but not setting the 
inode_ctx, which would result in a forced lookup the next time any fop is 
performed on that inode.


Rafi,
 Could you please confirm if that was indeed the case?

Thanks,
Soumya



Thanks,
Soumya

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1297311
[2] http://review.gluster.org/#/c/13224/
[3] http://review.gluster.org/#/c/13225
[4] http://review.gluster.org/#/c/13226



Also, on bricks too quota-enforcer and bit-rot does inode-linking. So,
protocol/server also needs to do similar things.



As Du mentioned, readdirp set need_lookup everytime for entries in
readdirp, I saw that code in fuse, and gfapi. But I don't remember
such
code in gNFS.


There are checks for "entry->inode == NULL" in gNFS case as well. Looks
like it was Jiffin who made those changes (again wrt to tiered volumes)
- http://review.gluster.org/#/c/12960/

But all these checks seem to be in only readdirp_cbk codepath where
directory entries are filled. What are other fops which need such
special handling?


There are some codepaths, where linking is done by xlators 

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Soumya Koduri



On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote:



- Original Message -

From: "Raghavendra Gowdappa" 
To: "Soumya Koduri" 
Cc: "Gluster Devel" 
Sent: Wednesday, May 11, 2016 4:28:28 PM
Subject: Re: [Gluster-devel] gfapi, readdirplus and forced lookup after 
inode_link



- Original Message -

From: "Soumya Koduri" 
To: "Mohammed Rafi K C" , "Raghavendra Gowdappa"
, "Niels de Vos"
, "Raghavendra Talur" , "Poornima
Gurusiddaiah" 
Cc: "+rhs-zteam" , "Rajesh Joseph"
, "jtho >> Jiffin Thottan"

Sent: Wednesday, May 11, 2016 3:55:05 PM
Subject: Re: gfapi, readdirplus and forced lookup after inode_link



On 05/11/2016 12:41 PM, Mohammed Rafi K C wrote:



On 05/11/2016 12:28 PM, Soumya Koduri wrote:

Hi Raghavendra,



On 05/11/2016 12:01 PM, Raghavendra Gowdappa wrote:

Hi all,

There are certain code-paths where the layers managing inodes (gfapi,
fuse, nfsv3 etc) need to do a lookup even though the inode is found
in inode-table. readdirplus is one such codepath (but not only one).
The reason for doing this is that
1. not all xlators have enough information in readdirp_cbk to make
inode usable (for eg., dht cannot build layout for directory inodes).
2. There are operations (like dht directory self-healing) which are
needed for maintaining internal consistency and these operations
cannot be done in readdirp.

This forcing of lookup on a linked inode is normally achieved in two
ways:
1. lower layers (like dht) setting entry->inode to NULL (without
entry->inode, interface layers cannot link the inode).


Rafi (CC'ed) had made changes to fix readdirp specific issue (required
for tiered volumes) as part of http://review.gluster.org/#/c/14109/ to
do explicit lookup if either entry->inode is set to NULL or inode_ctx
is NULL in gfapi. And I think he had made similar changes for
gluster-NFS as well to provide support for tiered volumes.  I am not
sure if it is handled in common resolver code-path. Have to look at
the code. Rafi shall be able to confirm it.


The changes I made in the three access layers are for inodes which were
linked from lower layers. This means the inodes linked from a lower layer
won't have the inode ctx set in upper xlators, i.e., during resolution we
will send an explicit lookup.

With these changes, during resolve, if inode_ctx is not set then we will
send a lookup; also, if the set_need_lookup flag is set in inode_ctx, we
will send a lookup.


That's correct. I think gfapi and fuse-bridge are handling this properly i.e., 
sending a lookup before resuming fop if:
1. No context of xlator (fuse/gfapi) is present in inode.
Or
2. Context is set and it says resolution is necessary.

Note that case 1 is necessary as inode_linking is done in dht during directory 
healing. So, other fops might encounter an inode on which resolution is still 
in progress and not complete yet. As inode-context is set in fuse-bridge/gfapi 
only after a successful lookup, absence of context can be used as a hint for 
resolution being in progress.

I am not sure NFSv3 server is doing this.


Case (1) is definitely handled in gluster-NFS. In fact, it looks like all 
the required changes were made in these layers (fuse, gNFS, gfapi) as part 
of a single BZ# 1297311 [1] to handle the cases you mentioned above. 
However, at least when comparing the patches [2], [3] & [4], I do not see 
the need_lookup changes in gluster-NFS.

Rafi, do you recall why that is so? Was it intentional?

Thanks,
Soumya

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1297311
[2] http://review.gluster.org/#/c/13224/
[3] http://review.gluster.org/#/c/13225
[4] http://review.gluster.org/#/c/13226



Also, on bricks too quota-enforcer and bit-rot does inode-linking. So, 
protocol/server also needs to do similar things.



As Du mentioned, readdirp set need_lookup everytime for entries in
readdirp, I saw that code in fuse, and gfapi. But I don't remember such
code in gNFS.


There are checks for "entry->inode == NULL" in gNFS case as well. Looks
like it was Jiffin who made those changes (again wrt to tiered volumes)
- http://review.gluster.org/#/c/12960/

But all these checks seem to be in only readdirp_cbk codepath where
directory entries are filled. What are other fops which need such
special handling?


There are some codepaths, where linking is done by xlators who don't do
resolution. A rough search shows following components:
1. quota enforcer
2. bitrot
3. dht/tier (needed, but currently not doing).
4. trash (for .trash I suppose)

However, none of these are explicitly setting need_lookup. So, there are
windows of time where lookup is partially complete in an xlator graph, but
other fops start using them. I am currently working on a fix to solve the
issue for dht/tier on fuse. We 

[Gluster-devel] Weekly Community Meeting - 11/May/2016

2016-05-11 Thread Kaushal M
The meeting minutes for this weeks meeting are available at

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-11/weekly_community_meeting_11may2016.2016-05-11-12.07.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-11/weekly_community_meeting_11may2016.2016-05-11-12.07.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-11/weekly_community_meeting_11may2016.2016-05-11-12.07.log.html

Next week's meeting will be held at the same time.

Thank you to all who attended today's meeting. See you next week. :)

~kaushal

Meeting summary
---
* Rollcall  (kshlm, 12:07:13)

* Next week's meeting host  (kshlm, 12:09:24)
  * AGREED: rastar is next week's host  (kshlm, 12:10:21)

* Last week's AIs  (kshlm, 12:10:36)

* kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla,
  github  (kshlm, 12:10:45)
  * ACTION: kshlm & csim to set up faux/pseudo user email for gerrit,
bugzilla, github  (kshlm, 12:11:47)

* jdarcy to provide a general Gluster-4.0 status update  (kshlm,
  12:12:29)
  * LINK:
http://www.gluster.org/pipermail/gluster-devel/2016-May/049367.html
(jdarcy, 12:13:14)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-May/049375.html
(post-factum, 12:13:53)

* hagarth to take forward discussion on release and support strategies
  (onto mailing lists or another IRC meeting)  (kshlm, 12:14:45)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-May/049402.html
(post-factum, 12:15:20)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-May/049455.html
(post-factum, 12:16:40)

* amye to check on some blog posts being distorted on blog.gluster.org,
  josferna's post in particular  (kshlm, 12:32:43)
  * LINK: http://planet.gluster.org/ -> joelearnsopensource  (ndevos,
12:33:53)
  * LINK: http://blog.gluster.org/author/josephaug26/   (post-factum,
12:34:17)
  * ACTION: amye to check on some blog posts being distorted on
blog.gluster.org, josferna's post in particular  (kshlm, 12:37:34)

* pranithk1 sends out a summary of release requirements, with some
  ideas  (kshlm, 12:38:02)
  * ACTION: pranithk1 sends out a summary of release requirements,
with some ideas  (kshlm, 12:44:08)

* hagarth will start a discussion on his release-management strategy
  (kshlm, 12:44:34)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-May/049455.html
(kshlm, 12:45:32)

* kshlm to check with the reporter of 3.6 leaks on backport need  (kshlm,
  12:46:43)

* kshlm to check with the reporter of 3.6 leaks on backport need  (kshlm,
  12:48:34)
  * ACTION: kshlm to check with the reporter of 3.6 leaks on backport need
(kshlm, 12:49:06)

* GlusterFS-3.8  (kshlm, 12:50:51)
  * LINK: https://www.gluster.org/community/roadmap/3.8/ has been
updated with the current features  (ndevos, 12:51:31)
  * LINK: https://www.gluster.org/community/roadmap/3.9/ now has the
removed features, that is a placeholder page until we know a 3.9/4.0
schedule  (ndevos, 12:51:57)

* GlusterFS-4.0  (kshlm, 12:57:25)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-May/049436.html
(post-factum, 12:57:46)

* Open floor  (kshlm, 13:07:13)

Meeting ended at 13:17:41 UTC.




Action Items

* kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla,
  github
* amye to check on some blog posts being distorted on blog.gluster.org,
  josferna's post in particular
* pranithk1 sends out a summary of release requirements, with some
  ideas
* kshlm to check with the reporter of 3.6 leaks on backport need




Action Items, by person
---
* amye
  * amye to check on some blog posts being distorted on
blog.gluster.org, josferna's post in particular
* kshlm
  * kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla,
github
  * kshlm to check with the reporter of 3.6 leaks on backport need
* **UNASSIGNED**
  * pranithk1 sends out a summary of release requirements, with some
ideas




People Present (lines said)
---
* kshlm (145)
* ndevos (49)
* post-factum (34)
* jdarcy (32)
* kkeithley (11)
* aravindavk (10)
* amye (5)
* zodbot (3)
* msvbhat (2)
* jiffin (1)
* nigelb (1)
* overclk (1)
* atinm (1)
* glusterbot (1)
* karthik___ (1)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 'mv' of ./tests/bugs/posix/bug-1113960.t causes 100% CPU

2016-05-11 Thread Michael Scherer
On Wednesday, 11 May 2016 at 15:39 +0200, Niels de Vos wrote:
> Could someone look into this busy loop?
>   https://paste.fedoraproject.org/365207/29732171/raw/
> 
> This was happening in a regression-test burn-in run, occupying a Jenkins
> slave for 2+ days:
>   https://build.gluster.org/job/regression-test-burn-in/936/
>   (run with commit f0ade919006b2581ae192f997a8ae5bacc2892af from master)
> 
> A coredump of the mount process is available from here:
>   http://slave20.cloud.gluster.org/archived_builds/crash.tar.gz
> 
> Thanks misc for reporting and gathering the debugging info.

There is the same problem on slave0
https://build.gluster.org/job/regression-test-burn-in/931/

run with abd27041ebcb3c6ee897ad253fc248e3bb1823e6

Core on http://slave0.cloud.gluster.org/archived_builds/crash.tar.gz

I am rebooting both builders in 5 minutes.
-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




signature.asc
Description: This is a digitally signed message part
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] 'mv' of ./tests/bugs/posix/bug-1113960.t causes 100% CPU

2016-05-11 Thread Niels de Vos
Could someone look into this busy loop?
  https://paste.fedoraproject.org/365207/29732171/raw/

This was happening in a regression-test burn-in run, occupying a Jenkins
slave for 2+ days:
  https://build.gluster.org/job/regression-test-burn-in/936/
  (run with commit f0ade919006b2581ae192f997a8ae5bacc2892af from master)

A coredump of the mount process is available from here:
  http://slave20.cloud.gluster.org/archived_builds/crash.tar.gz

Thanks misc for reporting and gathering the debugging info.
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Raghavendra Gowdappa


- Original Message -
> From: "Raghavendra Gowdappa" 
> To: "Soumya Koduri" 
> Cc: "Gluster Devel" 
> Sent: Wednesday, May 11, 2016 4:28:28 PM
> Subject: Re: [Gluster-devel] gfapi,   readdirplus and forced lookup after 
> inode_link
> 
> 
> 
> - Original Message -
> > From: "Soumya Koduri" 
> > To: "Mohammed Rafi K C" , "Raghavendra Gowdappa"
> > , "Niels de Vos"
> > , "Raghavendra Talur" , "Poornima
> > Gurusiddaiah" 
> > Cc: "+rhs-zteam" , "Rajesh Joseph"
> > , "jtho >> Jiffin Thottan"
> > 
> > Sent: Wednesday, May 11, 2016 3:55:05 PM
> > Subject: Re: gfapi, readdirplus and forced lookup after inode_link
> > 
> > 
> > 
> > On 05/11/2016 12:41 PM, Mohammed Rafi K C wrote:
> > >
> > >
> > > On 05/11/2016 12:28 PM, Soumya Koduri wrote:
> > >> Hi Raghavendra,
> > >>
> > >>
> > >>
> > >> On 05/11/2016 12:01 PM, Raghavendra Gowdappa wrote:
> > >>> Hi all,
> > >>>
> > >>> There are certain code-paths where the layers managing inodes (gfapi,
> > >>> fuse, nfsv3 etc) need to do a lookup even though the inode is found
> > >>> in inode-table. readdirplus is one such codepath (but not only one).
> > >>> The reason for doing this is that
> > >>> 1. not all xlators have enough information in readdirp_cbk to make
> > >>> inode usable (for eg., dht cannot build layout for directory inodes).
> > >>> 2. There are operations (like dht directory self-healing) which are
> > >>> needed for maintaining internal consistency and these operations
> > >>> cannot be done in readdirp.
> > >>>
> > >>> This forcing of lookup on a linked inode is normally achieved in two
> > >>> ways:
> > >>> 1. lower layers (like dht) setting entry->inode to NULL (without
> > >>> entry->inode, interface layers cannot link the inode).
> > >>
> > >> Rafi (CC'ed) had made changes to fix readdirp specific issue (required
> > >> for tiered volumes) as part of http://review.gluster.org/#/c/14109/ to
> > >> do explicit lookup if either entry->inode is set to NULL or inode_ctx
> > >> is NULL in gfapi. And I think he had made similar changes for
> > >> gluster-NFS as well to provide support for tiered volumes.  I am not
> > >> sure if it is handled in common resolver code-path. Have to look at
> > >> the code. Rafi shall be able to confirm it.
> > >
> > > The changes I made in the three access layers are for inodes which was
> > > linked from lower layers. Which means the inodes linked from lower layer
> > > won't have inode ctx set in upper xlators, ie, during resolving we will
> > > send explicit lookup.
> > >
> > > With this changes during resolve if inode_ctx is not set then it will
> > > send a lookup + if set_need_lookup flag is set in inode_ctx, then also
> > > we will send a lookup

That's correct. I think gfapi and fuse-bridge are handling this properly i.e., 
sending a lookup before resuming fop if:
1. No context of xlator (fuse/gfapi) is present in inode.
Or
2. Context is set and it says resolution is necessary.

Note that case 1 is necessary as inode_linking is done in dht during directory 
healing. So, other fops might encounter an inode on which resolution is still 
in progress and not complete yet. As inode-context is set in fuse-bridge/gfapi 
only after a successful lookup, absence of context can be used as a hint for 
resolution being in progress.

I am not sure NFSv3 server is doing this.

Also, on bricks too quota-enforcer and bit-rot does inode-linking. So, 
protocol/server also needs to do similar things.
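
To make that rule concrete, a minimal model in plain Python (hypothetical
types and names, not the actual fuse-bridge/gfapi code) of the "lookup before
resuming the fop" decision described above:

#!/usr/bin/env python
# Simplified model of the resolver rule: resume the fop directly only when this
# layer's inode context exists and does not ask for a fresh lookup.

class InodeCtx(object):
    def __init__(self, need_lookup=False):
        self.need_lookup = need_lookup   # e.g. set for every entry by readdirp

class Inode(object):
    def __init__(self):
        self.ctx = None                  # set only after a successful lookup here

def do_lookup(inode):
    print("issuing explicit lookup")

def resolve_and_resume(inode, fop):
    ctx = inode.ctx
    if ctx is None or ctx.need_lookup:
        # No ctx: the inode may have been linked by a lower xlator (e.g. dht
        # directory heal), so resolution is still in progress in this layer.
        # need_lookup: the ctx explicitly asks for a fresh lookup first.
        do_lookup(inode)
        inode.ctx = InodeCtx(need_lookup=False)
    fop(inode)                           # safe to resume the fop now

def sample_fop(inode):
    print("fop resumed")

if __name__ == "__main__":
    ino = Inode()                        # freshly linked by a lower layer, no ctx yet
    resolve_and_resume(ino, sample_fop)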

> > >
> > > As Du mentioned, readdirp set need_lookup everytime for entries in
> > > readdirp, I saw that code in fuse, and gfapi. But I don't remember such
> > > code in gNFS.
> > 
> > There are checks for "entry->inode == NULL" in gNFS case as well. Looks
> > like it was Jiffin who made those changes (again wrt to tiered volumes)
> > - http://review.gluster.org/#/c/12960/
> > 
> > But all these checks seem to be in only readdirp_cbk codepath where
> > directory entries are filled. What are other fops which need such
> > special handling?
> 
> There are some codepaths, where linking is done by xlators who don't do
> resolution. A rough search shows following components:
> 1. quota enforcer
> 2. bitrot
> 3. dht/tier (needed, but currently not doing).
> 4. trash (for .trash I suppose)
> 
> However, none of these are explicitly setting need_lookup. So, there are
> windows of time where lookup is partially complete in an xlator graph, but
> other fops start using them. I am currently working on a fix to solve the
> issue for dht/tier on fuse. We have to do similar work on other
> xlators/interface layers too.
> 
> > 
> > Thanks,
> > Soumya
> > 
> > 
> > >
> > > Regards
> > > Rafi KC
> > >
> > >>
> > >>
> > >> Thanks,
> > >> Soumya
> > >>
> > >>> 2. interface layers (at 

Re: [Gluster-devel] snapshot tests cause coredump in dmeventd

2016-05-11 Thread Rajesh Joseph
On Tue, May 10, 2016 at 2:25 PM, Michael Scherer 
wrote:

> On Monday, 9 May 2016 at 16:16 +0530, Rajesh Joseph wrote:
> > On Wed, May 4, 2016 at 7:19 PM, Michael Scherer  wrote:
> >
> > > On Wednesday, 4 May 2016 at 10:42 +0530, Rajesh Joseph wrote:
> > > > On Mon, May 2, 2016 at 3:04 AM, Niels de Vos  wrote:
> > > >
> > > > > It seems that a snapshot regression test managed to trigger a core
> > > > > dump in dmeventd. Because our regression tests check for cores, the
> > > > > test was marked as a failure (mostly a 'good' thing).
> > > > >
> > > > > I'd appreciate it if one of the developers of snapshot can have a
> > > > > look at the core from dmeventd and maybe check with the LVM developers
> > > > > if this is a known bug. The core can be downloaded from the bottom of
> > > > > this link:
> > > > >
> > > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/20315/console
> > > > >
> > > > >
> > > > The slave machine is not accessible, therefore I could not download the
> > > > core file. Is it possible to get access to this machine?
> > >
> > > I opened the firewall for that port; I didn't notice it was not open before.
> > > So you can now download the tarball.
> > >
> > > I will add it to the automation to open the others too.
> > >
> >
> > Thanks Michael for looking into this. The machine is still not
> accessible.
>
> It seems something cleaned up the firewall rules (potentially a reboot).
>
> I was on PTO for the last few days, so I didn't see it earlier.
>
> I opened it and will fix the root cause.
>
>
Thanks Michael. It's working now.

-Rajesh
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Raghavendra Gowdappa


- Original Message -
> From: "Soumya Koduri" 
> To: "Mohammed Rafi K C" , "Raghavendra Gowdappa" 
> , "Niels de Vos"
> , "Raghavendra Talur" , "Poornima 
> Gurusiddaiah" 
> Cc: "+rhs-zteam" , "Rajesh Joseph" 
> , "jtho >> Jiffin Thottan"
> 
> Sent: Wednesday, May 11, 2016 3:55:05 PM
> Subject: Re: gfapi, readdirplus and forced lookup after inode_link
> 
> 
> 
> On 05/11/2016 12:41 PM, Mohammed Rafi K C wrote:
> >
> >
> > On 05/11/2016 12:28 PM, Soumya Koduri wrote:
> >> Hi Raghavendra,
> >>
> >>
> >>
> >> On 05/11/2016 12:01 PM, Raghavendra Gowdappa wrote:
> >>> Hi all,
> >>>
> >>> There are certain code-paths where the layers managing inodes (gfapi,
> >>> fuse, nfsv3 etc) need to do a lookup even though the inode is found
> >>> in inode-table. readdirplus is one such codepath (but not only one).
> >>> The reason for doing this is that
> >>> 1. not all xlators have enough information in readdirp_cbk to make
> >>> inode usable (for eg., dht cannot build layout for directory inodes).
> >>> 2. There are operations (like dht directory self-healing) which are
> >>> needed for maintaining internal consistency and these operations
> >>> cannot be done in readdirp.
> >>>
> >>> This forcing of lookup on a linked inode is normally achieved in two
> >>> ways:
> >>> 1. lower layers (like dht) setting entry->inode to NULL (without
> >>> entry->inode, interface layers cannot link the inode).
> >>
> >> Rafi (CC'ed) had made changes to fix readdirp specific issue (required
> >> for tiered volumes) as part of http://review.gluster.org/#/c/14109/ to
> >> do explicit lookup if either entry->inode is set to NULL or inode_ctx
> >> is NULL in gfapi. And I think he had made similar changes for
> >> gluster-NFS as well to provide support for tiered volumes.  I am not
> >> sure if it is handled in common resolver code-path. Have to look at
> >> the code. Rafi shall be able to confirm it.
> >
> > The changes I made in the three access layers are for inodes which was
> > linked from lower layers. Which means the inodes linked from lower layer
> > won't have inode ctx set in upper xlators, ie, during resolving we will
> > send explicit lookup.
> >
> > With this changes during resolve if inode_ctx is not set then it will
> > send a lookup + if set_need_lookup flag is set in inode_ctx, then also
> > we will send a lookup
> >
> > As Du mentioned, readdirp set need_lookup everytime for entries in
> > readdirp, I saw that code in fuse, and gfapi. But I don't remember such
> > code in gNFS.
> 
> There are checks for "entry->inode == NULL" in gNFS case as well. Looks
> like it was Jiffin who made those changes (again wrt to tiered volumes)
>   - http://review.gluster.org/#/c/12960/
> 
> But all these checks seem to be in only readdirp_cbk codepath where
> directory entries are filled. What are other fops which need such
> special handling?

There are some codepaths, where linking is done by xlators who don't do 
resolution. A rough search shows following components:
1. quota enforcer
2. bitrot
3. dht/tier (needed, but currently not doing).
4. trash (for .trash I suppose)

However, none of these are explicitly setting need_lookup. So, there are 
windows of time where lookup is partially complete in an xlator graph, but 
other fops start using them. I am currently working on a fix to solve the issue 
for dht/tier on fuse. We have to do similar work on other xlators/interface 
layers too.

> 
> Thanks,
> Soumya
> 
> 
> >
> > Regards
> > Rafi KC
> >
> >>
> >>
> >> Thanks,
> >> Soumya
> >>
> >>> 2. interface layers (at least fuse) setting a flag in inode to let
> >>> resolver know that a lookup is to be done before resuming the fop.
> >>>
> >>> I am sure that fuse-bridge does this correctly. Need inputs from you
> >>> about the behavior of other interface layers like gfapi, nfsv3 etc.
> >>>
> >>> regards,
> >>> Raghavendra
> >>>
> >
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Introduction and demo of Gluster Eventing Feature

2016-05-11 Thread Aravinda

Hi,

Yesterday I recorded a demo and wrote a blog post about Gluster Eventing 
feature.


http://aravindavk.in/blog/10-mins-intro-to-gluster-eventing/

Comments and Suggestions Welcome.

--
regards
Aravinda
http://aravindavk.in

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Memory usage of glusterfsd increases when directories are created

2016-05-11 Thread Chaloulos, Klearchos (Nokia - GR/Athens)
Hello,

We have found that the memory usage of glusterfsd is increasing when 
directories are created. Environment details can be seen in the attached 
environment.txt file, and the translator graph can be seen in the attached file 
translator_graph.txt. In summary, we have a setup of two gluster bricks using 
replicate mode, and a third gluster peer for quorum. We use glusterfs version 
3.6.9. We build glusterfs directly from source code.

The test script creates many directories on the /mnt/export volume from the 
storage client side. It creates directories of the path 
"/mnt/export/dir.X", so all directories are under the same folder. In each 
directory a file of a random size between 1024 and 11204 is created.
The number of directories created varied from 1 to 20 for different 
runs. Before each run all directories were deleted.

A graph of the memory usage can be seen here: http://imgur.com/z8LOskQ

The X-axis shows the number of directories. The number varies because 
before each run all directories are deleted. So first 8 directories were 
created, then they were deleted and 16 were created then deleted and 20 
were created, then deleted and 4 were created etc. Due to the amount of 
data the resolution of the X-axis is not fine enough to show the exact 
variation, but the general idea is that thousands of directories were created 
and deleted.

The Y-axis shows the RSS memory usage of the glusterfsd process of this volume 
on the two storage nodes (SN-0 and SN-1), as measured by 'pidstat -urdh -p 
'.
We can see that the memory usage is continually increasing.


1) Is this increase normal and expected? I can imagine that glusterfsd would 
allocate more memory if more files or directories are created, but is there an 
upper limit, or will it increase forever?

2) Can we control the memory allocation by any configuration parameters?

3) Is there a particular translator that controls this memory allocation?

We did another test, where only files were created. In this case the memory 
usage reached an upper limit. Graph is available here: http://imgur.com/zDtkbml

So if only files are created there is an upper limit on memory allocation. Is 
the case of directory creation different, and why? Or is this a memory leak? Has 
anyone else seen this behavior?
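
As a side note, a small sketch like the following (plain Python; the input file
name is hypothetical, one RSS value in KB per line) can be used to check
whether the pidstat RSS samples are still growing at the end of a run or have
plateaued:

#!/usr/bin/env python
# Rough helper, illustrative only: compare total RSS growth with growth over
# the final quarter of the samples to distinguish "still climbing" from
# "reached an upper limit".

def growth(samples):
    return samples[-1] - samples[0]

with open("glusterfsd_rss_sn0.txt") as f:          # hypothetical file name
    rss = [float(line.split()[0]) for line in f if line.strip()]

tail = rss[len(rss) * 3 // 4:]                     # final quarter of the run
print("total growth:       %.0f KB" % growth(rss))
print("growth in last 25%%: %.0f KB" % growth(tail))
if growth(tail) > 0.05 * growth(rss):              # arbitrary threshold
    print("RSS still appears to be growing")
else:
    print("RSS appears to have plateaued")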

Best regards,
Klearchos
# gluster --version
glusterfs 3.6.9 built on Apr 14 2016 12:31:43
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General 
Public License.

# uname -a
Linux SN-0 4.1.20-pc64-distro.git-v2.11-sctpmh #1 SMP Thu Apr 14 11:59:31 UTC 
2016 x86_64 GNU/Linux

# gluster volume info export
 
Volume Name: export
Type: Replicate
Volume ID: f5b5173d-f7d3-434c-a9da-fff6b617e21c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: XXX.XXX.XXX.7:/mnt/bricks/export/brick
Brick2: XXX.XXX.XXX.10:/mnt/bricks/export/brick
Options Reconfigured:
cluster.server-quorum-type: server
cluster.consistent-metadata: on
server.allow-insecure: on
cluster.server-quorum-ratio: 51%

# gluster volume status export
Status of volume: export
Gluster process                                  Port    Online  Pid
------------------------------------------------------------------------------
Brick XXX.XXX.XXX.7:/mnt/bricks/export/brick     49153   Y       11881
Brick XXX.XXX.XXX.10:/mnt/bricks/export/brick    49153   Y       25649
NFS Server on localhost                          N/A     N       N/A
Self-heal Daemon on localhost                    N/A     Y       7413
NFS Server on XXX.XXX.XXX.4                      N/A     N       N/A
Self-heal Daemon on XXX.XXX.XXX.4                N/A     Y       16504
NFS Server on XXX.XXX.XXX.10                     N/A     N       N/A
Self-heal Daemon on XXX.XXX.XXX.10               N/A     N       N/A
 
Task Status of Volume export
--
There are no active volume tasks

# gluster volume list
volume1
export
volume2
volume3
volume4
volume5

== /mnt/export: Client graph ==

Final graph:
+--+
  1: volume export-client-0
  2: type protocol/client
  3: option ping-timeout 42
  4: option remote-host 169.254.0.14
  5: option remote-subvolume /mnt/bricks/export/brick
  6: option transport-type socket
  7: option send-gids true
  8: end-volume
  9:
 10: volume export-client-1
 11: type protocol/client
 12: option ping-timeout 42
 13: option remote-host 169.254.0.7
 14: option remote-subvolume /mnt/bricks/export/brick
 15: option transport-type socket
 16: option send-gids true
 17: end-volume
 18:
 19: volume export-replicate-0
 20: type cluster/replicate
 

Re: [Gluster-devel] [Gluster-users] Show and Tell sessions for Gluster 4.0

2016-05-11 Thread Ric Wheeler

On 05/10/2016 12:52 PM, Niels de Vos wrote:

I prefer an additional meeting, recorded and all. Not everyone would
need to join the meeting about the new feature, reading the notes or
watching a recording would work for many.


I am a big fan of recording the meetings since we can clearly never accommodate 
all interested people's time zone needs.


Looking forward to seeing the first ones :)

Ric

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Maintainership

2016-05-11 Thread Aravinda

I would like to propose Kotresh. +2 from my side.

regards
Aravinda

On 05/11/2016 10:25 AM, Venky Shankar wrote:

Hello,

I want to relinquish maintainership of the changelog[1] translator.

For the uninformed, changelog xlator is the supporting infrastructure for 
features
such as Geo-replication, Bitrot and glusterfind. However, this would eventually 
be
replaced by FDL[2] when it's ready and the dependent components either integrate
with the new (and improved) infrastructure or get redesigned.

Interested folks please reply (all) to this email. Although I would prefer 
folks who
have contributed to this feature, it does not mean others cannot speak up. 
There's
always a need for a backup maintainer who can in the course of time contribute 
and
become primary maintainer in the future.

[1]: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L76
[2]: https://github.com/gluster/glusterfs/tree/master/xlators/experimental/fdl



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel