[Gluster-devel] Checklist for eventsapi component for upstream release

2016-09-02 Thread Pranith Kumar Karampuri
hi Aravinda,
   I think the existing tests should be good enough to test this
functionality, but I could be wrong. Maybe we should update
https://public.pad.fsfe.org/p/gluster-component-release-checklist if
any more things need to be added.
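
For what it's worth, a quick manual smoke test for eventsapi could look
roughly like this (just a sketch; the receiver URL and volume name are made
up, and it assumes the gluster-eventsapi CLI shipped with the feature):

    # Register a test webhook receiver and check eventsd on all peers
    gluster-eventsapi webhook-add http://192.0.2.10:9000/listen
    gluster-eventsapi status

    # Trigger an event and confirm the receiver gets a JSON POST for it
    gluster volume start testvol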

-- 
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Checklist for snapshot component for upstream release

2016-09-02 Thread Pranith Kumar Karampuri
hi,
Did you get a chance to decide on the tests that need to be done
before doing a release for the snapshot component? Could you let me know
who will be providing the list?

I can update it at
https://public.pad.fsfe.org/p/gluster-component-release-checklist

-- 
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Checklist for glusterfs packaging/build for upstream release

2016-09-02 Thread Pranith Kumar Karampuri
hi,
  In the past we have had issues where some functionality didn't work
on Debian/Ubuntu because the 'glfsheal' binary was not packaged. As
packaging/build maintainers on the different distros, what do you suggest
we do to make sure we catch such mistakes before releases are made?

Please post suggestions here so that we can add them to
https://public.pad.fsfe.org/p/gluster-component-release-checklist once the
discussion is complete.

-- 
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Checklist for ganesha FSAL plugin integration testing for 3.9

2016-09-02 Thread Pranith Kumar Karampuri
hi,
Did you get a chance to decide on the nfs-ganesha integration
tests that need to be run before doing an upstream gluster release? Could
you let me know who will be providing the list?

I can update it at
https://public.pad.fsfe.org/p/gluster-component-release-checklist

-- 
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for smb integration testing for 3.9

2016-09-02 Thread Pranith Kumar Karampuri
Thanks Raghavendra. Once you are done, please note on the checklist that it
is complete.

On Sat, Sep 3, 2016 at 12:57 AM, Raghavendra Talur <rta...@redhat.com>
wrote:

> I have edited the etherpad with some tests. Please review and add other
> things you recommend.
>
> Thanks,
> Raghavendra Talur
>
> On Sat, Sep 3, 2016 at 12:46 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi,
>> Did you get a chance to decide on the smb integration tests that
>> need to be run before doing an upstream gluster release? Could you let me
>> know who will be providing with the list?
>>
>> I can update it at https://public.pad.fsfe.org/p/
>> gluster-component-release-checklist
>> --
>> Aravinda & Pranith
>>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Anyone wants to maintain FreeBSD port of gluster?

2016-09-02 Thread Pranith Kumar Karampuri
hi,
 As per the MAINTAINERS file this port doesn't have a maintainer. If you
want to take up the responsibility of maintaining the port, please let us
know how you want to go about doing it and what the checklist of things to
be done before every upstream release should be. It is extremely healthy to
have more than one maintainer for the port, so even if multiple people have
already responded and you still want to be part of it, don't feel shy about
responding. The more the merrier.

The Gluster 3.9 release is one month away and it would be nice if you could
validate how things are on FreeBSD for this release.

-- 
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Checklist for Wireshark dissectors

2016-09-02 Thread Pranith Kumar Karampuri
Niels,
  Does Wireshark also handle new FOPs like seek, compound, getactivelk and
setactivelk? What are the other things that you check before the
release?
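
As a rough manual check (assuming the dissector still registers the
'glusterfs' display filter), one could capture some traffic that exercises
the new FOPs and see whether they decode:

    tshark -r gluster-io.pcap -Y glusterfs \
        | grep -Ei 'seek|compound|getactivelk|setactivelk'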

We can update more at
https://public.pad.fsfe.org/p/gluster-component-release-checklist
-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Checklist for gluster-swift for upstream release

2016-09-02 Thread Pranith Kumar Karampuri
hi,
Did you get a chance to decide on the gluster-swift integration
tests that need to be run before doing an upstream gluster release? Could
you let me know who will be providing the list?

I can update it at
https://public.pad.fsfe.org/p/gluster-component-release-checklist


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Checklist for GlusterFS Hadoop HCFS plugin for upstream release

2016-09-02 Thread Pranith Kumar Karampuri
hi Jay,
  Are there any tests that are done before releasing glusterfs upstream
to make sure the plugin is stable? Could you let us know the process, so
that we can add it to
https://public.pad.fsfe.org/p/gluster-component-release-checklist

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Anyone wants to maintain Mac-OSX port of gluster?

2016-09-02 Thread Pranith Kumar Karampuri
hi,
 As per the MAINTAINERS file this port doesn't have a maintainer. If you
want to take up the responsibility of maintaining the port, please let us
know how you want to go about doing it and what the checklist of things to
be done before every upstream release should be. It is extremely healthy to
have more than one maintainer for the port, so even if multiple people have
already responded and you still want to be part of it, don't feel shy about
responding. The more the merrier.

The Gluster 3.9 release is one month away and it would be nice if you could
validate how things are on Mac-OSX for this release.

--
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for GlusterFS Hadoop HCFS plugin for upstream release

2016-09-06 Thread Pranith Kumar Karampuri
Thanks for this information. We will follow up on this one.
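
For reference, the wordcount first pass Jay mentions below could be scripted
roughly like this (assuming the glusterfs-hadoop plugin is on the Hadoop
classpath and registers the glusterfs:// scheme; all paths are made up):

    hadoop fs -mkdir -p glusterfs:///user/test/in
    hadoop fs -put /etc/services glusterfs:///user/test/in/
    hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
        wordcount glusterfs:///user/test/in glusterfs:///user/test/out
    hadoop fs -cat glusterfs:///user/test/out/part-r-00000 | head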

On Sun, Sep 4, 2016 at 3:30 AM, Jay Vyas <jayunit...@gmail.com> wrote:

> Hi pranith;
>
> The bigtop  smoke tests are a good way to go.  You can run them against
> pig hive and so on.
>
> In general running a simple mapreduce job like wordcount is a good first
> pass start.
>
> Many other communities like orangefs and so on run Hadoop tests on
> alternative file systems, you can collaborate with them.
>
> There is an hcfs wiki page you can contribute to on Hadoop.apache.org
> <http://hadoop.apache.org> where we detail Hadoop interoperability
>
>
>
> On Sep 2, 2016, at 3:33 PM, Pranith Kumar Karampuri <pkara...@redhat.com>
> wrote:
>
> hi Jay,
>   Are there any tests that are done before releasing glusterfs
> upstream to make sure the plugin is stable? Could you let us know the
> process, so that we can add it to https://public.pad.fsfe.org/p/
> gluster-component-release-checklist
>
> --
> Pranith
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for snapshot component for upstream release

2016-09-06 Thread Pranith Kumar Karampuri
Thanks Avra, added the list to the etherpad.
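
For reference, the automated part of the list below can be run from a
glusterfs source checkout roughly like this (a sketch; it assumes the usual
regression-test prerequisites are installed):

    prove -vf tests/basic/volume-snapshot.t \
              tests/basic/volume-snapshot-clone.t \
              tests/basic/volume-snapshot-xml.t
    prove -vf tests/bugs/snapshot/*.t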

On Tue, Sep 6, 2016 at 9:30 AM, Avra Sengupta <aseng...@redhat.com> wrote:

> Hi Pranith,
>
> The following set of automated and manual tests need to pass before doing
> a release for snapshot component:
> 1. The entire snapshot regression suite present in the source repository,
> which as of now consist of:
>
> a. ./basic/volume-snapshot.t
> b. ./basic/volume-snapshot-clone.t
> c. ./basic/volume-snapshot-xml.t
> d. All tests present in ./bugs/snapshot
>
> 2. Manual test of using snapshot scheduler.
> 3. Till the eventing test framework is integrated with the regression
> suite, manual test of all 28 snapshot events.
>
> Regards,
> Avra
>
>
> On 09/03/2016 12:26 AM, Pranith Kumar Karampuri wrote:
>
> hi,
> Did you get a chance to decide on the tests that need to be done
> before doing a release for snapshot component? Could you let me know who
> will be providing with the list?
>
> I can update it at https://public.pad.fsfe.org/p/
> gluster-component-release-checklist
>
> --
> Aravinda & Pranith
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for gluster-swift for upstream release

2016-09-06 Thread Pranith Kumar Karampuri
On Tue, Sep 6, 2016 at 11:23 AM, Prashanth Pai <p...@redhat.com> wrote:

>
>
> - Original Message -----
> > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > To: tdasi...@redhat.com, "Prashanth Pai" <p...@redhat.com>
> > Cc: "Gluster Devel" <gluster-devel@gluster.org>
> > Sent: Saturday, 3 September, 2016 12:58:41 AM
> > Subject: Checklist for gluster-swift for upstream release
> >
> > hi,
> > Did you get a chance to decide on the gluster-swift integration
> > tests that need to be run before doing an upstream gluster release? Could
> > you let me know who will be providing with the list?
>
> The tests (unit test and functional test) can be run before doing
> upstream release. These tests reside in gluster-swift repo.
>
> I can run those tests (manually as of now) whenever required.
>

Do you think it makes sense, long term, to add this as a CI job, so that it
is simply a matter of launching the job before a release?
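
A very rough sketch of what such a job could run from a gluster-swift
checkout is below; the .unittests/.functests entry points are an assumption
about the repo layout, not something I have verified:

    git clone https://github.com/gluster/gluster-swift.git
    cd gluster-swift
    ./.unittests   # no running cluster needed
    ./.functests   # needs a gluster volume and the swift services configured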


>
> >
> > I can update it at https://public.pad.fsfe.org/p/
> > gluster-component-release-checklist
> > <https://public.pad.fsfe.org/p/gluster-component-release-checklist>
> >
> > --
> > Pranith
> >
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Checklist for gfapi for upstream release

2016-09-06 Thread Pranith Kumar Karampuri
On Mon, Sep 5, 2016 at 8:54 PM, Niels de Vos <nde...@redhat.com> wrote:

> On Sat, Sep 03, 2016 at 12:10:41AM +0530, Pranith Kumar Karampuri wrote:
> > hi,
> > I think most of this testing will be covered in nfsv4, smb
> testing.
> > But I could be wrong. Could you let me know who will be providing with
> the
> > list if you think there are more tests that need to be run?
> >
> > I can update it at https://public.pad.fsfe.org/p/
> > gluster-component-release-checklist
>
> I've added this to the etherpad:
>
> > test known applications, run their test-suites:
> > glusterfs-coreutils (has test suite in repo)
> > libgfapi-python (has test suite in repo)
> > nfs-ganesha (pynfs and cthon04 tests)
> > Samba (test?)
> > QEMU (run qemu binary and qemu-img with gluster:// URL,
> possibly/somehow run Advocado suite)
>

I think we should also add add-brick/replace-brick tests with gfapi. Thoughts?
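
To make one item from the list above concrete, the QEMU check could be
something like this (host and volume names are made up), and the
add-brick/replace-brick case could be driven from another terminal while
I/O runs against the image:

    qemu-img create -f qcow2 gluster://server1/testvol/vm1.qcow2 10G
    qemu-img info gluster://server1/testvol/vm1.qcow2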


>
> Niels
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Post mortem of features that didn't make it to 3.9.0

2016-09-06 Thread Pranith Kumar Karampuri
On Wed, Sep 7, 2016 at 6:07 AM, Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> On Wed, Sep 7, 2016 at 5:10 AM, Pranith Kumar Karampuri
> <pkara...@redhat.com> wrote:
>
> >  Do you think it makes sense to do post-mortem of features that
> didn't
> > make it to 3.9.0? We have some features that missed deadlines twice as
> well,
> > i.e. planned for 3.8.0 and didn't make it and planned for 3.9.0 and
> didn't
> > make it. So may be we are adding features to roadmap without thinking
> things
> > through? Basically it leads to frustration in the community who are
> waiting
> > for these components and they keep moving to next releases.
>
> Doing a post-mortem to understand the pieces which went well (so that
> we can continue doing them); which didn't go well (so that we can
> learn from those) and which were impediments (so that we can address
> the topics and remove them) is an useful exercise.
>

Ah, that makes more sense. We should also do this for the features that
went well.


>
> > Please let me know your thoughts. Goal is to get better at planning
> and
> > deliver the features as planned as much as possible. Native subdirectoy
> > mounts is in same situation which I was supposed to deliver.
> >
> > I have the following questions we need to ask ourselves the following
> > questions IMO:
>
> Incident based post-mortems require a timeline. However, while the
> need for that might be unnecessary here, the questions are perhaps too
> specific. Also, it would be good to set up the expectation from the
> exercise - what would all the inputs lead to?
>

A timeline is a good idea, but I am not sure what would be a good time. I
think it is better to concentrate on getting the 3.9.0 release out, so maybe
in the last week of this month we can start this exercise in full flow.
At the moment we want to collect this information so that we acknowledge
the good things we did for the release and the things we need to avoid in
future releases. Like I was mentioning, the main goal, at least in my mind,
is to prevent these slips as much as possible in the future. Right now the
roadmap is more like a backlog, at least that is how it looks to me: we keep
pushing features to the next release based on whether we get time or not.
Instead it should be a proper roadmap where we are confident we will deliver
the listed features for the release.


>
> > 1) Did we have approved design before we committed the feature upstream
> for
> > 3.9?
> > 2) Did we allocate time for execution of this feature upstream?
> > 3) Was the execution derailed by any of the customer issues/important
> work
> > in your organizatoin?
> > 4) Did developers focus on something that is not of priority which could
> > have derailed the feature's delivery?
> > 5) Did others in the team suspect the developers are not focusing on
> things
> > that are of priority but didn't communicate?
> > 6) Were there any infra issues that delayed delivery of this
> > feature(regression failures etc)?
> > 7) Were there any big delays in reviews of patches?
> >
> > Do let us know if you think we should ask more questions here.
> >
> > --
> > Aravinda & Pranith
>
>
>
> --
> sankarshan mukhopadhyay
> <https://about.me/sankarshan.mukhopadhyay>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Post mortem of features that didn't make it to 3.9.0

2016-09-06 Thread Pranith Kumar Karampuri
hi,
 Do you think it makes sense to do a post-mortem of the features that
didn't make it to 3.9.0? We have some features that missed the deadline
twice, i.e. planned for 3.8.0 and didn't make it, then planned for 3.9.0
and didn't make it. So maybe we are adding features to the roadmap without
thinking things through? Basically it leads to frustration in the community
among people who are waiting for these components and see them keep moving
to the next release. Please let me know your thoughts. The goal is to get
better at planning and to deliver the features as planned as much as
possible. Native subdirectory mounts, which I was supposed to deliver, is
in the same situation.

These are the questions I think we need to ask ourselves:
1) Did we have an approved design before we committed the feature upstream
for 3.9?
2) Did we allocate time for execution of this feature upstream?
3) Was the execution derailed by customer issues/important work in your
organization?
4) Did developers focus on something that is not a priority, which could
have derailed the feature's delivery?
5) Did others in the team suspect the developers were not focusing on
things that are a priority but not communicate it?
6) Were there any infra issues that delayed delivery of this feature
(regression failures, etc.)?
7) Were there any big delays in reviews of patches?

Do let us know if you think we should ask more questions here.

-- 
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Is r.g.o down?

2016-09-01 Thread Pranith Kumar Karampuri
Okay, let's wait for it to be back up. The site was not loading at all a
while back; now it at least says "Service is temporarily unavailable".

On Fri, Sep 2, 2016 at 3:44 AM, Jeff Darcy  wrote:

> > Not able to access it for the past 20 minutes.
>
> Looks down to me as well, and isup.me seems to agree.
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Is r.g.o down?

2016-09-01 Thread Pranith Kumar Karampuri
Sorry about that. Missed this email.

On Fri, Sep 2, 2016 at 4:10 AM, Michael Scherer <msche...@redhat.com> wrote:

> Le vendredi 02 septembre 2016 à 03:39 +0530, Pranith Kumar Karampuri a
> écrit :
> > Not able to access it for the past 20 minutes.
>
> I am upgrading the website, as said on the list and on irc meeting
> today.
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Is r.g.o down?

2016-09-01 Thread Pranith Kumar Karampuri
Not able to access it for the past 20 minutes.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Status of block and object storage on gluster(integration with containers as well)

2016-09-01 Thread Pranith Kumar Karampuri
On Fri, Sep 2, 2016 at 12:47 AM, Shyam <srang...@redhat.com> wrote:

> On 08/31/2016 03:14 PM, Pranith Kumar Karampuri wrote:
>
>> hi,
>>  I will be sending status of this work every week. This is first
>> mail of this work.
>>
>
> Thank you.
>
> Work to be done in the coming weeks:
>>- I will be sending initial cut of the design for snapshotting
>> the private storage by doing file snapshots in gluster.
>>- I will be sending out initial cut of the subdirectory mounts
>> feature with tenant based access this week.
>>
>
> I did not understand the relation of the sub-directory mount feature to
> the block storage feature, could you elaborate?
>

Ah, sorry, this got mixed up with block storage. There is no relation
between this and block storage; it is just something that we are working
on as well :-). Not sure where to put it at this point, so I sent it along
with this status. Sorry for the confusion :-(.

>
>- Prasanna and I are working on limiting access of one private
>> storage by only one container. One way we thought of is to do internal
>> locking on the file so that other accesses will get errors. But we are
>> still looking to find other solutions.
>>
>
> Delegations(?) but stricter, that you cannot break them in the traditional
> fashion?
>
That might work too!


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Is r.g.o down?

2016-09-01 Thread Pranith Kumar Karampuri
Worked fine, posted a patch.

On Fri, Sep 2, 2016 at 4:32 AM, Michael Scherer <msche...@redhat.com> wrote:

> Le vendredi 02 septembre 2016 à 00:40 +0200, Michael Scherer a écrit :
> > Le vendredi 02 septembre 2016 à 03:39 +0530, Pranith Kumar Karampuri a
> > écrit :
> > > Not able to access it for the past 20 minutes.
> >
> > I am upgrading the website, as said on the list and on irc meeting
> > today.
> s/website/servers/
>
> and the 2 services are back (not without having to fix unexpected
> issues, cause it wouldn't be fun otherwise...).
>
> Please post if something is not working anymore.
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-30 Thread Pranith Kumar Karampuri
On Tue, Aug 30, 2016 at 2:05 PM, Jiffin Tony Thottan <jthot...@redhat.com>
wrote:

>
>
> On 29/08/16 14:27, Prashanth Pai wrote:
>
> - Original Message -
>
> From: "Niels de Vos" <nde...@redhat.com> <nde...@redhat.com>
> To: "Prashanth Pai" <p...@redhat.com> <p...@redhat.com>
> Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com> <pkara...@redhat.com>, 
> "Rajesh Joseph" <rjos...@redhat.com> <rjos...@redhat.com>, "Manikandan 
> Selvaganesh"<mselv...@redhat.com> <mselv...@redhat.com>, "Soumya Koduri" 
> <skod...@redhat.com> <skod...@redhat.com>, "Csaba Henk" <ch...@redhat.com> 
> <ch...@redhat.com>, "Jiffin Thottan"<jthot...@redhat.com> 
> <jthot...@redhat.com>, "Aravinda Vishwanathapura Krishna Murthy" 
> <avish...@redhat.com> <avish...@redhat.com>, "Anoop Chirayath Manjiyil
> Sajan" <achir...@redhat.com> <achir...@redhat.com>, "Ravishankar 
> Narayanankutty" <ravishan...@redhat.com> <ravishan...@redhat.com>, "Kaushal 
> Madappa"<kmada...@redhat.com> <kmada...@redhat.com>, "Raghavendra Talur" 
> <rta...@redhat.com> <rta...@redhat.com>, "Poornima Gurusiddaiah" 
> <pguru...@redhat.com> <pguru...@redhat.com>,
> "Kaleb Keithley" <kkeit...@redhat.com> <kkeit...@redhat.com>, "Jose Rivera" 
> <jriv...@redhat.com> <jriv...@redhat.com>, "Samikshan 
> Bairagya"<sbair...@redhat.com> <sbair...@redhat.com>, "Vijay Bellur" 
> <vbel...@redhat.com> <vbel...@redhat.com>, "Gluster Devel" 
> <gluster-devel@gluster.org> <gluster-devel@gluster.org>
> Sent: Monday, 29 August, 2016 2:19:10 PM
> Subject: Re: 3.9. feature freeze status check
>
> On Mon, Aug 29, 2016 at 02:45:01AM -0400, Prashanth Pai wrote:
>
>  -Prashanth Pai
>
> - Original Message -
>
> From: "Soumya Koduri" <skod...@redhat.com> <skod...@redhat.com>
> To: "Pranith Kumar Karampuri" <pkara...@redhat.com> <pkara...@redhat.com>, 
> "Rajesh Joseph"<rjos...@redhat.com> <rjos...@redhat.com>, "Manikandan 
> Selvaganesh"<mselv...@redhat.com> <mselv...@redhat.com>, "Csaba Henk" 
> <ch...@redhat.com> <ch...@redhat.com>, "Niels de Vos"<nde...@redhat.com> 
> <nde...@redhat.com>, "Jiffin Thottan"<jthot...@redhat.com> 
> <jthot...@redhat.com>, "Aravinda Vishwanathapura Krishna 
> Murthy"<avish...@redhat.com> <avish...@redhat.com>, "Anoop Chirayath Manjiyil
> Sajan" <achir...@redhat.com> <achir...@redhat.com>, "Ravishankar 
> Narayanankutty"<ravishan...@redhat.com> <ravishan...@redhat.com>, "Kaushal 
> Madappa"<kmada...@redhat.com> <kmada...@redhat.com>, "Raghavendra Talur" 
> <rta...@redhat.com> <rta...@redhat.com>, "Poornima
> Gurusiddaiah" <pguru...@redhat.com> <pguru...@redhat.com>,
> "Kaleb Keithley" <kkeit...@redhat.com> <kkeit...@redhat.com>, "Jose 
> Rivera"<jriv...@redhat.com> <jriv...@redhat.com>, "Prashanth Pai" 
> <p...@redhat.com> <p...@redhat.com>,
> "Samikshan Bairagya" <sbair...@redhat.com> <sbair...@redhat.com>, "Vijay 
> Bellur"<vbel...@redhat.com> <vbel...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org> <gluster-devel@gluster.org>
> Sent: Monday, 29 August, 2016 12:10:02 PM
> Subject: Re: 3.9. feature freeze status check
>
>
>
> On 08/26/2016 09:38 PM, Pranith Kumar Karampuri wrote:
>
> hi,
>   Now that we are almost near the feature freeze date (31st of Aug),
> I want to get a sense of the status of the features.
>
> Please respond with:
> 1) Feature already merged
> 2) Undergoing review will make it by 31st Aug
> 3) Undergoing review, but may not make it by 31st Aug
> 4) Feature won't make it for 3.9.
>
> At the end of this mail I added the features that were not planned (i.e.
> not on the 3.9 roadmap page) but made it to the release, and the ones that
> were not planned but may make it to the release.
> If you added a feature on master that will be released as part of 3.9.0
> but forgot to add it to roadmap page, please let me know I will add it.
>
> Here are the features planned as per the roadmap:
> 1) Throttling
> Feature owner: Ravishankar
>
> 2) Trash improvement

Re: [Gluster-devel] Query regards to heal xattr heal in dht

2016-09-07 Thread Pranith Kumar Karampuri
hi Mohit,
   How does dht find which subvolume has the correct list of xattrs?
i.e. how does it determine which subvolume is source and which is sink?

On Wed, Sep 7, 2016 at 2:35 PM, Mohit Agrawal  wrote:

> Hi,
>
>   I am trying to find out solution of one problem in dht specific to user
> xattr healing.
>   I tried to correct it in a same way as we are doing for healing dir
> attribute but i feel it is not best solution.
>
>   To find a right way to heal xattr i want to discuss with you if anyone
> does have better solution to correct it.
>
>   Problem:
>In a distributed volume environment custom extended attribute value for
> a directory does not display correct value after stop/start the brick. If
> any extended attribute value is set for a directory after stop the brick
> the attribute value is not updated on brick after start the brick.
>
>   Current approach:
> 1) function set_user_xattr to store user extended attribute in
> dictionary
> 2) function dht_dir_xattr_heal call syncop_setxattr to update the
> attribute on all volume
> 3) Call the function (dht_dir_xattr_heal) for every directory lookup
> in dht_lookup_revalidate_cbk
>
>   Psuedocode for function dht_dir_xatt_heal is like below
>
>1) First it will fetch atttributes from first up volume and store into
> xattr.
>2) Run loop on all subvolume and fetch existing attributes from every
> volume
>3) Replace user attributes from current attributes with xattr user
> attributes
>4) Set latest extended attributes(current + old user attributes) inot
> subvol.
>
>
>In this current approach problem is
>
>1) it will call heal function(dht_dir_xattr_heal) for every directory
> lookup without comparing xattr.
> 2) The function internally call syncop xattr for every subvolume that
> would be a expensive operation.
>
>I have one another way like below to correct it but again in this one
> it does have dependency on time (not sure time is synch on all bricks or
> not)
>
>1) At the time of set extended attribute(setxattr) change time in
> metadata at server side
>2) Compare change time before call healing function in
> dht_revalidate_cbk
>
> Please share your input on this.
> Appreciate your input.
>
> Regards
> Mohit Agrawal
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Query regards to heal xattr heal in dht

2016-09-07 Thread Pranith Kumar Karampuri
On Wed, Sep 7, 2016 at 9:46 PM, Mohit Agrawal <moagr...@redhat.com> wrote:

> Hi Pranith,
>
>
> In current approach i am getting list of xattr from first up volume and
> update the user attributes from that xattr to
> all other volumes.
>
> I have assumed first up subvol is source and rest of them are sink as we
> are doing same in dht_dir_attr_heal.
>

I think the first up subvol is different for different mounts, as per my
understanding; I could be wrong.
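
For what it's worth, a rough reproduction of the problem described below
(volume name, brick paths and the xattr are all hypothetical):

    gluster volume create distvol server1:/bricks/b1 server2:/bricks/b2 force
    gluster volume start distvol
    mount -t glusterfs server1:/distvol /mnt/distvol && mkdir /mnt/distvol/dir1

    # take one brick down, set a user xattr, then bring the brick back
    kill -9 <pid-of-server2-brick-process>
    setfattr -n user.test -v hello /mnt/distvol/dir1
    gluster volume start distvol force

    # the restarted brick's copy of dir1 is now missing/stale for user.test
    getfattr -n user.test /bricks/b2/dir1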


>
> Regards
> Mohit Agrawal
>
> On Wed, Sep 7, 2016 at 9:34 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi Mohit,
>>How does dht find which subvolume has the correct list of xattrs?
>> i.e. how does it determine which subvolume is source and which is sink?
>>
>> On Wed, Sep 7, 2016 at 2:35 PM, Mohit Agrawal <moagr...@redhat.com>
>> wrote:
>>
>>> Hi,
>>>
>>>   I am trying to find out solution of one problem in dht specific to
>>> user xattr healing.
>>>   I tried to correct it in a same way as we are doing for healing dir
>>> attribute but i feel it is not best solution.
>>>
>>>   To find a right way to heal xattr i want to discuss with you if anyone
>>> does have better solution to correct it.
>>>
>>>   Problem:
>>>In a distributed volume environment custom extended attribute value
>>> for a directory does not display correct value after stop/start the brick.
>>> If any extended attribute value is set for a directory after stop the brick
>>> the attribute value is not updated on brick after start the brick.
>>>
>>>   Current approach:
>>> 1) function set_user_xattr to store user extended attribute in
>>> dictionary
>>> 2) function dht_dir_xattr_heal call syncop_setxattr to update the
>>> attribute on all volume
>>> 3) Call the function (dht_dir_xattr_heal) for every directory lookup
>>> in dht_lookup_revalidate_cbk
>>>
>>>   Psuedocode for function dht_dir_xatt_heal is like below
>>>
>>>1) First it will fetch atttributes from first up volume and store
>>> into xattr.
>>>2) Run loop on all subvolume and fetch existing attributes from every
>>> volume
>>>3) Replace user attributes from current attributes with xattr user
>>> attributes
>>>4) Set latest extended attributes(current + old user attributes) inot
>>> subvol.
>>>
>>>
>>>In this current approach problem is
>>>
>>>1) it will call heal function(dht_dir_xattr_heal) for every directory
>>> lookup without comparing xattr.
>>> 2) The function internally call syncop xattr for every subvolume
>>> that would be a expensive operation.
>>>
>>>I have one another way like below to correct it but again in this one
>>> it does have dependency on time (not sure time is synch on all bricks or
>>> not)
>>>
>>>1) At the time of set extended attribute(setxattr) change time in
>>> metadata at server side
>>>2) Compare change time before call healing function in
>>> dht_revalidate_cbk
>>>
>>> Please share your input on this.
>>> Appreciate your input.
>>>
>>> Regards
>>> Mohit Agrawal
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>
>>
>>
>> --
>> Pranith
>>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Status of block and object storage on gluster(integration with containers as well)

2016-08-31 Thread Pranith Kumar Karampuri
hi,
 I will be sending a status update of this work every week. This is the
first such mail.
We are enhancing these interfaces primarily for container storage:
 - Gluster will be able to export a file as a virtual block device to a
container, to be used as private storage for that container; no other
container will be able to use the same virtual block device as long as
this container is alive.

Work already done in this area:
  - Prasanna has been able to do the PoC using tcmu in this area, and
all the efforts have been documented as blog posts:
 1) Non-shared persistent storage for containers:
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
 2) With Kubernetes:
https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/
 3) Read-write-once persistent storage for OpenShift Origin using gluster:
https://pkalever.wordpress.com/2016/08/16/read-write-once-persistent-storage-for-openshift-origin-using-gluster/

   - Andy Grover provided the resize lun capability in tcmu so that the
persistent storage can be expanded.

Work to be done in the coming weeks:
   - I will be sending an initial cut of the design for snapshotting the
private storage by doing file snapshots in gluster.
   - I will be sending out an initial cut of the subdirectory mounts
feature with tenant-based access this week.
   - Prasanna and I are working on limiting access to one private
storage volume to a single container. One way we thought of is to do
internal locking on the file so that other accesses will get errors, but we
are still looking for other solutions.
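
As a toy illustration of the locking idea (the real mechanism would live
inside gluster, not in a shell script; paths are made up), an advisory lock
taken through a mount already gives the "second user gets an error"
behaviour:

    exec 9>>/mnt/testvol/block-store/container1.img
    if ! flock -n 9; then
        echo "backing file is already in use by another container" >&2
        exit 1
    fi
    # ... export the file as a virtual block device while holding fd 9 ...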

Object storage for containers:
Gluster already has swift integration using gluster-swift. We are
leveraging this work to provide object storage for containers using gluster.

Work done till now:
   - Prashanth Pai worked on making the swift3 middleware compatible with
swauth
   - Documented how S3 access can be done with gluster:
http://review.gluster.org/#/c/13729/

Work to be done in the coming weeks:
- gluster-swift integration with gluster management for 3.9
- Prashanth is looking at containerizing the swift processes
-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] release checklist for 3.9.0

2016-09-01 Thread Pranith Kumar Karampuri
I see updates only on NFS, by Niels, and on no other component. Can we have
some updates here, please? It is difficult to make the release without these
inputs. Please let me know if you need more time because you are busy with
something else.

On Mon, Aug 29, 2016 at 9:09 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> hi,
>Could we have release checklist for the components? Please add the
> steps that need to be done before the release is made at this link:
> https://public.pad.fsfe.org/p/gluster-component-release-checklist. This
> activity needs to be completed by 2nd September. Please also add if the
> tests are automated or not. We also want to use this to evolve a complete
> automation that needs to be run before a release goes out. This is the
> first step in that direction.
>
> I added the list from MAINTAINERS file. Please add if I missed anything.
> If the Maintainer is outdated please send a mail to
> maintain...@gluster.org
>
> On behalf of
> Aravinda & Pranith
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] GlusterFS 3.9 release Schedule

2016-09-09 Thread Pranith Kumar Karampuri
On Fri, Sep 9, 2016 at 5:16 PM, Aravinda  wrote:

> Hi All,
>
> Gluster 3.9 release Schedule
>
> Week of Sept 12-16 - Beta Tagging and Start testing
> Week of Sept 19-23 - RC tagging
> End of Sept 2016   - GA(General Availability) release of 3.9
>
> Considering that beta tagging will be done in next week, is it okay to
> accept any features(Which are already Merged in Master) in release-3.9
> branch till Sept 12?
>
> Other tasks before GA:
> - Removing or disabling incomplete features or the features which are not
> ready
> - Identifying Packaging issues for different distributions
> - Documenting the release process so that it will be helpful for new
> maintainers
> - Release notes preparation
> - Testing and Documentation completeness checking.
> - Blog about the release
>
> Comments and Suggestions are Welcome.
>

> @Pranith, please add if missed anything.
>

Looks fine :-)


>
> Thanks
> Aravinda & Pranith
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request for lock migration patches

2016-09-14 Thread Pranith Kumar Karampuri
Could you get a review from one of Poornima or Raghavendra Talur?

On Wed, Sep 14, 2016 at 6:12 PM, Susant Palai <spa...@redhat.com> wrote:

> Hi,
>   It would be nice to get the patches in 3.9. The reviews are pending for
> a long time. Requesting reviews.
>
> Thanks,
> Susant
>
>
> - Original Message -
> > From: "Susant Palai" <spa...@redhat.com>
> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Pranith Kumar
> Karampuri" <pkara...@redhat.com>
> > Cc: "gluster-devel" <gluster-devel@gluster.org>
> > Sent: Wednesday, 7 September, 2016 9:54:04 AM
> > Subject: Re: [Gluster-devel] Review request for lock migration patches
> >
> > Gentle reminder for reviews.
> >
> > Thanks,
> > Susant
> >
> > - Original Message -
> > > From: "Susant Palai" <spa...@redhat.com>
> > > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Pranith Kumar
> Karampuri"
> > > <pkara...@redhat.com>
> > > Cc: "gluster-devel" <gluster-devel@gluster.org>
> > > Sent: Tuesday, 30 August, 2016 3:19:13 PM
> > > Subject: [Gluster-devel] Review request for lock migration patches
> > >
> > > Hi,
> > >
> > > There are few patches targeted for lock migration. Requesting for
> review.
> > > 1. http://review.gluster.org/#/c/13901/
> > > 2. http://review.gluster.org/#/c/14286/
> > > 3. http://review.gluster.org/#/c/14492/
> > > 4. http://review.gluster.org/#/c/15076/
> > >
> > >
> > > Thanks,
> > > Susant~
> > >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Need help with https://bugzilla.redhat.com/show_bug.cgi?id=1224180

2016-09-13 Thread Pranith Kumar Karampuri
On Tue, Sep 13, 2016 at 1:39 PM, Xavier Hernandez <xhernan...@datalab.es>
wrote:

> Hi Sanoj,
>
> On 13/09/16 09:41, Sanoj Unnikrishnan wrote:
>
>> Hi Xavi,
>>
>> That explains a lot,
>> I see a couple of other scenario which can lead to similar inconsistency.
>> 1) simultaneous node/brick crash of 3 bricks.
>>
>
> Although this is a real problem, the 3 bricks should crash exactly at the
> same moment just after having successfully locked the inode being modified
> and queried some information, but before sending the write fop nor any down
> notification. The probability to have this suffer this problem is really
> small.
>
> 2) if the disk space of underlying filesystem on which brick is hosted
>> exceeds for 3 bricks.
>>
>
> Yes. This is the same cause that makes quota fail.
>
>
>> I don't think we can address all the scenario unless we have a
>> log/journal mechanism like raid-5.
>>
>
> I completely agree. I don't see any solution valid for all cases. BTW
> RAID-5 *is not* a solution. It doesn't have any log/journal. Maybe
> something based on fdl xlator would work.
>
> Should we look at a quota specific fix or let it get fixed whenever we
>> introduce a log?
>>
>
> Not sure how to fix this in a way that doesn't seem too hacky.
>
> One possibility is to request permission to write some data before
> actually writing it (specifying offset and size). And then be sure that the
> write will succeed if all (or at least the minimum number of data bricks)
> has acknowledged the previous write permission request.
>
> Another approach would be to queue writes in a server side xlator until a
> commit message is received, but sending back an answer saying if there's
> enough space to do the write (this is, in some way, a very primitive
> log/journal approach).
>
> However both approaches will have a big performance impact if they cannot
> be executed in background.
>
> Maybe it would be worth investing in fdl instead of trying to find a
> custom solution to this.
>

There are some things we should do irrespective of this change:
1) When the file is in a state where all 512 bytes of the fragment represent
data, then we shouldn't increase the file size at all; this discards the
write without any problems, i.e. this case is recoverable.
2) When we append data to a partially filled chunk and it fails on 3/6
bricks, the rest could be recovered by adjusting the file size to the size
represented by (previous block - 1)*k. We should probably provide an option
to do so?
3) Provide some utility/setfattr interface to perform recovery based on data
rather than versions, i.e. it needs to detect and tell which part of the
data is not recoverable and which part is. Based on that, the user should be
able to recover.

What do you guys think?
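
To make point 2 a bit more concrete, here is one way to read the arithmetic
(numbers are made up for a 4+2 volume, i.e. k=4 data bricks with 512-byte
fragments, so 2048 bytes of user data per stripe):

    k=4; frag=512
    stripe=$((k * frag))                            # 2048
    claimed=10700                                   # size recorded after the failed append
    recoverable=$(( (claimed / stripe) * stripe ))  # 10240: last complete stripe
    truncate -s "$recoverable" /mnt/testvol/file    # roll back to the recoverable size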


> Xavi
>
>
>
>> Thanks and Regards,
>> Sanoj
>>
>> - Original Message -
>> From: "Xavier Hernandez" <xhernan...@datalab.es>
>> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Sanoj Unnikrishnan" <
>> sunni...@redhat.com>
>> Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "Ashish Pandey" <
>> aspan...@redhat.com>, "Gluster Devel" <gluster-devel@gluster.org>
>> Sent: Tuesday, September 13, 2016 11:50:27 AM
>> Subject: Re: Need help with https://bugzilla.redhat.com/sh
>> ow_bug.cgi?id=1224180
>>
>> Hi Sanoj,
>>
>> I'm unable to see bug 1224180. Access is restricted.
>>
>> Not sure what is the problem exactly, but I see that quota is involved.
>> Currently disperse doesn't play well with quota when the limit is near.
>>
>> The reason is that not all bricks fail at the same time with EDQUOT due
>> to small differences is computed space. This causes a valid write to
>> succeed on some bricks and fail on others. If it fails simultaneously on
>> more than redundancy bricks but less that the number of data bricks,
>> there's no way to rollback the changes on the bricks that have
>> succeeded, so the operation is inconsistent and an I/O error is returned.
>>
>> For example, on a 6:2 configuration (4 data bricks and 2 redundancy), if
>> 3 bricks succeed and 3 fail, there are not enough bricks with the
>> updated version, but there aren't enough bricks with the old version
>> either.
>>
>> If you force 2 bricks to be down, the problem can appear more frequently
>> as only a single failure causes this problem.
>>
>> Xavi
>>
>> On 13/09/16 06:09, Raghavendra Gowdappa wrote:
>>
>>> +gluster-devel
>>>
>>> - Original Message -
>

Re: [Gluster-devel] Changing Submit Type for glusterfs

2016-09-13 Thread Pranith Kumar Karampuri
On Tue, Sep 13, 2016 at 7:29 PM, Nigel Babu  wrote:

> On Fri, Sep 02, 2016 at 10:25:01AM +0530, Nigel Babu wrote:
> > > > The reason cherry-pick was chosen was to keep the branch linear and
> > > > avoid merge-commits as (I'm guessing here) this makes the tree hard
> to
> > > > follow.
> > > > Merge-if-necessary will not keep the branch linear. I'm not sure how
> > > > rebase-if-necessary works though.
> > > >
> > > > Vijay, can you provide anymore background for the choice of
> > > > cherry-pick and you opinion on the change?
> > > >
> > >
> > > Unfortunately I do not recollect the reason for cherry-pick being the
> > > current choice. FWIW, I think dependencies were being enforced a while
> > > back in the previous version(s) of gerrit. Not sure if something has
> > > changed in the more recent gerrit versions.
> > >
> >
> > According to the documentation, the behavior was intended to be like how
> it is
> > currently. If it worked in the past, it may have been a bug. Let me setup
> > a test with Rebase-If-Necessary. Then we can make an informed decision
> on which
> > way to go about it.
> >
> > --
> > nigelb
>
> I tested out Rebase-If-Necessary. This bit is very important:
>
> When cherry picking a change, Gerrit automatically appends onto the end of
> the
> commit message a short summary of the change's approvals, and a URL link
> back
> to the change on the web. The committer header is also set to the
> submitter,
> while the author header retains the original patch set author.
>
> When using Rebase-If-Necessary, Gerrit does none of this. I'm guessing
> this is
> a problem for us?
>

It is a problem, yes.


>
> --
> nigelb
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Gluster Developer Summit 2016 Talk Schedule

2016-09-15 Thread Pranith Kumar Karampuri
On Thu, Sep 15, 2016 at 2:37 PM, Soumya Koduri  wrote:

> Hi Amye,
>
> Is there any plan to record these talks?
>

I had the same question.


>
> Thanks,
> Soumya
>
> On 09/15/2016 03:09 AM, Amye Scavarda wrote:
>
>> Thanks to all that submitted talks, and thanks to the program committee
>> who helped select this year's content.
>>
>> This will be posted on the main Summit page as
>> well: gluster.org/events/summit2016
>>
>> October 6
>> 9:00am - 9:25am    Opening Session
>> 9:30 - 9:55am      DHT: current design, (dis)advantages, challenges - A
>> perspective - Raghavendra Gowdappa
>> 10:00am - 10:25am  DHT2 - O Brother, Where Art Thou? - Shyam Ranganathan
>> 10:30am - 10:55am  Performance bottlenecks for metadata workload in
>> Gluster - Poornima Gurusiddaiah, Rajesh Joseph
>> 11:00am - 11:25am  The life of a consultant listed on gluster.org - Ivan
>> Rossi
>> 11:30am - 11:55am  Architecture of the High Availability Solution for
>> Ganesha and Samba - Kaleb Keithley
>> 12:00 - 1:00pm     Lunch
>> 1:00pm - 1:25pm    Challenges with Gluster and Persistent Memory - Dan
>> Lambright
>> 1:25pm - 1:55pm    Throttling in gluster - Ravishankar Narayanankutty
>> 2:00pm - 2:25pm    Gluster: The Ugly Parts - Jeff Darcy
>> 2:30pm - 2:55pm    Deterministic Releases and How to Get There - Nigel Babu
>> 3:00pm - 3:25pm    Break
>> 3:30pm - 4:00pm    Birds of a Feather Sessions
>> 4:00pm - 4:55pm    Birds of a Feather Sessions
>> Evening Reception to be announced
>>
>>
>> October 7
>> 9:00am - 9:25am    GFProxy: Scaling the GlusterFS FUSE Client - Shreyas
>> Siravara
>> 9:30 - 9:55am      Sharding in GlusterFS - Past, Present and Future -
>> Krutika Dhananjay
>> 10:00am - 10:25am  Object Storage with Gluster - Prashanth Pai
>> 10:30am - 10:55am  Containers and Persistent Storage for Containers -
>> Humble Chirammal, Luis Pabon
>> 11:00am - 11:25am  Gluster as Block Store in Containers - Prasanna Kalever
>> 11:30am - 11:55am  An Update on GlusterD-2.0 - Kaushal Madappa
>> 12:00 - 1:00pm     Lunch
>> 1:00pm - 1:25pm    Integration of GlusterFS into Commvault data platform -
>> Ankireddypalle Reddy
>> 1:30pm - 1:55pm    Bootstrapping Challenge
>> 2:00pm - 2:25pm    Practical Glusto Example - Jonathan Holloway
>> 2:30pm - 2:55pm    State of Gluster Performance - Manoj Pillai
>> 3:00pm - 3:25pm    Server side replication - Avra Sengupta
>> 3:30pm - 4:00pm    Birds of a Feather Sessions
>> 4:00pm - 4:55pm    Birds of a Feather Sessions
>> 5:00pm - 5:30pm    Closing
>>
>> --
>> Amye Scavarda | a...@redhat.com  | Gluster
>> Community Lead
>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] logs/cores for smoke failures

2016-09-30 Thread Pranith Kumar Karampuri
On Fri, Sep 30, 2016 at 4:00 PM, Nigel Babu <nig...@redhat.com> wrote:

> On Tue, Sep 27, 2016 at 01:13:15PM +0530, Nigel Babu wrote:
> > On Tue, Sep 27, 2016 at 12:52:45PM +0530, Pranith Kumar Karampuri wrote:
> > > On Tue, Sep 27, 2016 at 12:39 PM, Nigel Babu <nig...@redhat.com>
> wrote:
> > >
> > > > On Tue, Sep 27, 2016 at 12:00:40PM +0530, Pranith Kumar Karampuri
> wrote:
> > > > > On Tue, Sep 27, 2016 at 11:20 AM, Nigel Babu <nig...@redhat.com>
> wrote:
> > > > >
> > > > > > These are gbench failures rather than smoke failures. If you
> know how
> > > > to
> > > > > > debug dbench failures, please add comments on the bug and I'll
> get you
> > > > the
> > > > > > logs you need.
> > > > > >
> > > > >
> > > > > Oh, we can't archive the logs like we do for regression runs?
> > > >
> > > > We don't log anything for smoke tests. Perhaps we should. Would you
> care to
> > > > send a patch for smoke.sh[1] so we log the appropriate files?
> > > >
> > >
> > > hmm... I see that gluster is launched normally so it should log fine. I
> > > guess I didn't understand the question.
> >
> > The regression runs log quite a lot of things in line with what they
> test. The
> > smoke test runs just two things - posix compliance tests and dbench.
> Dbench is
> > the bit that's failing for us. The dbench output is printed onto the
> screen if
> > it fails.
> >
> > What logs do you want to add to smoke jobs? If you want Gluster logs,
> write the
> > patch to get those cleared before the start of smoke test, archived
> afterward,
> > and dumped into /archives?
>
> Smoke tests now have gluster logs. I intend to have gluster statedump
> output as
> well, but I need more to get that correctly.
>

Super!!
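
For the record, the archiving step could be as simple as something like this
in smoke.sh (the $BUILD_ID naming and the /archives path are assumptions):

    rm -rf /var/log/glusterfs/*      # start each run with clean logs
    # ... run posix-compliance and dbench ...
    mkdir -p /archives/logs
    tar -czf /archives/logs/smoke-glusterfs-logs-${BUILD_ID}.tar.gz /var/log/glusterfs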


>
> --
> nigelb
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] RPC, DHT and AFR logging errors that are expected, reduce log level?

2016-09-30 Thread Pranith Kumar Karampuri
On Thu, Sep 29, 2016 at 7:51 PM, Niels de Vos  wrote:

> Hello,
>
> When NFS-Ganesha does an UNLINK of a filename on an inode, it does a
> follow-up check to see if the inode has been deleted or if there are
> still other filenames linked (like hardlinks) to it.
>
> Users are getting confused about the errors that are logged by RPC, DHT
> and AFR. The file is missing (which is often perfectly expected from a
> NFS-Ganesha point of view) and this causes a flood of messages.
>
> From https://bugzilla.redhat.com/show_bug.cgi?id=1328581#c5 :
>
> > If we reduce the log level for
> > client-rpc-fops.c:2974:client3_3_lookup_cbk there would be the
> > following entries left:
> >
> > 2x dht-helper.c:1179:dht_migration_complete_check_task
> > 2x afr-read-txn.c:250:afr_read_txn
> >
> > it would reduce the logging for this non-error with 10 out of 14
> > messages. We need to know from the AFR and DHT team if these messages
> > are sufficient for them to identify potential issues.
>

Updated the bug from the perspective of AFR as well:
https://bugzilla.redhat.com/show_bug.cgi?id=1328581#c12

"I am not sure how an inode which is not in split-brain is linked as 'no
read-subvolumes' case. That is something to debug."
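
As an interim, user-side knob (not the code-level fix being discussed here),
affected users can at least raise the client log level so the flood does not
hide real errors:

    gluster volume set <volname> diagnostics.client-log-level ERROR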


>
> Thanks,
> Niels
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Pranith Kumar Karampuri
On Fri, Sep 30, 2016 at 9:58 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:

>
>
> - Original Message -----
> > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> > Cc: "Shyam Ranganathan" <srang...@redhat.com>, "Nithya Balachandran" <
> nbala...@redhat.com>, "Gluster Devel"
> > <gluster-devel@gluster.org>
> > Sent: Friday, September 30, 2016 9:53:44 AM
> > Subject: Re: Dht readdir filtering out names
> >
> > On Fri, Sep 30, 2016 at 9:50 AM, Raghavendra Gowdappa <
> rgowd...@redhat.com>
> > wrote:
> >
> > >
> > >
> > > - Original Message -
> > > > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > > > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> > > > Cc: "Shyam Ranganathan" <srang...@redhat.com>, "Nithya
> Balachandran" <
> > > nbala...@redhat.com>, "Gluster Devel"
> > > > <gluster-devel@gluster.org>
> > > > Sent: Friday, September 30, 2016 9:15:04 AM
> > > > Subject: Re: Dht readdir filtering out names
> > > >
> > > > On Fri, Sep 30, 2016 at 9:13 AM, Raghavendra Gowdappa <
> > > rgowd...@redhat.com>
> > > > wrote:
> > > >
> > > > > dht_readdirp_cbk has different behaviour for directories and files.
> > > > >
> > > > > 1. If file, pick the dentry (passed from subvols as part of
> readdirp
> > > > > response) if the it corresponds to data file.
> > > > > 2. If directory pick the dentry if readdirp response is from
> > > hashed-subvol.
> > > > >
> > > > > In all other cases, the dentry is skipped and not passed to higher
> > > > > layers/application. To elaborate, the dentries which are ignored
> are:
> > > > > 1. dentries corresponding to linkto files.
> > > > > 2. dentries from non-hashed subvols corresponding to directories.
> > > > >
> > > > > Since the behaviour is different for different filesystem objects,
> dht
> > > > > needs ia_type to choose its behaviour.
> > > > >
> > > > > - Original Message -
> > > > > > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > > > > > To: "Shyam Ranganathan" <srang...@redhat.com>, "Raghavendra
> > > Gowdappa" <
> > > > > rgowd...@redhat.com>, "Nithya Balachandran"
> > > > > > <nbala...@redhat.com>
> > > > > > Cc: "Gluster Devel" <gluster-devel@gluster.org>
> > > > > > Sent: Friday, September 30, 2016 8:39:28 AM
> > > > > > Subject: Dht readdir filtering out names
> > > > > >
> > > > > > hi,
> > > > > >In dht_readdirp_cbk() there is a check about skipping
> files
> > > > > without
> > > > > > ia_type. Could you help me understand why this check is added?
> There
> > > are
> > > > > > times when users have to delete gfid of the entries and trigger
> > > something
> > > > > > like 'find . | xargs stat' to heal the gfids. This case would
> fail
> > > if we
> > > > > > skip entries without gfid, if the lower xlators don't send stat
> > > > > information
> > > > > > for them.
> > > > >
> > > > > Probably we can make readdirp_cbk not rely on ia_type and pass
> _all_
> > > > > dentries received by subvols to application without filtering.
> However
> > > we
> > > > > should make this behaviour optional and use this only for recovery
> > > setups.
> > > > > If we don't rely on ia_type (during non error scenarios),
> applications
> > > end
> > > > > up seeing duplicate dentries in readdir listing.
> > > > >
> > > >
> > > > That means dht_readdir() gives duplicate entries? As per the code it
> > > seems
> > > > like it...
> > >
> > > No. It follows the filtering logic of "pick dentry only from hashed
> > > subvol". This logic doesn't need ia_type. Now, that you brought the
> topic
> > > of dht_readdir, I've another solution for your use case (Basically
> don't
> > > use readdirp :) ):
> > >
> > > 1. mount glusterfs with "--use-readdirp=no" option.
> > > 2. disable md-cache/stat-prefetch as it converts all readdir calls into
> > > readdirp calls
> > >
> >
> > Probably the ones in dht as well? i.e. use-readdirp option.
>
> No. dht doesn't convert a readdir into readdirp. The option you are
> referring to might be "readdir-optimize" which is something different.
>

It seems to do it.
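
For the recovery setup, the suggestion above would translate to roughly this
(volume name and mount point are made up):

    gluster volume set testvol performance.stat-prefetch off   # md-cache turns readdir into readdirp
    mount -t glusterfs -o use-readdirp=no server1:/testvol /mnt/recover
    find /mnt/recover | xargs stat > /dev/null                 # named lookups heal the gfids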


>
> >
> >
> > >
> > > Use this only for recovery setups :).
> > >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > --
> > > > > > Pranith
> > > > > >
> > > > >
> > > > > regards,
> > > > > Raghavendra
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Pranith
> > > >
> > >
> >
> >
> >
> > --
> > Pranith
> >
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Pranith Kumar Karampuri
On Fri, Sep 30, 2016 at 9:13 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:

> dht_readdirp_cbk has different behaviour for directories and files.
>
> 1. If file, pick the dentry (passed from subvols as part of readdirp
> response) if the it corresponds to data file.
> 2. If directory pick the dentry if readdirp response is from hashed-subvol.
>
> In all other cases, the dentry is skipped and not passed to higher
> layers/application. To elaborate, the dentries which are ignored are:
> 1. dentries corresponding to linkto files.
> 2. dentries from non-hashed subvols corresponding to directories.
>
> Since the behaviour is different for different filesystem objects, dht
> needs ia_type to choose its behaviour.
>
> ----- Original Message -
> > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > To: "Shyam Ranganathan" <srang...@redhat.com>, "Raghavendra Gowdappa" <
> rgowd...@redhat.com>, "Nithya Balachandran"
> > <nbala...@redhat.com>
> > Cc: "Gluster Devel" <gluster-devel@gluster.org>
> > Sent: Friday, September 30, 2016 8:39:28 AM
> > Subject: Dht readdir filtering out names
> >
> > hi,
> >In dht_readdirp_cbk() there is a check about skipping files
> without
> > ia_type. Could you help me understand why this check is added? There are
> > times when users have to delete gfid of the entries and trigger something
> > like 'find . | xargs stat' to heal the gfids. This case would fail if we
> > skip entries without gfid, if the lower xlators don't send stat
> information
> > for them.
>
> Probably we can make readdirp_cbk not rely on ia_type and pass _all_
> dentries received by subvols to application without filtering. However we
> should make this behaviour optional and use this only for recovery setups.
> If we don't rely on ia_type (during non error scenarios), applications end
> up seeing duplicate dentries in readdir listing.
>

That means dht_readdir() gives duplicate entries? As per the code it seems
like it...
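
For reference, the filtering rule described above (pick a file dentry only
if it is a data file, pick a directory dentry only if it came from the
hashed subvol) boils down to roughly the sketch below. The types and helper
fields are made up for illustration and this is not the actual
dht_readdirp_cbk() code, which works on gf_dirent_t lists and the dht
layout; the point is only that without ia_type neither rule can be applied.

    /* Illustrative only, not the real dht code. */
    #include <stdbool.h>

    enum kind { KIND_FILE, KIND_DIR, KIND_UNKNOWN };

    struct dent {
        enum kind kind;     /* derived from ia_type in the real code */
        bool is_linkto;     /* dht linkto file, not a data file */
        bool from_hashed;   /* readdirp response came from hashed subvol */
    };

    /* Return true if the dentry should be passed up to the application. */
    bool keep_dentry(const struct dent *d)
    {
        switch (d->kind) {
        case KIND_FILE:
            return !d->is_linkto;   /* only real data files */
        case KIND_DIR:
            return d->from_hashed;  /* only from the hashed subvolume */
        default:
            return false;           /* no ia_type: cannot apply either rule,
                                       which is why the check exists */
        }
    }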


>
> >
> > --
> > Pranith
> >
>
> regards,
> Raghavendra
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Pranith Kumar Karampuri
hi,
   In dht_readdirp_cbk() there is a check about skipping files without
ia_type. Could you help me understand why this check is added? There are
times when users have to delete gfid of the entries and trigger something
like 'find . | xargs stat' to heal the gfids. This case would fail if we
skip entries without gfid, if the lower xlators don't send stat information
for them.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] 'Reviewd-by' tag for commits

2016-09-30 Thread Pranith Kumar Karampuri
hi,
 At the moment the 'Reviewed-by' tag is added only if a +1 is given on the
final version of the patch. But for most patches, different people spend time
on different versions making the patch better, and they may not get time to
review every version of the patch. Is it possible to change the gerrit script
to add 'Reviewed-by' for all the people who participated in the review?

Or removing the 'Reviewed-by' tag completely would also help to make sure it
doesn't give skewed counts.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Pranith Kumar Karampuri
What if the lower xlators want to set the entry->inode to NULL and clear
the entry->d_stat to force a lookup on the name? i.e.
gfid-split-brain/ia_type mismatches.

On Fri, Sep 30, 2016 at 10:00 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:

>
>
> - Original Message -
> > From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> > To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > Cc: "Shyam Ranganathan" <srang...@redhat.com>, "Nithya Balachandran" <
> nbala...@redhat.com>, "Gluster Devel"
> > <gluster-devel@gluster.org>
> > Sent: Friday, September 30, 2016 9:58:34 AM
> > Subject: Re: Dht readdir filtering out names
> >
> >
> >
> > - Original Message -
> > > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> > > Cc: "Shyam Ranganathan" <srang...@redhat.com>, "Nithya Balachandran"
> > > <nbala...@redhat.com>, "Gluster Devel"
> > > <gluster-devel@gluster.org>
> > > Sent: Friday, September 30, 2016 9:53:44 AM
> > > Subject: Re: Dht readdir filtering out names
> > >
> > > On Fri, Sep 30, 2016 at 9:50 AM, Raghavendra Gowdappa <
> rgowd...@redhat.com>
> > > wrote:
> > >
> > > >
> > > >
> > > > - Original Message -
> > > > > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > > > > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> > > > > Cc: "Shyam Ranganathan" <srang...@redhat.com>, "Nithya
> Balachandran" <
> > > > nbala...@redhat.com>, "Gluster Devel"
> > > > > <gluster-devel@gluster.org>
> > > > > Sent: Friday, September 30, 2016 9:15:04 AM
> > > > > Subject: Re: Dht readdir filtering out names
> > > > >
> > > > > On Fri, Sep 30, 2016 at 9:13 AM, Raghavendra Gowdappa <
> > > > rgowd...@redhat.com>
> > > > > wrote:
> > > > >
> > > > > > dht_readdirp_cbk has different behaviour for directories and
> files.
> > > > > >
> > > > > > 1. If file, pick the dentry (passed from subvols as part of
> readdirp
> > > > > > response) if the it corresponds to data file.
> > > > > > 2. If directory pick the dentry if readdirp response is from
> > > > hashed-subvol.
> > > > > >
> > > > > > In all other cases, the dentry is skipped and not passed to
> higher
> > > > > > layers/application. To elaborate, the dentries which are ignored
> are:
> > > > > > 1. dentries corresponding to linkto files.
> > > > > > 2. dentries from non-hashed subvols corresponding to directories.
> > > > > >
> > > > > > Since the behaviour is different for different filesystem
> objects,
> > > > > > dht
> > > > > > needs ia_type to choose its behaviour.
> > > > > >
> > > > > > - Original Message -
> > > > > > > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > > > > > > To: "Shyam Ranganathan" <srang...@redhat.com>, "Raghavendra
> > > > Gowdappa" <
> > > > > > rgowd...@redhat.com>, "Nithya Balachandran"
> > > > > > > <nbala...@redhat.com>
> > > > > > > Cc: "Gluster Devel" <gluster-devel@gluster.org>
> > > > > > > Sent: Friday, September 30, 2016 8:39:28 AM
> > > > > > > Subject: Dht readdir filtering out names
> > > > > > >
> > > > > > > hi,
> > > > > > >In dht_readdirp_cbk() there is a check about skipping
> files
> > > > > > without
> > > > > > > ia_type. Could you help me understand why this check is added?
> > > > > > > There
> > > > are
> > > > > > > times when users have to delete gfid of the entries and trigger
> > > > something
> > > > > > > like 'find . | xargs stat' to heal the gfids. This case would
> fail
> > > > if we
> > > > > > > skip entries without gfid, if the lower xlators don't send stat
> > > >

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Pranith Kumar Karampuri
On Fri, Sep 30, 2016 at 9:50 AM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:

>
>
> - Original Message -----
> > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> > Cc: "Shyam Ranganathan" <srang...@redhat.com>, "Nithya Balachandran" <
> nbala...@redhat.com>, "Gluster Devel"
> > <gluster-devel@gluster.org>
> > Sent: Friday, September 30, 2016 9:15:04 AM
> > Subject: Re: Dht readdir filtering out names
> >
> > On Fri, Sep 30, 2016 at 9:13 AM, Raghavendra Gowdappa <
> rgowd...@redhat.com>
> > wrote:
> >
> > > dht_readdirp_cbk has different behaviour for directories and files.
> > >
> > > 1. If file, pick the dentry (passed from subvols as part of readdirp
> > > response) if the it corresponds to data file.
> > > 2. If directory pick the dentry if readdirp response is from
> hashed-subvol.
> > >
> > > In all other cases, the dentry is skipped and not passed to higher
> > > layers/application. To elaborate, the dentries which are ignored are:
> > > 1. dentries corresponding to linkto files.
> > > 2. dentries from non-hashed subvols corresponding to directories.
> > >
> > > Since the behaviour is different for different filesystem objects, dht
> > > needs ia_type to choose its behaviour.
> > >
> > > - Original Message -
> > > > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > > > To: "Shyam Ranganathan" <srang...@redhat.com>, "Raghavendra
> Gowdappa" <
> > > rgowd...@redhat.com>, "Nithya Balachandran"
> > > > <nbala...@redhat.com>
> > > > Cc: "Gluster Devel" <gluster-devel@gluster.org>
> > > > Sent: Friday, September 30, 2016 8:39:28 AM
> > > > Subject: Dht readdir filtering out names
> > > >
> > > > hi,
> > > >In dht_readdirp_cbk() there is a check about skipping files
> > > without
> > > > ia_type. Could you help me understand why this check is added? There
> are
> > > > times when users have to delete gfid of the entries and trigger
> something
> > > > like 'find . | xargs stat' to heal the gfids. This case would fail
> if we
> > > > skip entries without gfid, if the lower xlators don't send stat
> > > information
> > > > for them.
> > >
> > > Probably we can make readdirp_cbk not rely on ia_type and pass _all_
> > > dentries received by subvols to application without filtering. However
> we
> > > should make this behaviour optional and use this only for recovery
> setups.
> > > If we don't rely on ia_type (during non error scenarios), applications
> end
> > > up seeing duplicate dentries in readdir listing.
> > >
> >
> > That means dht_readdir() gives duplicate entries? As per the code it
> seems
> > like it...
>
> No. It follows the filtering logic of "pick dentry only from hashed
> subvol". This logic doesn't need ia_type. Now, that you brought the topic
> of dht_readdir, I've another solution for your use case (Basically don't
> use readdirp :) ):
>
> 1. mount glusterfs with "--use-readdirp=no" option.
> 2. disable md-cache/stat-prefetch as it converts all readdir calls into
> readdirp calls
>

Probably the ones in dht as well? i.e. use-readdirp option.


>
> Use this only for recovery setups :).
>
> >
> >
> > >
> > > >
> > > > --
> > > > Pranith
> > > >
> > >
> > > regards,
> > > Raghavendra
> > >
> >
> >
> >
> > --
> > Pranith
> >
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Pranith Kumar Karampuri
Does samba/gfapi/nfs-ganesha have options to disable readdirp?

On Fri, Sep 30, 2016 at 10:04 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> What if the lower xlators want to set the entry->inode to NULL and clear
> the entry->d_stat to force a lookup on the name? i.e.
> gfid-split-brain/ia_type mismatches.
>
> On Fri, Sep 30, 2016 at 10:00 AM, Raghavendra Gowdappa <
> rgowd...@redhat.com> wrote:
>
>>
>>
>> - Original Message -
>> > From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>> > To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>> > Cc: "Shyam Ranganathan" <srang...@redhat.com>, "Nithya Balachandran" <
>> nbala...@redhat.com>, "Gluster Devel"
>> > <gluster-devel@gluster.org>
>> > Sent: Friday, September 30, 2016 9:58:34 AM
>> > Subject: Re: Dht readdir filtering out names
>> >
>> >
>> >
>> > - Original Message -
>> > > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>> > > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>> > > Cc: "Shyam Ranganathan" <srang...@redhat.com>, "Nithya Balachandran"
>> > > <nbala...@redhat.com>, "Gluster Devel"
>> > > <gluster-devel@gluster.org>
>> > > Sent: Friday, September 30, 2016 9:53:44 AM
>> > > Subject: Re: Dht readdir filtering out names
>> > >
>> > > On Fri, Sep 30, 2016 at 9:50 AM, Raghavendra Gowdappa <
>> rgowd...@redhat.com>
>> > > wrote:
>> > >
>> > > >
>> > > >
>> > > > - Original Message -
>> > > > > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>> > > > > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>> > > > > Cc: "Shyam Ranganathan" <srang...@redhat.com>, "Nithya
>> Balachandran" <
>> > > > nbala...@redhat.com>, "Gluster Devel"
>> > > > > <gluster-devel@gluster.org>
>> > > > > Sent: Friday, September 30, 2016 9:15:04 AM
>> > > > > Subject: Re: Dht readdir filtering out names
>> > > > >
>> > > > > On Fri, Sep 30, 2016 at 9:13 AM, Raghavendra Gowdappa <
>> > > > rgowd...@redhat.com>
>> > > > > wrote:
>> > > > >
>> > > > > > dht_readdirp_cbk has different behaviour for directories and
>> files.
>> > > > > >
>> > > > > > 1. If file, pick the dentry (passed from subvols as part of
>> readdirp
>> > > > > > response) if the it corresponds to data file.
>> > > > > > 2. If directory pick the dentry if readdirp response is from
>> > > > hashed-subvol.
>> > > > > >
>> > > > > > In all other cases, the dentry is skipped and not passed to
>> higher
>> > > > > > layers/application. To elaborate, the dentries which are
>> ignored are:
>> > > > > > 1. dentries corresponding to linkto files.
>> > > > > > 2. dentries from non-hashed subvols corresponding to
>> directories.
>> > > > > >
>> > > > > > Since the behaviour is different for different filesystem
>> objects,
>> > > > > > dht
>> > > > > > needs ia_type to choose its behaviour.
>> > > > > >
>> > > > > > - Original Message -
>> > > > > > > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>> > > > > > > To: "Shyam Ranganathan" <srang...@redhat.com>, "Raghavendra
>> > > > Gowdappa" <
>> > > > > > rgowd...@redhat.com>, "Nithya Balachandran"
>> > > > > > > <nbala...@redhat.com>
>> > > > > > > Cc: "Gluster Devel" <gluster-devel@gluster.org>
>> > > > > > > Sent: Friday, September 30, 2016 8:39:28 AM
>> > > > > > > Subject: Dht readdir filtering out names
>> > > > > > >
>> > > > > > > hi,
>> > > > > > >In dht_readdirp_cbk() there is a check about skipping
>> files
>> > > > > > without
>> > > > >

Re: [Gluster-devel] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Pranith Kumar Karampuri
On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri  wrote:

> Hi,
>
> With http://review.gluster.org/#/c/15051/, performance/client-io-threads
> is enabled by default. But with that we see regression caused to
> nfs-ganesha application trying to un/re-export any glusterfs volume. This
> shall be the same case with any gfapi application using glfs_fini().
>
> More details and the RCA can be found at [1].
>
> In short, iot-worker threads spawned  (when the above option is enabled)
> are not cleaned up as part of io-threads-xlator->fini() and those threads
> could end up accessing invalid/freed memory post glfs_fini().
>
> The actual fix is to address io-threads-xlator->fini() to cleanup those
> threads before exiting. But since those threads' IDs are currently not
> stored, the fix could be very intricate and take a while. So till then to
> avoid all existing applications crash, I suggest to keep this option
> disabled by default and update this known_issue with enabling this option
> in the release-notes.
>
> I sent a patch to revert the commit - http://review.gluster.org/#/c/15616/
> [2]
>

Good catch! I think the correct fix would be to make sure all threads die
as part of PARENT_DOWN then?
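
For context, the kind of change being discussed would follow the usual
worker-pool pattern sketched below: thread IDs are saved when the workers
are spawned so that a teardown path (fini()/PARENT_DOWN in the real xlator)
can stop and join them before memory is freed. This is a generic
illustration with made-up names, not the io-threads implementation.

    #include <pthread.h>
    #include <stdatomic.h>

    #define MAX_WORKERS 16

    struct pool {
        pthread_t   ids[MAX_WORKERS]; /* saved IDs make a clean join possible */
        int         count;
        atomic_bool stop;             /* set on shutdown, checked by workers */
    };

    static void *worker(void *arg)
    {
        struct pool *p = arg;
        while (!atomic_load(&p->stop)) {
            /* pick up and process queued work; a real pool would wait on a
             * condition variable here instead of spinning */
        }
        return NULL;
    }

    int pool_start(struct pool *p, int n)
    {
        p->count = 0;
        atomic_store(&p->stop, false);
        for (int i = 0; i < n && i < MAX_WORKERS; i++) {
            if (pthread_create(&p->ids[i], NULL, worker, p) != 0)
                return -1;
            p->count++;
        }
        return 0;
    }

    void pool_fini(struct pool *p)
    {
        atomic_store(&p->stop, true);
        for (int i = 0; i < p->count; i++)
            pthread_join(p->ids[i], NULL); /* impossible if the IDs were
                                              never stored anywhere */
        p->count = 0;
    }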


> Comments/Suggestions are welcome.
>
> Thanks,
> Soumya
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1380619#c11
> [2] http://review.gluster.org/#/c/15616/
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Query regards to heal xattr heal in dht

2016-09-15 Thread Pranith Kumar Karampuri
On Thu, Sep 15, 2016 at 12:02 PM, Nithya Balachandran <nbala...@redhat.com>
wrote:

>
>
> On 8 September 2016 at 12:02, Mohit Agrawal <moagr...@redhat.com> wrote:
>
>> Hi All,
>>
>>    I have another solution to heal user xattrs, but before implementing it
>> I would like to discuss it with you.
>>
>>    Can I call the function dht_dir_xattr_heal (internally it calls
>> syncop_setxattr) to heal xattrs at the end of dht_getxattr_cbk,
>>    after making sure we have a valid xattr?
>>    In dht_dir_xattr_heal it would either blindly copy all user xattrs to
>> all subvolumes, or I can compare each subvol's xattrs with the valid xattrs
>> and call syncop_setxattr only if there is a mismatch; otherwise there is
>> no need to call syncop_setxattr.
>>
>
>
> This can be problematic if a particular xattr is being removed - it might
> still exist on some subvols. IIUC, the heal would go and reset it again?
>
> One option is to use the hash subvol for the dir as the source - so
> perform xattr op on hashed subvol first and on the others only if it
> succeeds on the hashed. This does have the problem of being unable to set
> xattrs if the hashed subvol is unavailable. This might not be such a big
> deal in case of distributed replicate or distribute disperse volumes but
> will affect pure distribute. However, this way we can at least be
> reasonably certain of the correctness (leaving rebalance out of the
> picture).
>

Yes, this seems fine.
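
To make the suggested ordering concrete, a rough sketch is below.
set_xattr_on_subvol() is a made-up stand-in for the real per-subvolume
syncop call; only the control flow matters: the hashed subvol is treated as
the source of truth and the others are touched only if it succeeds.

    #include <stdio.h>

    /* Stub for illustration; returns 0 on success, -1 on failure. */
    static int set_xattr_on_subvol(int subvol, const char *key,
                                   const char *value)
    {
        printf("setxattr %s=%s on subvol %d\n", key, value, subvol);
        return 0;
    }

    int dir_setxattr_all(int hashed, const int *subvols, int n,
                         const char *key, const char *value)
    {
        /* 1. Hashed subvolume first; if it fails, stop here so a removed
         *    xattr is never resurrected from the other subvols. */
        if (set_xattr_on_subvol(hashed, key, value) != 0)
            return -1;

        /* 2. Only after the hashed subvol succeeded, update the rest;
         *    failures here become heal candidates, not hard errors. */
        for (int i = 0; i < n; i++) {
            if (subvols[i] == hashed)
                continue;
            (void)set_xattr_on_subvol(subvols[i], key, value);
        }
        return 0;
    }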


>
>
>
>>
>>Let me know if this approach is suitable.
>>
>>
>>
>> Regards
>> Mohit Agrawal
>>
>> On Wed, Sep 7, 2016 at 10:27 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Wed, Sep 7, 2016 at 9:46 PM, Mohit Agrawal <moagr...@redhat.com>
>>> wrote:
>>>
>>>> Hi Pranith,
>>>>
>>>>
>>>> In the current approach I am getting the list of xattrs from the first
>>>> up subvolume and updating the user attributes from that xattr list on
>>>> all other subvolumes.
>>>>
>>>> I have assumed the first up subvol is the source and the rest of them
>>>> are sinks, as we are doing the same in dht_dir_attr_heal.
>>>>
>>>
>>> I think first up subvol is different for different mounts as per my
>>> understanding, I could be wrong.
>>>
>>>
>>>>
>>>> Regards
>>>> Mohit Agrawal
>>>>
>>>> On Wed, Sep 7, 2016 at 9:34 PM, Pranith Kumar Karampuri <
>>>> pkara...@redhat.com> wrote:
>>>>
>>>>> hi Mohit,
>>>>>How does dht find which subvolume has the correct list of
>>>>> xattrs? i.e. how does it determine which subvolume is source and which is
>>>>> sink?
>>>>>
>>>>> On Wed, Sep 7, 2016 at 2:35 PM, Mohit Agrawal <moagr...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>   I am trying to find out solution of one problem in dht specific to
>>>>>> user xattr healing.
>>>>>>   I tried to correct it in a same way as we are doing for healing dir
>>>>>> attribute but i feel it is not best solution.
>>>>>>
>>>>>>   To find a right way to heal xattr i want to discuss with you if
>>>>>> anyone does have better solution to correct it.
>>>>>>
>>>>>>   Problem:
>>>>>>In a distributed volume environment custom extended attribute
>>>>>> value for a directory does not display correct value after stop/start the
>>>>>> brick. If any extended attribute value is set for a directory after stop
>>>>>> the brick the attribute value is not updated on brick after start the 
>>>>>> brick.
>>>>>>
>>>>>>   Current approach:
>>>>>> 1) function set_user_xattr to store user extended attribute in
>>>>>> dictionary
>>>>>> 2) function dht_dir_xattr_heal call syncop_setxattr to update the
>>>>>> attribute on all volume
>>>>>> 3) Call the function (dht_dir_xattr_heal) for every directory
>>>>>> lookup in dht_lookup_revalidate_cbk
>>>>>>
>>>>>>   Psuedocode for function dht_dir_xatt_heal is like below
>>>>>>
>>>>>>1) First it will fetch at

Re: [Gluster-devel] Multiplexing - good news, bad news, and a plea for help

2016-09-20 Thread Pranith Kumar Karampuri
Jeff,
    If I understood brick-multiplexing correctly, add-brick/remove-brick
add/remove graphs, right? I don't think the graph-cleanup is in good shape,
i.e. it could lead to memory leaks etc. Did you get a chance to think about
it?

On Mon, Sep 19, 2016 at 6:56 PM, Jeff Darcy  wrote:

> I have brick multiplexing[1] functional to the point that it passes all
> basic AFR, EC, and quota tests.  There are still some issues with tiering,
> and I wouldn't consider snapshots functional at all, but it seemed like a
> good point to see how well it works.  I ran some *very simple* tests with
> 20 volumes, each 2x distribute on top of 2x replicate.
>
> First, the good news: it worked!  Getting 80 bricks to come up in the same
> process, and then run I/O correctly across all of those, is pretty cool.
> Also, memory consumption is *way* down.  RSS size went from 1.1GB before
> (total across 80 processes) to about 400MB (one process) with
> multiplexing.  Each process seems to consume approximately 8MB globally
> plus 5MB per brick, so (8+5)*80 = 1040 vs. 8+(5*80) = 408.  Just
> considering the amount of memory, this means we could support about three
> times as many bricks as before.  When memory *contention* is considered,
> the difference is likely to be even greater.
>
> Bad news: some of our code doesn't scale very well in terms of CPU use.
> To test performance I ran a test which would create 20,000 files across all
> 20 volumes, then write and delete them, all using 100 client threads.  This
> is similar to what smallfile does, but deliberately constructed to use a
> minimum of disk space - at any given, only one file per thread (maximum)
> actually has 4KB worth of data in it.  This allows me to run it against
> SSDs or even ramdisks even with high brick counts, to factor out slow disks
> in a study of CPU/memory issues.  Here are some results and observations.
>
> * On my first run, the multiplexed version of the test took 77% longer to
> run than the non-multiplexed version (5:42 vs. 3:13).  And that was after
> I'd done some hacking to use 16 epoll threads.  There's something a bit
> broken about trying to set that option normally, so that the value you set
> doesn't actually make it to the place that tries to spawn the threads.
> Bumping this up further to 32 threads didn't seem to help.
>
> * A little profiling showed me that we're spending almost all of our time
> in pthread_spin_lock.  I disabled the code to use spinlocks instead of
> regular mutexes, which immediately improved performance and also reduced
> CPU time by almost 50%.
>
> * The next round of profiling showed that a lot of the locking is in
> mem-pool code, and a lot of that in turn is from dictionary code.  Changing
> the dict code to use malloc/free instead of mem_get/mem_put gave another
> noticeable boost.
>
> At this point run time was down to 4:50, which is 20% better than where I
> started but still far short of non-multiplexed performance.  I can drive
> that down still further by converting more things to use malloc/free.
> There seems to be a significant opportunity here to improve performance -
> even without multiplexing - by taking a more careful look at our
> memory-management strategies:
>
> * Tune the mem-pool implementation to scale better with hundreds of
> threads.
>
> * Use mem-pools more selectively, or even abandon them altogether.
>
> * Try a different memory allocator such as jemalloc.
>
> I'd certainly appreciate some help/collaboration in studying these options
> further.  It's a great opportunity to make a large impact on overall
> performance without a lot of code or specialized knowledge.  Even so,
> however, I don't think memory management is our only internal scalability
> problem.  There must be something else limiting parallelism, and quite
> severely at that.  My first guess is io-threads, so I'll be looking into
> that first, but if anybody else has any ideas please let me know.  There's
> no *good* reason why running many bricks in one process should be slower
> than running them in separate processes.  If it remains slower, then the
> limit on the number of bricks and volumes we can support will remain
> unreasonably low.  Also, the problems I'm seeing here probably don't *only*
> affect multiplexing.  Excessive memory/CPU use and poor parallelism are
> issues that we kind of need to address anyway, so if anybody has any ideas
> please let me know.
>
>
>
> [1] http://review.gluster.org/#/c/14763/
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-20 Thread Pranith Kumar Karampuri
On Mon, Sep 19, 2016 at 10:13 AM, Niels de Vos  wrote:

> On Tue, Sep 13, 2016 at 12:06:00PM -0400, Luis Pabón wrote:
> > Very good points.  Thanks Prasanna for putting this together.  I agree
> with
> > your comments in that Heketi is the high level abstraction API and it
> should have
> > an API similar of what is described by Prasanna.
> >
> > I definitely do not think any File Api should be available in Heketi,
> > because that is an implementation of the Block API.  The Heketi API
> should
> > be similar to something like OpenStack Cinder.
> >
> > I think that the actual management of the Volumes used for Block storage
> > and the files in them should be all managed by Heketi.  How they are
> > actually created is still to be determined, but we could have Heketi
> > create them, or have helper programs do that.
>
> Maybe a tool like qemu-img? If whatever iscsi service understand the
> format (at the very least 'raw'), you could get functionality like
> snapshots pretty simple.
>

Prasanna, Poornima and I just discussed about this. Prasanna is doing this
experiment to see if we can use qcow from tcmu-runner to get this piece
working. If yes, we definitely will get snapshots for free :-). Prasanna
will confirm it based on his experiments.


>
> Niels
>
>
> > We also need to document the exact workflow to enable a file in
> > a Gluster volume to be exposed as a block device.  This will help
> > determine where the creation of the file could take place.
> >
> > We can capture our decisions from these discussions in the
> > following page:
> >
> > https://github.com/heketi/heketi/wiki/Proposed-Changes
> >
> > - Luis
> >
> >
> > - Original Message -
> > From: "Humble Chirammal" 
> > To: "Raghavendra Talur" 
> > Cc: "Prasanna Kalever" , "gluster-devel" <
> gluster-devel@gluster.org>, "Stephen Watt" , "Luis
> Pabon" , "Michael Adam" ,
> "Ramakrishna Yekulla" , "Mohamed Ashiq Liyazudeen" <
> mliya...@redhat.com>
> > Sent: Tuesday, September 13, 2016 2:23:39 AM
> > Subject: Re: [Gluster-devel] [Heketi] Block store related API design
> discussion
> >
> >
> >
> >
> >
> > - Original Message -
> > | From: "Raghavendra Talur" 
> > | To: "Prasanna Kalever" 
> > | Cc: "gluster-devel" , "Stephen Watt" <
> sw...@redhat.com>, "Luis Pabon" ,
> > | "Michael Adam" , "Humble Chirammal" <
> hchir...@redhat.com>, "Ramakrishna Yekulla"
> > | , "Mohamed Ashiq Liyazudeen" 
> > | Sent: Tuesday, September 13, 2016 11:08:44 AM
> > | Subject: Re: [Gluster-devel] [Heketi] Block store related API design
> discussion
> > |
> > | On Mon, Sep 12, 2016 at 11:30 PM, Prasanna Kalever <
> pkale...@redhat.com>
> > | wrote:
> > |
> > | > Hi all,
> > | >
> > | > This mail is open for discussion on gluster block store integration
> with
> > | > heketi and its REST API interface design constraints.
> > | >
> > | >
> > | >  ___ Volume Request ...
> > | > |
> > | > |
> > | > PVC claim -> Heketi --->|
> > | > |
> > | > |
> > | > |
> > | > |
> > | > |__ BlockCreate
> > | > |   |
> > | > |   |__ BlockInfo
> > | > |   |
> > | > |___ Block Request (APIS)-> |__ BlockResize
> > | > |
> > | > |__ BlockList
> > | > |
> > | > |__ BlockDelete
> > | >
> > | > Heketi will have block API and volume API, when user submit a
> Persistent
> > | > volume claim, Kubernetes provisioner based on the storage class(from
> PVC)
> > | > talks to heketi for storage, heketi intern calls block or volume
> API's
> > | > based on request.
> > | >
> > |
> > | This is probably wrong. It won't be Heketi calling block or volume
> APIs. It
> > | would be Kubernetes calling block or volume API *of* Heketi.
> > |
> > |
> > | > With my limited understanding, heketi currently creates clusters from
> > | > provided nodes, creates volumes and handover them to the user.
> > | > For block related API's, it has to deal with files right ?
> > | >
> > | > Here is how block API's look like in short-
> > | > Create: heketi has to create file in the volume and export it as a
> iscsi
> > | > target device and hand it over to user.
> > | > Info: show block stores information across all the clusters,
> 

Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-20 Thread Pranith Kumar Karampuri
On Mon, Sep 19, 2016 at 9:22 PM, Niels de Vos  wrote:

> On Mon, Sep 19, 2016 at 10:31:11AM -0400, Luis Pabón wrote:
> > Using qemu is interesting, but the I/O should be using the IO path of
> QEMU block API.  If not,
> > TCMU would not know how to work with QEMU dynamic QCOW2 files.
> >
> > Now, if TCMU already has this, then that would be great!
>
> It has a qcow2 header, maybe you guys are lucky!
>   https://github.com/open-iscsi/tcmu-runner/blob/master/qcow2.h


Sent the earlier mail before seeing this mail :-). So yes, what we
discussed is to find out whether this qemu code in tcmu can internally use
gfapi for doing the operations.


>
>
> Niels
>
> >
> > - Luis
> >
> > - Original Message -
> > From: "Prasanna Kalever" 
> > To: "Niels de Vos" 
> > Cc: "Luis Pabón" , "Stephen Watt" ,
> "gluster-devel" , "Ramakrishna Yekulla" <
> rre...@redhat.com>, "Humble Chirammal" 
> > Sent: Monday, September 19, 2016 7:13:36 AM
> > Subject: Re: [Gluster-devel] [Heketi] Block store related API design
> discussion
> >
> > On Mon, Sep 19, 2016 at 4:09 PM, Niels de Vos  wrote:
> > >
> > > On Mon, Sep 19, 2016 at 03:34:29PM +0530, Prasanna Kalever wrote:
> > > > On Mon, Sep 19, 2016 at 10:13 AM, Niels de Vos 
> wrote:
> > > > > On Tue, Sep 13, 2016 at 12:06:00PM -0400, Luis Pabón wrote:
> > > > >> Very good points.  Thanks Prasanna for putting this together.  I
> agree with
> > > > >> your comments in that Heketi is the high level abstraction API
> and it should have
> > > > >> an API similar of what is described by Prasanna.
> > > > >>
> > > > >> I definitely do not think any File Api should be available in
> Heketi,
> > > > >> because that is an implementation of the Block API.  The Heketi
> API should
> > > > >> be similar to something like OpenStack Cinder.
> > > > >>
> > > > >> I think that the actual management of the Volumes used for Block
> storage
> > > > >> and the files in them should be all managed by Heketi.  How they
> are
> > > > >> actually created is still to be determined, but we could have
> Heketi
> > > > >> create them, or have helper programs do that.
> > > > >
> > > > > Maybe a tool like qemu-img? If whatever iscsi service understand
> the
> > > > > format (at the very least 'raw'), you could get functionality like
> > > > > snapshots pretty simple.
> > > >
> > > > Niels,
> > > >
> > > > This is brilliant and subset of the Idea falls in one among my
> > > > thoughts, only concern is about building dependencies of qemu with
> > > > Heketi.
> > > > But at an advantage of easy and cool snapshots solution.
> > >
> > > And well tested as I understand that oVirt is moving to use qemu-img as
> > > well. Other tools are able to use the qcow2 format, maybe the iscsi
> > > servce that gets used does so too.
> > >
> > > Has there already been a decision on what Heketi will configure as
> iscsi
> > > service? I am aware of the tgt [1] and LIO/TCMU [2] projects.
> >
> > Niels,
> >
> > yes we will be using TCMU (Kernel Module) and TCMU-runner (user space
> > service) to expose file in Gluster volume as an iSCSI target.
> > more at [1], [2] & [3]
> >
> > [1] https://pkalever.wordpress.com/2016/06/23/gluster-
> solution-for-non-shared-persistent-storage-in-docker-container/
> > [2] https://pkalever.wordpress.com/2016/06/29/non-shared-
> persistent-gluster-storage-with-kubernetes/
> > [3] https://pkalever.wordpress.com/2016/08/16/read-write-
> once-persistent-storage-for-openshift-origin-using-gluster/
> >
> > --
> > Prasanna
> >
> > >
> > > Niels
> > >
> > > 1. http://stgt.sourceforge.net/
> > > 2. https://github.com/open-iscsi/tcmu-runner
> > >http://blog.gluster.org/2016/04/using-lio-with-gluster/
> > >
> > > >
> > > > --
> > > > Prasanna
> > > >
> > > > >
> > > > > Niels
> > > > >
> > > > >
> > > > >> We also need to document the exact workflow to enable a file in
> > > > >> a Gluster volume to be exposed as a block device.  This will help
> > > > >> determine where the creation of the file could take place.
> > > > >>
> > > > >> We can capture our decisions from these discussions in the
> > > > >> following page:
> > > > >>
> > > > >> https://github.com/heketi/heketi/wiki/Proposed-Changes
> > > > >>
> > > > >> - Luis
> > > > >>
> > > > >>
> > > > >> - Original Message -
> > > > >> From: "Humble Chirammal" 
> > > > >> To: "Raghavendra Talur" 
> > > > >> Cc: "Prasanna Kalever" , "gluster-devel" <
> gluster-devel@gluster.org>, "Stephen Watt" , "Luis
> Pabon" , "Michael Adam" ,
> "Ramakrishna Yekulla" , "Mohamed Ashiq Liyazudeen" <
> mliya...@redhat.com>
> > > > >> Sent: Tuesday, September 13, 2016 2:23:39 AM
> > > > >> Subject: Re: [Gluster-devel] 

Re: [Gluster-devel] Fixing setfsuid/gid problems in posix xlator

2016-09-23 Thread Pranith Kumar Karampuri
On Fri, Sep 23, 2016 at 12:30 PM, Soumya Koduri <skod...@redhat.com> wrote:

>
>
> On 09/23/2016 08:28 AM, Pranith Kumar Karampuri wrote:
>
>> hi,
>>Jiffin found an interesting problem in posix xlator where we have
>> never been using setfsuid/gid (http://review.gluster.org/#/c/15545/),
>> what I am seeing regressions after this is, if the files are created
>> using non-root user then the file creation fails because that user
>> doesn't have permissions to create the gfid-link. So it seems like the
>> correct way forward for this patch is to write wrappers around
>> sys_ to do setfsuid/gid do the actual operation requested and
>> then set it back to old uid/gid and then do the internal operations. I
>> am planning to write posix_sys_() to do the same, may be a
>> macro?.
>>
>
> Why not the other way around? As in, can we switch to superuser when
> required, so that we know which internal operations need root access and
> avoid misusing it.
>

The thread should carry frame->root's uid/gid only at the time of executing
the actual syscall (open/mkdir/creat etc.) in the posix xlator; the rest of
the time it shouldn't. So I am doing it this way.
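
A minimal sketch of that behaviour, assuming Linux setfsuid()/setfsgid()
semantics, is below. The function name and layout are made up and this is
not the actual posix xlator wrapper; it only shows switching the filesystem
uid/gid around the one syscall and restoring it before any internal
(.glusterfs gfid-link style) operations.

    #define _GNU_SOURCE
    #include <sys/fsuid.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>

    int create_as_user(const char *path, mode_t mode, uid_t uid, gid_t gid)
    {
        /* setfsuid/setfsgid return the previous value; keep it for restore */
        int old_uid = setfsuid(uid);
        int old_gid = setfsgid(gid);

        int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, mode);

        /* restore before doing any internal operations */
        setfsuid(old_uid);
        setfsgid(old_gid);

        return fd;  /* -1 with errno set if the user lacked permission */
    }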


>
> Thanks,
> Soumya
>
> I need inputs from you guys to let me know if I am on the right path
>> and if you see any issues with this approach.
>>
>> --
>> Pranith
>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Fixing setfsuid/gid problems in posix xlator

2016-09-22 Thread Pranith Kumar Karampuri
hi,
   Jiffin found an interesting problem in the posix xlator where we have
never been using setfsuid/gid (http://review.gluster.org/#/c/15545/). The
regression I am seeing after fixing this is that if files are created by a
non-root user, the file creation fails because that user doesn't have
permissions to create the gfid-link. So it seems like the correct way
forward for this patch is to write wrappers around sys_ that do
setfsuid/gid, perform the actual operation requested, set the uid/gid back
to the old values, and then do the internal operations. I am planning to
write posix_sys_() to do the same, maybe as a macro?
I need inputs from you guys to let me know if I am on the right path
and if you see any issues with this approach.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Fixing setfsuid/gid problems in posix xlator

2016-09-23 Thread Pranith Kumar Karampuri
On Fri, Sep 23, 2016 at 6:12 PM, Jeff Darcy  wrote:

> > Jiffin found an interesting problem in posix xlator where we have never
> been
> > using setfsuid/gid ( http://review.gluster.org/#/c/15545/ ), what I am
> > seeing regressions after this is, if the files are created using non-root
> > user then the file creation fails because that user doesn't have
> permissions
> > to create the gfid-link. So it seems like the correct way forward for
> this
> > patch is to write wrappers around sys_ to do setfsuid/gid do the
> > actual operation requested and then set it back to old uid/gid and then
> do
> > the internal operations. I am planning to write posix_sys_() to
> do
> > the same, may be a macro?
>
> Kind of an aside, but I'd prefer to see a lot fewer macros in our code.
> They're not type-safe, and multi-line macros often mess up line numbers for
> debugging or error messages.  IMO it's better to use functions whenever
> possible, and usually to let the compiler worry about how/when to inline.
>
> > I need inputs from you guys to let me know if I am on the right path and
> if
> > you see any issues with this approach.
>
> I think there's a bit of an interface problem here.  The sys_xxx wrappers
> don't have arguments that point to the current frame, so how would they get
> the correct uid/gid?  We could add arguments to each function, but then
> we'd have to modify every call.  This includes internal calls which don't
> have a frame to pass, so I guess they'd have to pass NULL.  Alternatively,
> we could create a parallel set of functions with frame pointers.  Contrary
> to what I just said above, this might be a case where macros make sense:
>
>int
>sys_writev_fp (call_frame_t *frame, int fd, void *buf, size_t len)
>{
>   if (frame) { setfsuid(...) ... }
>   int ret = writev (fd, buf, len);
>   if (frame) { setfsuid(...) ... }
>   return ret;
>}
>#define sys_writev(fd,buf,len) sys_writev_fp (NULL, fd, buf, len)
>
> That way existing callers don't have to change, but posix can use the
> extended versions to get the right setfsuid behavior.
>
>
After trying to make these modifications to test things out, I am now
inclined to remove setfsuid/gid altogether and depend on posix-acl for
permission checks. The wrappers seem too cumbersome because the operations
more often than not happen on files inside .glusterfs, and non-root
users/groups don't have permissions at all to access files in that
directory.


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-26 Thread Pranith Kumar Karampuri
Prasanna, Prashant,
 Could you add a short description of the features you are working
on for 3.9 as well to the list?

On Fri, Aug 26, 2016 at 9:39 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Fri, Aug 26, 2016 at 9:38 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi,
>>   Now that we are almost near the feature freeze date (31st of Aug),
>> want to get a sense if any of the status of the features.
>>
>
> I meant "want to get a sense of the status of the features"
>
>
>>
>> Please respond with:
>> 1) Feature already merged
>> 2) Undergoing review will make it by 31st Aug
>> 3) Undergoing review, but may not make it by 31st Aug
>> 4) Feature won't make it for 3.9.
>>
>> I added the features that were not planned(i.e. not in the 3.9 roadmap
>> page) but made it to the release and not planned but may make it to release
>> at the end of this mail.
>> If you added a feature on master that will be released as part of 3.9.0
>> but forgot to add it to roadmap page, please let me know I will add it.
>>
>> Here are the features planned as per the roadmap:
>> 1) Throttling
>> Feature owner: Ravishankar
>>
>> 2) Trash improvements
>> Feature owners: Anoop, Jiffin
>>
>> 3) Kerberos for Gluster protocols:
>> Feature owners: Niels, Csaba
>>
>> 4) SELinux on gluster volumes:
>> Feature owners: Niels, Manikandan
>>
>> 5) Native sub-directory mounts:
>> Feature owners: Kaushal, Pranith
>>
>> 6) RichACL support for GlusterFS:
>> Feature owners: Rajesh Joseph
>>
>> 7) Sharemodes/Share reservations:
>> Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri, Rajesh
>> Joseph, Anoop C S
>>
>> 8) Integrate with external resource management software
>> Feature owners: Kaleb Keithley, Jose Rivera
>>
>> 9) Python Wrappers for Gluster CLI Commands
>> Feature owners: Aravinda VK
>>
>> 10) Package and ship libgfapi-python
>> Feature owners: Prashant Pai
>>
>> 11) Management REST APIs
>> Feature owners: Aravinda VK
>>
>> 12) Events APIs
>> Feature owners: Aravinda VK
>>
>> 13) CLI to get state representation of a cluster from the local glusterd
>> pov
>> Feature owners: Samikshan Bairagya
>>
>> 14) Posix-locks Reclaim support
>> Feature owners: Soumya Koduri
>>
>> 15) Deprecate striped volumes
>> Feature owners: Vijay Bellur, Niels de Vos
>>
>> 16) Improvements in Gluster NFS-Ganesha integration
>> Feature owners: Jiffin Tony Thottan, Soumya Koduri
>>
>> *The following need to be added to the roadmap:*
>>
>> Features that made it to master already but were not planned:
>> 1) Multi threaded self-heal in EC
>> Feature owner: Pranith (Did this because serkan asked for it. He has 9PB
>> volume, self-healing takes a long time :-/)
>>
>> 2) Lock revocation (Facebook patch)
>> Feature owner: Richard Wareing
>>
>> Features that look like will make it to 3.9.0:
>> 1) Hardware extension support for EC
>> Feature owner: Xavi
>>
>> 2) Reset brick support for replica volumes:
>> Feature owner: Anuradha
>>
>> 3) Md-cache perf improvements in smb:
>> Feature owner: Poornima
>>
>> --
>> Pranith
>>
>
>
>
> --
> Pranith
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] 3.9. feature freeze status check

2016-08-26 Thread Pranith Kumar Karampuri
hi,
  Now that we are almost near the feature freeze date (31st of Aug),
want to get a sense if any of the status of the features.

Please respond with:
1) Feature already merged
2) Undergoing review will make it by 31st Aug
3) Undergoing review, but may not make it by 31st Aug
4) Feature won't make it for 3.9.

At the end of this mail I added the features that were not planned (i.e. not
in the 3.9 roadmap page) but made it to the release, and those that were not
planned but may make it to the release.
If you added a feature on master that will be released as part of 3.9.0 but
forgot to add it to the roadmap page, please let me know and I will add it.

Here are the features planned as per the roadmap:
1) Throttling
Feature owner: Ravishankar

2) Trash improvements
Feature owners: Anoop, Jiffin

3) Kerberos for Gluster protocols:
Feature owners: Niels, Csaba

4) SELinux on gluster volumes:
Feature owners: Niels, Manikandan

5) Native sub-directory mounts:
Feature owners: Kaushal, Pranith

6) RichACL support for GlusterFS:
Feature owners: Rajesh Joseph

7) Sharemodes/Share reservations:
Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri, Rajesh
Joseph, Anoop C S

8) Integrate with external resource management software
Feature owners: Kaleb Keithley, Jose Rivera

9) Python Wrappers for Gluster CLI Commands
Feature owners: Aravinda VK

10) Package and ship libgfapi-python
Feature owners: Prashant Pai

11) Management REST APIs
Feature owners: Aravinda VK

12) Events APIs
Feature owners: Aravinda VK

13) CLI to get state representation of a cluster from the local glusterd pov
Feature owners: Samikshan Bairagya

14) Posix-locks Reclaim support
Feature owners: Soumya Koduri

15) Deprecate striped volumes
Feature owners: Vijay Bellur, Niels de Vos

16) Improvements in Gluster NFS-Ganesha integration
Feature owners: Jiffin Tony Thottan, Soumya Koduri

*The following need to be added to the roadmap:*

Features that made it to master already but were not planned:
1) Multi threaded self-heal in EC
Feature owner: Pranith (Did this because serkan asked for it. He has 9PB
volume, self-healing takes a long time :-/)

2) Lock revocation (Facebook patch)
Feature owner: Richard Wareing

Features that look like will make it to 3.9.0:
1) Hardware extension support for EC
Feature owner: Xavi

2) Reset brick support for replica volumes:
Feature owner: Anuradha

3) Md-cache perf improvements in smb:
Feature owner: Poornima

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-26 Thread Pranith Kumar Karampuri
On Fri, Aug 26, 2016 at 9:38 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> hi,
>   Now that we are almost near the feature freeze date (31st of Aug),
> want to get a sense if any of the status of the features.
>

I meant "want to get a sense of the status of the features"


>
> Please respond with:
> 1) Feature already merged
> 2) Undergoing review will make it by 31st Aug
> 3) Undergoing review, but may not make it by 31st Aug
> 4) Feature won't make it for 3.9.
>
> I added the features that were not planned(i.e. not in the 3.9 roadmap
> page) but made it to the release and not planned but may make it to release
> at the end of this mail.
> If you added a feature on master that will be released as part of 3.9.0
> but forgot to add it to roadmap page, please let me know I will add it.
>
> Here are the features planned as per the roadmap:
> 1) Throttling
> Feature owner: Ravishankar
>
> 2) Trash improvements
> Feature owners: Anoop, Jiffin
>
> 3) Kerberos for Gluster protocols:
> Feature owners: Niels, Csaba
>
> 4) SELinux on gluster volumes:
> Feature owners: Niels, Manikandan
>
> 5) Native sub-directory mounts:
> Feature owners: Kaushal, Pranith
>
> 6) RichACL support for GlusterFS:
> Feature owners: Rajesh Joseph
>
> 7) Sharemodes/Share reservations:
> Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri, Rajesh
> Joseph, Anoop C S
>
> 8) Integrate with external resource management software
> Feature owners: Kaleb Keithley, Jose Rivera
>
> 9) Python Wrappers for Gluster CLI Commands
> Feature owners: Aravinda VK
>
> 10) Package and ship libgfapi-python
> Feature owners: Prashant Pai
>
> 11) Management REST APIs
> Feature owners: Aravinda VK
>
> 12) Events APIs
> Feature owners: Aravinda VK
>
> 13) CLI to get state representation of a cluster from the local glusterd
> pov
> Feature owners: Samikshan Bairagya
>
> 14) Posix-locks Reclaim support
> Feature owners: Soumya Koduri
>
> 15) Deprecate striped volumes
> Feature owners: Vijay Bellur, Niels de Vos
>
> 16) Improvements in Gluster NFS-Ganesha integration
> Feature owners: Jiffin Tony Thottan, Soumya Koduri
>
> *The following need to be added to the roadmap:*
>
> Features that made it to master already but were not planned:
> 1) Multi threaded self-heal in EC
> Feature owner: Pranith (Did this because serkan asked for it. He has 9PB
> volume, self-healing takes a long time :-/)
>
> 2) Lock revocation (Facebook patch)
> Feature owner: Richard Wareing
>
> Features that look like will make it to 3.9.0:
> 1) Hardware extension support for EC
> Feature owner: Xavi
>
> 2) Reset brick support for replica volumes:
> Feature owner: Anuradha
>
> 3) Md-cache perf improvements in smb:
> Feature owner: Poornima
>
> --
> Pranith
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Events API: Adding support for Client Events

2016-08-23 Thread Pranith Kumar Karampuri
On Tue, Aug 23, 2016 at 9:27 PM, Aravinda <avish...@redhat.com> wrote:

> Today I discussed about the topic with Rajesh, Avra and Kotresh. Summary
> as below
>
> - Instead of exposing eventsd to external world why not expose a Glusterd
> RPC for gf_event, Since Glusterd already has logic for backup volfile
> server.
> - Gluster Clients to Glusterd using RPC, Glusterd will send message to
> local eventsd.
>

> Any suggestions for this approach?
>

If I remember correctly this is something we considered before we finalized
on exposing eventsd. I think the reason was that this approach takes two
hops which we didn't like in the discussion at the time. Did any other
parameter change for reconsidering this approach?


>
> regards
> Aravinda
>
>
> On Thursday 04 August 2016 11:04 AM, Aravinda wrote:
>
>>
>> regards
>> Aravinda
>>
>> On 08/03/2016 09:19 PM, Vijay Bellur wrote:
>>
>>> On 08/02/2016 11:24 AM, Pranith Kumar Karampuri wrote:
>>>
>>>>
>>>>
>>>> On Tue, Aug 2, 2016 at 8:21 PM, Vijay Bellur <vbel...@redhat.com
>>>> <mailto:vbel...@redhat.com>> wrote:
>>>>
>>>> On 08/02/2016 07:27 AM, Aravinda wrote:
>>>>
>>>> Hi,
>>>>
>>>> As many of you aware, Gluster Eventing feature is available in
>>>> Master.
>>>> To add support to listen to the Events from GlusterFS Clients
>>>> following
>>>> changes are identified
>>>>
>>>> - Change in Eventsd to listen to tcp socket instead of Unix
>>>> domain
>>>> socket. This enables Client to send message to Eventsd running
>>>> in
>>>> Storage node.
>>>> - On Client connection, share Port and Token details with Xdata
>>>> - Client gf_event will connect to this port and pushes the
>>>> event(Includes Token)
>>>> - Eventsd validates Token, publishes events only if Token is
>>>> valid.
>>>>
>>>>
>>>> Is there a lifetime/renewal associated with this token? Are there
>>>> more details on how token management is being done? Sorry if these
>>>> are repeat questions as I might have missed something along the
>>>> review trail!
>>>>
>>>>
>>>> At least in the discussion it didn't seem like we needed any new tokens
>>>> once it is generated. Do you have any usecase?
>>>>
>>>>
>>> No specific usecase right now but I am interested in understanding more
>>> details about token lifecycle management. Are we planning to use the same
>>> token infrastructure described in Authentication section of [1]?
>>>
>> If we use the same token as in REST API then we can expire the tokens
>> easily without the overhead of maintaining the token state in node. If we
>> expire tokens then Clients have to get new tokens once expired. Let me know
>> if we already have any best practice with glusterd to client communication.
>>
>>>
>>> Thanks,
>>> Vijay
>>>
>>> [1] http://review.gluster.org/#/c/13214/6/under_review/managemen
>>> t_rest_api.md
>>>
>>
>>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-28 Thread Pranith Kumar Karampuri
On Sat, Aug 27, 2016 at 1:22 PM, Raghavendra Gowdappa <rgowd...@redhat.com>
wrote:

>
>
> - Original Message -----
> > From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > To: "Rajesh Joseph" <rjos...@redhat.com>, "Manikandan Selvaganesh" <
> mselv...@redhat.com>, "Csaba Henk"
> > <ch...@redhat.com>, "Niels de Vos" <nde...@redhat.com>, "Jiffin
> Thottan" <jthot...@redhat.com>, "Aravinda
> > Vishwanathapura Krishna Murthy" <avish...@redhat.com>, "Anoop Chirayath
> Manjiyil Sajan" <achir...@redhat.com>,
> > "Ravishankar Narayanankutty" <ravishan...@redhat.com>, "Kaushal
> Madappa" <kmada...@redhat.com>, "Raghavendra Talur"
> > <rta...@redhat.com>, "Poornima Gurusiddaiah" <pguru...@redhat.com>,
> "Soumya Koduri" <skod...@redhat.com>, "Kaleb
> > Keithley" <kkeit...@redhat.com>, "Jose Rivera" <jriv...@redhat.com>,
> "Prashanth Pai" <p...@redhat.com>, "Samikshan
> > Bairagya" <sbair...@redhat.com>, "Vijay Bellur" <vbel...@redhat.com>
> > Cc: "Gluster Devel" <gluster-devel@gluster.org>
> > Sent: Friday, August 26, 2016 9:38:55 PM
> > Subject: [Gluster-devel] 3.9. feature freeze status check
> >
> > hi,
> > Now that we are almost near the feature freeze date (31st of Aug), want
> to
> > get a sense if any of the status of the features.
> >
> > Please respond with:
> > 1) Feature already merged
> > 2) Undergoing review will make it by 31st Aug
> > 3) Undergoing review, but may not make it by 31st Aug
> > 4) Feature won't make it for 3.9.
> >
> > I added the features that were not planned(i.e. not in the 3.9 roadmap
> page)
> > but made it to the release and not planned but may make it to release at
> the
> > end of this mail.
> > If you added a feature on master that will be released as part of 3.9.0
> but
> > forgot to add it to roadmap page, please let me know I will add it.
> >
> > Here are the features planned as per the roadmap:
> > 1) Throttling
> > Feature owner: Ravishankar
> >
> > 2) Trash improvements
> > Feature owners: Anoop, Jiffin
> >
> > 3) Kerberos for Gluster protocols:
> > Feature owners: Niels, Csaba
> >
> > 4) SELinux on gluster volumes:
> > Feature owners: Niels, Manikandan
> >
> > 5) Native sub-directory mounts:
> > Feature owners: Kaushal, Pranith
> >
> > 6) RichACL support for GlusterFS:
> > Feature owners: Rajesh Joseph
> >
> > 7) Sharemodes/Share reservations:
> > Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri, Rajesh
> Joseph,
> > Anoop C S
> >
> > 8) Integrate with external resource management software
> > Feature owners: Kaleb Keithley, Jose Rivera
> >
> > 9) Python Wrappers for Gluster CLI Commands
> > Feature owners: Aravinda VK
> >
> > 10) Package and ship libgfapi-python
> > Feature owners: Prashant Pai
> >
> > 11) Management REST APIs
> > Feature owners: Aravinda VK
> >
> > 12) Events APIs
> > Feature owners: Aravinda VK
> >
> > 13) CLI to get state representation of a cluster from the local glusterd
> pov
> > Feature owners: Samikshan Bairagya
> >
> > 14) Posix-locks Reclaim support
> > Feature owners: Soumya Koduri
> >
> > 15) Deprecate striped volumes
> > Feature owners: Vijay Bellur, Niels de Vos
> >
> > 16) Improvements in Gluster NFS-Ganesha integration
> > Feature owners: Jiffin Tony Thottan, Soumya Koduri
> >
> > The following need to be added to the roadmap:
> >
> > Features that made it to master already but were not planned:
> > 1) Multi threaded self-heal in EC
> > Feature owner: Pranith (Did this because serkan asked for it. He has 9PB
> > volume, self-healing takes a long time :-/)
> >
> > 2) Lock revocation (Facebook patch)
> > Feature owner: Richard Wareing
> >
> > Features that look like will make it to 3.9.0:
> > 1) Hardware extension support for EC
> > Feature owner: Xavi
> >
> > 2) Reset brick support for replica volumes:
> > Feature owner: Anuradha
> >
> > 3) Md-cache perf improvements in smb:
> > Feature owner: Poornima
>
> This has pending reviews. I'll try to close it by Aug 31st.
>

Yeah, if it can be merged by 31st that will be great!


>
> >
> > --
> > Pranith
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] release checklist for 3.9.0

2016-08-29 Thread Pranith Kumar Karampuri
hi,
   Could we have a release checklist for the components? Please add the
steps that need to be done before the release is made at this link:
https://public.pad.fsfe.org/p/gluster-component-release-checklist. This
activity needs to be completed by 2nd September. Please also note whether
the tests are automated or not. We also want to use this to evolve a
complete automation suite that needs to be run before a release goes out.
This is the first step in that direction.

I added the list from the MAINTAINERS file. Please add anything I missed. If
the maintainer information is outdated, please send a mail to
maintain...@gluster.org

On behalf of
Aravinda & Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Gerrit Access Control

2016-08-29 Thread Pranith Kumar Karampuri
On Mon, Aug 29, 2016 at 12:25 PM, Nigel Babu  wrote:

> Hello folks,
>
> We have not pruned our Gerrit maintainers list ever as far as I can see.
> We've
> only added people. For security reasons, I'd like to propose that we do the
> following:
>
> If you do not have a commit in the last 90 days, your membership from
> gluster-maintainers team on Gerrit will be revoked. This means you won't
> have
> permission to merge patches. This does not mean you're no longer
> maintainer.
> This is only a security measure. To gain access again, all you have to do
> is
> file a bug against gluster-infra and I'll grant you access immediately.
>

Just need a clarification. Does a "commit in the last 90 days" mean a
maintainer merging a patch sent by someone else, or a maintainer sending a
patch to be merged?


>
> When I remove someone's access, I'll send an individual email about it.
> Again,
> your membership on gluster-maintainers has no say on your maintainer
> status.
> This is only for security reasons.
>
> Thoughts on implementing this policy?
>
> --
> nigelb
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Brick multiplexing status

2016-08-22 Thread Pranith Kumar Karampuri
On Sat, Aug 20, 2016 at 5:49 AM, Jeff Darcy  wrote:

> For those who are interested, here's the current development status.
>
> The good news is that the current patch[1] works well enough for almost
> all of the basic tests and 22/32 of the basic/afr tests to run
> successfully.  The exceptions have to do with specific features rather
> than base functionality, as mentioned in the commmit messsage:
>
> > There are some things that still don't seem to work.  Changelog and
> > trash were both causing problems even when those features weren't
> > being used, so those translators aren't even added.  Quota and
> > snapshots also seem to have problems, so most of their tests fail.
> > Lastly, a lot of things work for the wrong reasons, the most egregious
> > example being that authentication has been short-circuited until I do
> > some refactoring around where the auth-related transport options live.
> > Still, it's easier to fix these sorts of things one by one from a
> > mostly-working base than try to deal with them all together.
>
> It's a good start, but plenty more to do . . . which brings us to the
> bad news.  I won't be able to work on this at all next week, due to
> planned vacation, and that leaves only a few days at the end of the
> month.  If that's the deadline for 3.9, this feature is not going to
> make it (not that anyone ever reviewed the feature-page addition, so I
> guess you could say it was never in 3.9 anyway).  This change is going
> to make a big difference for some use cases, and it's progressing well,
> but we'll need to find a new release vehicle for it.
>

Actually Shyam and Niels reviewed it, if http://review.gluster.org/#/c/15038/
is the patch you are talking about.
We can definitely look at this feature for 3.10, I guess, which is about 3
more months away from feature freeze.


>
> [1] http://review.gluster.org/#/c/14763/
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] doubts in using gf_event/gf_msg

2016-08-22 Thread Pranith Kumar Karampuri
hi Aravinda,
   I was wondering what your opinion is on sending selected logs as
events instead of treating them specially. Is this something you guys have
considered? Do you think it is a bad idea to do it that way? We could even
come up with a new API which does the logging and then also sends the same
message as an event.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-02 Thread Pranith Kumar Karampuri
On Mon, Oct 3, 2016 at 7:23 AM, Ravishankar N <ravishan...@redhat.com>
wrote:

> On 10/03/2016 06:58 AM, Pranith Kumar Karampuri wrote:
>
>
>
> On Mon, Oct 3, 2016 at 6:41 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Fri, Sep 30, 2016 at 8:50 PM, Ravishankar N <ravishan...@redhat.com>
>> wrote:
>>
>>> On 09/30/2016 06:38 PM, Niels de Vos wrote:
>>>
>>> On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri wrote:
>>>
>>> hi,
>>>  At the moment 'Reviewed-by' tag comes only if a +1 is given on the
>>> final version of the patch. But for most of the patches, different people
>>> would spend time on different versions making the patch better, they may
>>> not get time to do the review for every version of the patch. Is it
>>> possible to change the gerrit script to add 'Reviewed-by' for all the
>>> people who participated in the review?
>>>
>>> +1 to this. For the argument that this *might* encourage me-too +1s, it
>>> only exposes
>>> such persons in bad light.
>>>
>>> Or removing 'Reviewed-by' tag completely would also help to make sure it
>>> doesn't give skewed counts.
>>>
>>> I'm not going to lie, for me, that takes away the incentive of doing any
>>> reviews at all.
>>>
>>
>> Could you elaborate why? May be you should also talk about your primary
>> motivation for doing reviews.
>>
>
> I guess it is probably because the effort needs to be recognized? I think
> there is an option to recognize it so it is probably not a good idea to
> remove the tag I guess.
>
>
> Yes, numbers provide good motivation for me:
> Motivation for looking at patches and finding bugs for known components
> even though I am not its maintainer.
> Motivation to learning new components because a bug and a fix is usually
> when I look at code for unknown components.
> Motivation to level-up when statistics indicate I'm behind my peers.
>
> I think even you said some time back in an ML thread that what can be
> measured can be improved.
>

I am still not sure how to distinguish a good review from a bad one, so I am
not sure how it can be measured and thus improved. I guess at this point
getting more eyes on the patches is good enough.


>
> -Ravi
>
>
>
>>
>> I would not feel comfortable automatically adding Reviewed-by tags for
>>> people that did not review the last version. They may not agree with the
>>> last version, so adding their "approved stamp" on it may not be correct.
>>> See the description of Reviewed-by in the Linux kernel sources [0].
>>>
>>> While the Linux kernel model is the poster child for projects to draw
>>> standards
>>> from, IMO, their email based review system is certainly not one to
>>> emulate. It
>>> does not provide a clean way to view patch-set diffs, does not present a
>>> single
>>> URL based history that tracks all review comments, relies on the sender
>>> to
>>> provide information on what changed between versions, allows a variety of
>>> 'Komedians' [1] to add random tags which may or may not be picked up
>>> by the maintainer who takes patches in etc.
>>>
>>> Maybe we can add an additional tag that mentions all the people that
>>> did do reviews of older versions of the patch. Not sure what the tag
>>> would be, maybe just CC?
>>>
>>> It depends on what tags would be processed to obtain statistics on
>>> review contributions.
>>> I agree that not all reviewers might be okay with the latest revision
>>> but that
>>> % might be miniscule (zero, really) compared to the normal case where
>>> the reviewer spent
>>> considerable time and effort to provide feedback (and an eventual +1) on
>>> previous
>>> revisions. If converting all +1s into 'Reviewed-by's is not feasible in
>>> gerrit
>>> or is not considered acceptable, then the maintainer could wait for a
>>> reasonable
>>> time for reviewers to give +1 for the final revision before he/she goes
>>> ahead
>>> with a +2 and merges it. While we cannot wait indefinitely for all acks,
>>> a comment
>>> like 'LGTM, will wait for a day for other acks before I go ahead and
>>> merge' would be
>>> appreciated.
>>>
>>> Enough of bike-shedding from my end I suppose.:-)
>>> Ravi
>>>
>>> [1] https://lwn.net/Articles/503829/
>>>
>>> Niels
>>>
>>> 0. 
>>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches#n552
>>>
>>> ___
>>> Gluster-devel mailing 
>>> listGluster-devel@gluster.orghttp://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>> --
>> Pranith
>>
> --
> Pranith
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-02 Thread Pranith Kumar Karampuri
On Fri, Sep 30, 2016 at 8:50 PM, Ravishankar N <ravishan...@redhat.com>
wrote:

> On 09/30/2016 06:38 PM, Niels de Vos wrote:
>
> On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri wrote:
>
> hi,
>  At the moment 'Reviewed-by' tag comes only if a +1 is given on the
> final version of the patch. But for most of the patches, different people
> would spend time on different versions making the patch better, they may
> not get time to do the review for every version of the patch. Is it
> possible to change the gerrit script to add 'Reviewed-by' for all the
> people who participated in the review?
>
> +1 to this. For the argument that this *might* encourage me-too +1s, it
> only exposes
> such persons in bad light.
>
> Or removing 'Reviewed-by' tag completely would also help to make sure it
> doesn't give skewed counts.
>
> I'm not going to lie, for me, that takes away the incentive of doing any
> reviews at all.
>

Could you elaborate why? Maybe you should also talk about your primary
motivation for doing reviews.

I would not feel comfortable automatically adding Reviewed-by tags for
> people that did not review the last version. They may not agree with the
> last version, so adding their "approved stamp" on it may not be correct.
> See the description of Reviewed-by in the Linux kernel sources [0].
>
> While the Linux kernel model is the poster child for projects to draw
> standards
> from, IMO, their email based review system is certainly not one to
> emulate. It
> does not provide a clean way to view patch-set diffs, does not present a
> single
> URL based history that tracks all review comments, relies on the sender to
> provide information on what changed between versions, allows a variety of
> 'Komedians' [1] to add random tags which may or may not be picked up
> by the maintainer who takes patches in etc.
>
> Maybe we can add an additional tag that mentions all the people that
> did do reviews of older versions of the patch. Not sure what the tag
> would be, maybe just CC?
>
> It depends on what tags would be processed to obtain statistics on review
> contributions.
> I agree that not all reviewers might be okay with the latest revision but
> that
> % might be miniscule (zero, really) compared to the normal case where the
> reviewer spent
> considerable time and effort to provide feedback (and an eventual +1) on
> previous
> revisions. If converting all +1s into 'Reviewed-by's is not feasible in
> gerrit
> or is not considered acceptable, then the maintainer could wait for a
> reasonable
> time for reviewers to give +1 for the final revision before he/she goes
> ahead
> with a +2 and merges it. While we cannot wait indefinitely for all acks, a
> comment
> like 'LGTM, will wait for a day for other acks before I go ahead and
> merge' would be
> appreciated.
>
> Enough of bike-shedding from my end I suppose.:-)
> Ravi
>
> [1] https://lwn.net/Articles/503829/
>
> Niels
>
> 0. 
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches#n552
>
>
>
> ___
> Gluster-devel mailing 
> listGluster-devel@gluster.orghttp://www.gluster.org/mailman/listinfo/gluster-devel
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-02 Thread Pranith Kumar Karampuri
On Mon, Oct 3, 2016 at 6:41 AM, Pranith Kumar Karampuri <pkara...@redhat.com
> wrote:

>
>
> On Fri, Sep 30, 2016 at 8:50 PM, Ravishankar N <ravishan...@redhat.com>
> wrote:
>
>> On 09/30/2016 06:38 PM, Niels de Vos wrote:
>>
>> On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri wrote:
>>
>> hi,
>>  At the moment 'Reviewed-by' tag comes only if a +1 is given on the
>> final version of the patch. But for most of the patches, different people
>> would spend time on different versions making the patch better, they may
>> not get time to do the review for every version of the patch. Is it
>> possible to change the gerrit script to add 'Reviewed-by' for all the
>> people who participated in the review?
>>
>> +1 to this. For the argument that this *might* encourage me-too +1s, it
>> only exposes
>> such persons in bad light.
>>
>> Or removing 'Reviewed-by' tag completely would also help to make sure it
>> doesn't give skewed counts.
>>
>> I'm not going to lie, for me, that takes away the incentive of doing any
>> reviews at all.
>>
>
> Could you elaborate why? May be you should also talk about your primary
> motivation for doing reviews.
>

I guess it is probably because the effort needs to be recognized? The tag is
one way of recognizing that effort, so it is probably not a good idea to
remove it.


>
> I would not feel comfortable automatically adding Reviewed-by tags for
>> people that did not review the last version. They may not agree with the
>> last version, so adding their "approved stamp" on it may not be correct.
>> See the description of Reviewed-by in the Linux kernel sources [0].
>>
>> While the Linux kernel model is the poster child for projects to draw
>> standards
>> from, IMO, their email based review system is certainly not one to
>> emulate. It
>> does not provide a clean way to view patch-set diffs, does not present a
>> single
>> URL based history that tracks all review comments, relies on the sender to
>> provide information on what changed between versions, allows a variety of
>> 'Komedians' [1] to add random tags which may or may not be picked up
>> by the maintainer who takes patches in etc.
>>
>> Maybe we can add an additional tag that mentions all the people that
>> did do reviews of older versions of the patch. Not sure what the tag
>> would be, maybe just CC?
>>
>> It depends on what tags would be processed to obtain statistics on review
>> contributions.
>> I agree that not all reviewers might be okay with the latest revision but
>> that
>> % might be miniscule (zero, really) compared to the normal case where the
>> reviewer spent
>> considerable time and effort to provide feedback (and an eventual +1) on
>> previous
>> revisions. If converting all +1s into 'Reviewed-by's is not feasible in
>> gerrit
>> or is not considered acceptable, then the maintainer could wait for a
>> reasonable
>> time for reviewers to give +1 for the final revision before he/she goes
>> ahead
>> with a +2 and merges it. While we cannot wait indefinitely for all acks,
>> a comment
>> like 'LGTM, will wait for a day for other acks before I go ahead and
>> merge' would be
>> appreciated.
>>
>> Enough of bike-shedding from my end I suppose.:-)
>> Ravi
>>
>> [1] https://lwn.net/Articles/503829/
>>
>> Niels
>>
>> 0. 
>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches#n552
>>
>>
>>
>> ___
>> Gluster-devel mailing 
>> listGluster-devel@gluster.orghttp://www.gluster.org/mailman/listinfo/gluster-devel
>>
>>
>>
>
>
> --
> Pranith
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-03 Thread Pranith Kumar Karampuri
On Mon, Oct 3, 2016 at 12:17 PM, Joe Julian <j...@julianfamily.org> wrote:

> If you get credit for +1, shouldn't you also get credit for -1? It seems
> to me that catching a fault is at least as valuable if not more so.
>

Yes, when I said review, it could be either +1/-1/+2.


>
> On October 3, 2016 3:58:32 AM GMT+02:00, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>>
>>
>>
>> On Mon, Oct 3, 2016 at 7:23 AM, Ravishankar N <ravishan...@redhat.com>
>> wrote:
>>
>>> On 10/03/2016 06:58 AM, Pranith Kumar Karampuri wrote:
>>>
>>>
>>>
>>> On Mon, Oct 3, 2016 at 6:41 AM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Sep 30, 2016 at 8:50 PM, Ravishankar N <ravishan...@redhat.com>
>>>> wrote:
>>>>
>>>>> On 09/30/2016 06:38 PM, Niels de Vos wrote:
>>>>>
>>>>> On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri wrote:
>>>>>
>>>>> hi,
>>>>>  At the moment 'Reviewed-by' tag comes only if a +1 is given on the
>>>>> final version of the patch. But for most of the patches, different people
>>>>> would spend time on different versions making the patch better, they may
>>>>> not get time to do the review for every version of the patch. Is it
>>>>> possible to change the gerrit script to add 'Reviewed-by' for all the
>>>>> people who participated in the review?
>>>>>
>>>>> +1 to this. For the argument that this *might* encourage me-too +1s,
>>>>> it only exposes
>>>>> such persons in bad light.
>>>>>
>>>>> Or removing 'Reviewed-by' tag completely would also help to make sure it
>>>>> doesn't give skewed counts.
>>>>>
>>>>> I'm not going to lie, for me, that takes away the incentive of doing
>>>>> any reviews at all.
>>>>>
>>>>
>>>> Could you elaborate why? May be you should also talk about your primary
>>>> motivation for doing reviews.
>>>>
>>>
>>> I guess it is probably because the effort needs to be recognized? I
>>> think there is an option to recognize it so it is probably not a good idea
>>> to remove the tag I guess.
>>>
>>>
>>> Yes, numbers provide good motivation for me:
>>> Motivation for looking at patches and finding bugs for known components
>>> even though I am not its maintainer.
>>> Motivation to learning new components because a bug and a fix is usually
>>> when I look at code for unknown components.
>>> Motivation to level-up when statistics indicate I'm behind my peers.
>>>
>>> I think even you said some time back in an ML thread that what can be
>>> measured can be improved.
>>>
>>
>> I am still not sure how to quantify good review from a bad one. So not
>> sure how it can be measured thus improved. I guess at this point getting
>> more eyes on the patches is good enough.
>>
>>
>>>
>>> -Ravi
>>>
>>>
>>>
>>>>
>>>> I would not feel comfortable automatically adding Reviewed-by tags for
>>>>> people that did not review the last version. They may not agree with the
>>>>> last version, so adding their "approved stamp" on it may not be correct.
>>>>> See the description of Reviewed-by in the Linux kernel sources [0].
>>>>>
>>>>> While the Linux kernel model is the poster child for projects to draw
>>>>> standards
>>>>> from, IMO, their email based review system is certainly not one to
>>>>> emulate. It
>>>>> does not provide a clean way to view patch-set diffs, does not present
>>>>> a single
>>>>> URL based history that tracks all review comments, relies on the
>>>>> sender to
>>>>> provide information on what changed between versions, allows a variety
>>>>> of
>>>>> 'Komedians' [1] to add random tags which may or may not be picked up
>>>>> by the maintainer who takes patches in etc.
>>>>>
>>>>> Maybe we can add an additional tag that mentions all the people that
>>>>> did do reviews of older versions of the patch. Not sure what the tag
>>>>> would be, maybe just CC?

Re: [Gluster-devel] [Gluster-infra] Migration complete

2016-09-25 Thread Pranith Kumar Karampuri
On Mon, Sep 26, 2016 at 4:53 AM, Vijay Bellur  wrote:

> On Sat, Sep 24, 2016 at 2:49 PM, Nigel Babu  wrote:
> > Hello,
> >
> > Michael and I are happy to announce that the migration is now complete.
> Both
> > review.gluster.org and build.gluster.org are now served from the
> community
> > cage. Our remaining server will move to the cage shortly and we'll have
> all our
> > machines in the cage.
> >
> > If you notice anything wrong, please file a bug. I'll get to it once I've
> > caught up with sleep.
> >
> > We'll do a postmortem of what happened and the cause for the delay
> during the
> > week.
> >
>
> Thank you Nigel and Misc! Appreciate your efforts in migrating our
> core infrastructure to the community cage and help us achieve better
> stability with our server infrastructure.
>

+1 good job guys!!


>
> Regards,
> Vijay
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Fixing setfsuid/gid problems in posix xlator

2016-09-26 Thread Pranith Kumar Karampuri
On Mon, Sep 26, 2016 at 4:49 PM, Niels de Vos <nde...@redhat.com> wrote:

> On Fri, Sep 23, 2016 at 08:44:14PM +0530, Pranith Kumar Karampuri wrote:
> > On Fri, Sep 23, 2016 at 6:12 PM, Jeff Darcy <jda...@redhat.com> wrote:
> >
> > > > Jiffin found an interesting problem in posix xlator where we have
> never
> > > been
> > > > using setfsuid/gid ( http://review.gluster.org/#/c/15545/ ), what I
> am
> > > > seeing regressions after this is, if the files are created using
> non-root
> > > > user then the file creation fails because that user doesn't have
> > > permissions
> > > > to create the gfid-link. So it seems like the correct way forward for
> > > this
> > > > patch is to write wrappers around sys_ to do setfsuid/gid
> do the
> > > > actual operation requested and then set it back to old uid/gid and
> then
> > > do
> > > > the internal operations. I am planning to write
> posix_sys_() to
> > > do
> > > > the same, may be a macro?
> > >
> > > Kind of an aside, but I'd prefer to see a lot fewer macros in our code.
> > > They're not type-safe, and multi-line macros often mess up line
> numbers for
> > > debugging or error messages.  IMO it's better to use functions whenever
> > > possible, and usually to let the compiler worry about how/when to
> inline.
> > >
> > > > I need inputs from you guys to let me know if I am on the right path
> and
> > > if
> > > > you see any issues with this approach.
> > >
> > > I think there's a bit of an interface problem here.  The sys_xxx
> wrappers
> > > don't have arguments that point to the current frame, so how would
> they get
> > > the correct uid/gid?  We could add arguments to each function, but then
> > > we'd have to modify every call.  This includes internal calls which
> don't
> > > have a frame to pass, so I guess they'd have to pass NULL.
> Alternatively,
> > > we could create a parallel set of functions with frame pointers.
> Contrary
> > > to what I just said above, this might be a case where macros make
> sense:
> > >
> > >int
> > >sys_writev_fp (call_frame_t *frame, int fd, void *buf, size_t len)
> > >{
> > >   if (frame) { setfsuid(...) ... }
> > >   int ret = writev (fd, buf, len);
> > >   if (frame) { setfsuid(...) ... }
> > >   return ret;
> > >}
> > >#define sys_writev(fd,buf,len) sys_writev_fp (NULL, fd, buf, len)
> > >
> > > That way existing callers don't have to change, but posix can use the
> > > extended versions to get the right setfsuid behavior.
> > >
> > >
> > After trying to do these modifications to test things out, I am now
> > inclined to remove setfsuid/gid altogether and depend on posix-acl for
> > permission checks. It seems too cumbersome as the operations more often
> > than not happen on files inside .glusterfs, and non-root users/groups
> > don't have permissions at all to access files in that directory.
>
> But the files under .glusterfs are hardlinks. Except for creation and
> removal, should the users not have access to read/write and update
> attributes and xattrs?
>
> I would prefer to rely on the VFS permission checking on the bricks, and
> not bother with the posix-acl xlator when the filesystem on the brick
> supports POSIX ACLs.
>

Could you list down the pros/cons with each approach?


>
> Niels
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] logs/cores for smoke failures

2016-09-27 Thread Pranith Kumar Karampuri
On Tue, Sep 27, 2016 at 11:20 AM, Nigel Babu <nig...@redhat.com> wrote:

> These are gbench failures rather than smoke failures. If you know how to
> debug dbench failures, please add comments on the bug and I'll get you the
> logs you need.
>

Oh, we can't archive the logs like we do for regression runs?


>
> On Tue, Sep 27, 2016 at 9:40 AM, Ravishankar N <ravishan...@redhat.com>
> wrote:
>
>> On 09/27/2016 09:36 AM, Pranith Kumar Karampuri wrote:
>>
>> hi Nigel,
>>   Is there already a bug to capture these in the runs when failures
>> happen? I am not able to understand why this failure happened:
>> https://build.gluster.org/job/smoke/30843/console, logs/cores would have
>> helped. Let me know if I should raise a bug for this.
>>
>> I raised one y'day: https://bugzilla.redhat.com/show_bug.cgi?id=1379228
>> -Ravi
>>
>>
>> --
>> Pranith
>>
>>
>> ___
>> Gluster-devel mailing 
>> listGluster-devel@gluster.orghttp://www.gluster.org/mailman/listinfo/gluster-devel
>>
>>
>>
>
>
> --
> nigelb
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Fixes for spurious failures in open-behind.t

2016-09-26 Thread Pranith Kumar Karampuri
hi,
I found the following two issues and fixed them:

Problems:
1) flush-behind is on by default, so just because a write completes doesn't
   mean it is on the disk; it could still be in write-behind's cache. This
   leads to failures where, if you write from one mount and expect the data
   to be there on the other mount, sometimes it won't be there.
2) Sometimes the graph switch is not complete by the time we issue the read,
   which leads to opens not being sent on the brick, leading to failures.

Fixes:
1) Disable flush-behind.
2) Add new functions to check that the new graph is in place and connected
   to the bricks before 'cat' is executed.
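
To make the fixes concrete, here is a minimal sketch in the spirit of our .t
tests (this is not the actual open-behind.t change; new_graph_connected is a
made-up name standing in for the new check functions):
===
#!/bin/bash
#Illustrative sketch only, assuming the usual test framework helpers
. $(dirname $0)/../include.rc
cleanup

TEST glusterd
TEST gluster volume create $V0 $H0:$B0/${V0}1
#Fix 1: disable flush-behind so a completed write is really on the brick
#and not still sitting in write-behind's cache
TEST gluster volume set $V0 performance.flush-behind off
TEST gluster volume start $V0
TEST glusterfs --volfile-server=$H0 --volfile-id=$V0 $M0
TEST glusterfs --volfile-server=$H0 --volfile-id=$V0 $M1

echo hello > $M0/file
#Fix 2: an option change triggers a graph switch on the clients; wait for the
#new graph to be in place and connected to the brick before reading
TEST gluster volume set $V0 performance.open-behind on
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" new_graph_connected $M1
TEST cat $M1/file

cleanup
===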

Check bz: 1379511 for more info.

Please let me know if you still face any failures after this. I have removed
it from the bad tests list.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] logs/cores for smoke failures

2016-09-26 Thread Pranith Kumar Karampuri
hi Nigel,
  Is there already a bug for capturing logs/cores from the runs when
failures happen? I am not able to understand why this failure happened:
https://build.gluster.org/job/smoke/30843/console; logs/cores would have
helped. Let me know if I should raise a bug for this.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] logs/cores for smoke failures

2016-09-27 Thread Pranith Kumar Karampuri
On Tue, Sep 27, 2016 at 12:39 PM, Nigel Babu <nig...@redhat.com> wrote:

> On Tue, Sep 27, 2016 at 12:00:40PM +0530, Pranith Kumar Karampuri wrote:
> > On Tue, Sep 27, 2016 at 11:20 AM, Nigel Babu <nig...@redhat.com> wrote:
> >
> > > These are gbench failures rather than smoke failures. If you know how
> to
> > > debug dbench failures, please add comments on the bug and I'll get you
> the
> > > logs you need.
> > >
> >
> > Oh, we can't archive the logs like we do for regression runs?
>
> We don't log anything for smoke tests. Perhaps we should. Would you care to
> send a patch for smoke.sh[1] so we log the appropriate files?
>

hmm... I see that gluster is launched normally so it should log fine. I
guess I didn't understand the question.


>
> [1]: https://github.com/gluster/glusterfs-patch-acceptance-
> tests/blob/master/smoke.sh
>
> --
> nigelb
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] automating straightforward backports

2016-10-26 Thread Pranith Kumar Karampuri
hi,
     Nowadays I am seeing quite a few patches that are straightforward
backports from master, but if I follow the process it generally takes around
10 minutes to port each patch. I was wondering if anyone else has looked into
automating this. Yesterday I had to backport http://review.gluster.org/15728
to 3.9, 3.8 and 3.7, so I finally took some time to automate portions of the
workflow. I want to exchange ideas; you may be using other approaches to
achieve the same.

Here is how I automated portions:
1) Cloning the bug to different branches:
 Not automated: it seems like the bugzilla CLI doesn't allow cloning of
the bug :-(. Does anyone know if we can write a script which interacts with
the website to achieve this?

2) Porting the patch to the branches: I wrote the following script, which
does the porting and adds the prefix " >" to the commit headers
===
⚡ cat ../backport.sh
#!/bin/bash
#launch it like this: BRANCHES="3.9 3.8 3.7" ./backport.sh <branch-prefix> <commit-sha>

prefix=$1
shift
commit=$1
shift

function add_prefix_to_commit_headers {
        #We have the habit of adding ' >' for the commit headers
        for i in BUG Change-Id Signed-off-by Reviewed-on Smoke \
                 NetBSD-regression Reviewed-by CentOS-regression; do
                sed -i -e "s/^$i:/ >$i:/" commit-msg
        done
}

function form_commit_msg {
        #Get the commit message out of the commit
        local commit=$1
        git log --format=%B -n 1 $commit > commit-msg
}

function main {
        cur_branch=$(git rev-parse --abbrev-ref HEAD)
        form_commit_msg $commit
        add_prefix_to_commit_headers
        rm -f branches
        #For every target branch: create a local branch off the release
        #branch, cherry-pick the commit and amend it with the prefixed message
        for i in $BRANCHES; do
                cp commit-msg ${i}-commit-msg && \
                git checkout -b ${prefix}-${i} origin/release-${i} > /dev/null && \
                git cherry-pick $commit && \
                git commit -s --amend -F ${i}-commit-msg && \
                echo ${prefix}-${i} >> branches
        done
        git checkout $cur_branch
}

main
===
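
An example invocation (the branch-name prefix 'my-backport' below is just an
illustration; any prefix works):
===
#Hypothetical run: backport the commit at the tip of master to three branches
BRANCHES="3.9 3.8 3.7" ./backport.sh my-backport $(git rev-parse master)
#This creates local branches my-backport-3.9, my-backport-3.8 and
#my-backport-3.7, each carrying the cherry-picked commit with the ' >'
#prefixed headers, and records the branch names in the 'branches' file.
===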

3) Adding reviewers, triggering regressions and smoke:
 I have been looking around for a good gerrit CLI; at the moment I am
happy with the one that is installed through npm. So you need to first
install npm on your box and then do 'npm install gerrit'.
 Go to the branch from which we did the commit and do:
# gerrit assign xhernan...@datalab.es - this will add Xavi as
reviewer for the patch that I just committed.
# gerrit comment "recheck smoke"
# gerrit comment "recheck centos"
# gerrit comment "recheck netbsd"

4) I am yet to look into the bugzilla CLI to come up with the command to
move the bugs into POST, but maybe Niels has it at his fingertips?

The main pain point has been cloning the bugs. If we had an automated way to
clone the bug to the different branches, the script at 2) could be modified
to add all the steps.
If we can clone the bug and get the bz of the cloned bug, then we can add
"BUG: " to the commit-message and launch rfc.sh, which won't prompt for
anything. We can auto-answer the coding-guidelines script by launching "yes |
rfc.sh" if we really want to.

PS: The script is something I hacked together for one-time use yesterday. It
is not something I thought I would be sending a mail about today, so it is
not all that good looking. It just got the job done yesterday.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Pranith Kumar Karampuri
I am trying to understand the criticality of these patches. Raghavendra's
patch is crucial because gfapi workloads (for samba and qemu) are affected
severely. I waited for Krutika's patch because the VM use case can lead to
disk corruption on replace-brick. If you could let us know the criticality,
and we are in agreement that they are this severe, we can definitely take
them in. Otherwise the next release is better IMO. Thoughts?

On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee <amukh...@redhat.com>
wrote:

> Pranith,
>
> I'd like to see following patches getting in:
>
> http://review.gluster.org/#/c/15722/
> http://review.gluster.org/#/c/15714/
> http://review.gluster.org/#/c/15792/
>
>
>
> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi,
>>   The only problem left was EC taking more time. This should affect
>> small files a lot more. Best way to solve it is using compound-fops. So for
>> now I think going ahead with the release is best.
>>
>> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/
>> 15778 before going ahead with the release. If we missed any other
>> crucial patch please let us know.
>>
>> Will make the release as soon as this patch is merged.
>>
>> --
>> Pranith & Aravinda
>>
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>>
>>
>
>
> --
>
> ~ Atin (atinm)
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Pranith Kumar Karampuri
hi,
  The only problem left was EC taking more time; this should affect
small files a lot more. The best way to solve it is to use compound fops, so
for now I think going ahead with the release is best.

We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/15778
before going ahead with the release. If we missed any other crucial patch
please let us know.

Will make the release as soon as this patch is merged.

-- 
Pranith & Aravinda
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] getting "Transport endpoint is not connected" in glusterfs mount log file.

2016-11-11 Thread Pranith Kumar Karampuri
Abhishek,
  Both Rafi and I tried to look at the logs, but the file seems to be
corrupted. I was saying that there is a connection problem because the
following log appeared in between a lot of connection failures in the logs
you posted. Are you on IRC #gluster-dev?

[2016-10-31 04:06:03.628539] I [MSGID: 108019]
[afr-transaction.c:1224:afr_post_blocking_inodelk_cbk]
0-c_glusterfs-replicate-0: Blocking inodelks failed.

On Fri, Nov 11, 2016 at 1:05 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Pranith,
>
> Could you please tell tell me the logs showing that the mount is not
> available to connect to both the bricks.
>
> On Fri, Nov 11, 2016 at 12:05 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> As per the logs, the mount is not able to connect to both the bricks. Are
>> the connections fine?
>>
>> On Fri, Nov 11, 2016 at 10:20 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Its an urgent case.
>>>
>>> Atleast provide your views on this
>>>
>>> On Wed, Nov 9, 2016 at 11:08 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> We could see that sync is getting failed to sync the GlusterFS bricks
>>>> due to error trace "Transport endpoint is not connected "
>>>>
>>>> [2016-10-31 04:06:03.627395] E [MSGID: 114031]
>>>> [client-rpc-fops.c:1673:client3_3_finodelk_cbk]
>>>> 0-c_glusterfs-client-9: remote operation failed [Transport endpoint is not
>>>> connected]
>>>> [2016-10-31 04:06:03.628381] I [socket.c:3308:socket_submit_request]
>>>> 0-c_glusterfs-client-9: not connected (priv->connected = 0)
>>>> [2016-10-31 04:06:03.628432] W [rpc-clnt.c:1586:rpc_clnt_submit]
>>>> 0-c_glusterfs-client-9: failed to submit rpc-request (XID: 0x7f5f Program:
>>>> GlusterFS 3.3, ProgVers: 330, Proc: 30) to rpc-transport
>>>> (c_glusterfs-client-9)
>>>> [2016-10-31 04:06:03.628466] E [MSGID: 114031]
>>>> [client-rpc-fops.c:1673:client3_3_finodelk_cbk]
>>>> 0-c_glusterfs-client-9: remote operation failed [Transport endpoint is not
>>>> connected]
>>>> [2016-10-31 04:06:03.628475] I [MSGID: 108019]
>>>> [afr-lk-common.c:1086:afr_lock_blocking] 0-c_glusterfs-replicate-0:
>>>> unable to lock on even one child
>>>> [2016-10-31 04:06:03.628539] I [MSGID: 108019]
>>>> [afr-transaction.c:1224:afr_post_blocking_inodelk_cbk]
>>>> 0-c_glusterfs-replicate-0: Blocking inodelks failed.
>>>> [2016-10-31 04:06:03.628630] W [fuse-bridge.c:1282:fuse_err_cbk]
>>>> 0-glusterfs-fuse: 20790: FLUSH() ERR => -1 (Transport endpoint is not
>>>> connected)
>>>> [2016-10-31 04:06:03.629149] E [rpc-clnt.c:362:saved_frames_unwind]
>>>> (--> 
>>>> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xb5c80)[0x3fff8ab79f58]
>>>> (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind-0x1b7a0)[0x3fff8ab1dc90]
>>>> (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy-0x1b638)[0x3fff8ab1de10]
>>>> (--> 
>>>> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup-0x19af8)[0x3fff8ab1fb18]
>>>> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0x18e68)[0x3fff8ab20808]
>>>> ) 0-c_glusterfs-client-9: forced unwinding frame type(GlusterFS 3.3)
>>>> op(LOOKUP(27)) called at 2016-10-31 04:06:03.624346 (xid=0x7f5a)
>>>> [2016-10-31 04:06:03.629183] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
>>>> 0-c_glusterfs-client-9: changing port to 49391 (from 0)
>>>> [2016-10-31 04:06:03.629210] W [MSGID: 114031]
>>>> [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-c_glusterfs-client-9:
>>>> remote operation failed. Path: 
>>>> /loadmodules_norepl/CXC1725605_P93A001/cello/emasviews
>>>> (b0e5a94e-a432-4dce-b86f-a551555780a2) [Transport endpoint is not
>>>> connected]
>>>>
>>>>
>>>> Could you please tell us the reason why we are getting these trace and
>>>> how to resolve this.
>>>>
>>>> Logs are attached here please share your analysis.
>>>>
>>>> Thanks in advanced
>>>>
>>>> --
>>>> Regards
>>>> Abhishek Paliwal
>>>>
>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>>
>> --
>> Pranith
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Pranith Kumar Karampuri
On Fri, Nov 11, 2016 at 8:01 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Fri, Nov 11, 2016 at 6:24 PM, Saravanakumar Arumugam <
> sarum...@redhat.com> wrote:
>
>>
>>
>> On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:
>>
>>> L.S.,
>>>
>>> I was wondering if it would be possible to turn an existing filesystem
>>> with data
>>> (ext4 with files en dirs) into a GlusterFS brick ?
>>>
>> It is not possible, at least I am not aware about any such solution yet.
>>
>>>
>>> I can't find much info about it except the following remark at [1] which
>>> seems
>>> to indicate it is not possible yet:
>>>
>>> Data import tool
>>>
>>> Create a tool which will allow importing already existing data
>>> in the brick
>>> directories into the gluster volume.
>>> This is most likely going to be a special rebalance process.
>>>
>>> So that would mean i would always have to:
>>> - first create an GlusterFS brick on an empty filesystem
>>> - after that copy all the data into the mounted GlusterFS brick
>>> - never ever copy something into the filesystem (or manipulate it
>>> otherwise)
>>>   used as a GlusterFS brick directly (without going through a GlusterFS
>>> client mount)
>>>
>>> because there is no checking / healing between GlusterFS's view on the
>>> data and the data in the
>>> underlying brick filesystem ?
>>>
>>> Is this a correct view ?
>>>
>>> you are right !
>> Once the data is copied into Gluster, it internally creates meta-data
>> about data(file/dir).
>> Unless you copy it via Gluster mount point, it is NOT possible to create
>> such meta-data.
>>
>
> No, it is possible. You just need to be a bit creative.
>
> Could you let me know how many such bricks you have which you want to
> convert to glusterfs. It seems like you want replication as well. So if you
> give me all this information. With your help may be we can at least come up
> with a document on how this can be done.
>

Once the import is complete, whatever you are saying about not touching the
brick directly and doing everything from the mount point holds. But we can
definitely convert an existing ext4 directory structure into a volume.


>
>
>>
>> Thanks,
>> Saravana
>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Pranith
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Pranith Kumar Karampuri
On Fri, Nov 11, 2016 at 6:24 PM, Saravanakumar Arumugam  wrote:

>
>
> On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:
>
>> L.S.,
>>
>> I was wondering if it would be possible to turn an existing filesystem
>> with data
>> (ext4 with files en dirs) into a GlusterFS brick ?
>>
> It is not possible, at least I am not aware about any such solution yet.
>
>>
>> I can't find much info about it except the following remark at [1] which
>> seems
>> to indicate it is not possible yet:
>>
>> Data import tool
>>
>> Create a tool which will allow importing already existing data in
>> the brick
>> directories into the gluster volume.
>> This is most likely going to be a special rebalance process.
>>
>> So that would mean i would always have to:
>> - first create an GlusterFS brick on an empty filesystem
>> - after that copy all the data into the mounted GlusterFS brick
>> - never ever copy something into the filesystem (or manipulate it
>> otherwise)
>>   used as a GlusterFS brick directly (without going through a GlusterFS
>> client mount)
>>
>> because there is no checking / healing between GlusterFS's view on the
>> data and the data in the
>> underlying brick filesystem ?
>>
>> Is this a correct view ?
>>
>> you are right !
> Once the data is copied into Gluster, it internally creates meta-data
> about data(file/dir).
> Unless you copy it via Gluster mount point, it is NOT possible to create
> such meta-data.
>

No, it is possible. You just need to be a bit creative.

Could you let me know how many such bricks you have that you want to convert
to glusterfs? It seems like you want replication as well, so please give us
all this information. With your help, maybe we can at least come up with a
document on how this can be done.


>
> Thanks,
> Saravana
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Pranith Kumar Karampuri
On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee <amukh...@redhat.com> wrote:

>
>
> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> I am trying to understand the criticality of these patches. Raghavendra's
>> patch is crucial because gfapi workloads(for samba and qemu) are affected
>> severely. I waited for Krutika's patch because VM usecase can lead to disk
>> corruption on replace-brick. If you could let us know the criticality and
>> we are in agreement that they are this severe, we can definitely take them
>> in. Otherwise next release is better IMO. Thoughts?
>>
>
> If you are asking about how critical they are, then the first two are
> definitely not but third one is actually a critical one as if user upgrades
> from 3.6 to latest with quota enable, further peer probes get rejected and
> the only work around is to disable quota and re-enable it back.
>
> On a different note, 3.9 head is not static and moving forward. So if you
> are really looking at only critical patches need to go in, that's not
> happening, just a word of caution!
>

Yes, this is one more workflow problem. There is no way in the tool to stop
others from merging. I once screwed up Kaushal's release process by merging
a patch because I didn't see his mail about pausing merges or something. I
will send out a post-mortem about our experiences and the pain points we
felt after the 3.9.0 release.


>
>
>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee <amukh...@redhat.com>
>> wrote:
>>
>>> Pranith,
>>>
>>> I'd like to see following patches getting in:
>>>
>>> http://review.gluster.org/#/c/15722/
>>> http://review.gluster.org/#/c/15714/
>>> http://review.gluster.org/#/c/15792/
>>>
>>
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>
>>>> hi,
>>>>   The only problem left was EC taking more time. This should affect
>>>> small files a lot more. Best way to solve it is using compound-fops. So for
>>>> now I think going ahead with the release is best.
>>>>
>>>> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/
>>>> 15778 before going ahead with the release. If we missed any other
>>>> crucial patch please let us know.
>>>>
>>>> Will make the release as soon as this patch is merged.
>>>>
>>>> --
>>>> Pranith & Aravinda
>>>>
>>>> ___
>>>> maintainers mailing list
>>>> maintain...@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/maintainers
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>>
>>
>>
>>
>> --
>> Pranith
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-10 Thread Pranith Kumar Karampuri
On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee <amukh...@redhat.com> wrote:

>
>
> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> I am trying to understand the criticality of these patches. Raghavendra's
>> patch is crucial because gfapi workloads(for samba and qemu) are affected
>> severely. I waited for Krutika's patch because VM usecase can lead to disk
>> corruption on replace-brick. If you could let us know the criticality and
>> we are in agreement that they are this severe, we can definitely take them
>> in. Otherwise next release is better IMO. Thoughts?
>>
>
> If you are asking about how critical they are, then the first two are
> definitely not but third one is actually a critical one as if user upgrades
> from 3.6 to latest with quota enable, further peer probes get rejected and
> the only work around is to disable quota and re-enable it back.
>

Let me take Raghavendra G's input also here.

Raghavendra, what do you think we should do? Merge it or live with it till
3.9.1?


>
> On a different note, 3.9 head is not static and moving forward. So if you
> are really looking at only critical patches need to go in, that's not
> happening, just a word of caution!
>
>
>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee <amukh...@redhat.com>
>> wrote:
>>
>>> Pranith,
>>>
>>> I'd like to see following patches getting in:
>>>
>>> http://review.gluster.org/#/c/15722/
>>> http://review.gluster.org/#/c/15714/
>>> http://review.gluster.org/#/c/15792/
>>>
>>
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>
>>>> hi,
>>>>   The only problem left was EC taking more time. This should affect
>>>> small files a lot more. Best way to solve it is using compound-fops. So for
>>>> now I think going ahead with the release is best.
>>>>
>>>> We are waiting for Raghavendra Talur's http://review.gluster.org/#/c/
>>>> 15778 before going ahead with the release. If we missed any other
>>>> crucial patch please let us know.
>>>>
>>>> Will make the release as soon as this patch is merged.
>>>>
>>>> --
>>>> Pranith & Aravinda
>>>>
>>>> ___
>>>> maintainers mailing list
>>>> maintain...@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/maintainers
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>>
>>
>>
>>
>> --
>> Pranith
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Need inputs for solution for renames + entry self-heal data loss in afr

2016-10-25 Thread Pranith Kumar Karampuri
One of the Red Hat QE engineers (Nag Pavan) found a day-1 bug in entry
self-heal where a file with good data can be replaced with a file with bad
data when renames and self-heal are involved in a particular way.
Sample steps (From the bz):
1) have a plain replica volume with 2 bricks. start the volume and mount it.
2) mkdir dir && mkdir newdir && touch file1
3) bring first brick down
4) echo abc > dir/file1
5) bring the first brick back up and quickly bring the second brick down
before self-heal can be triggered.
6) do mv dir/file1 newdir/file2 <<--- note that this is an empty file.

Now bring the second brick back up. If entry self-heal of 'dir' happens
first, it deletes the file1 with content 'abc'; when the 'newdir' heal then
happens, it leads to the creation of an empty file and the data in the file
is lost.

Same can be achieved using 'link' + 'unlink' as well.

The main reason for this problem is that afr entry-self-heal at the moment
doesn't fully take link-counts into account before deleting the final link
of an inode, so it always does an unlink, recreates the file and does data
heals. In this corner case the unlink happens on the good copy of the file
and we either lose data or get stale data, depending on what data is present
in the sink file.

Solution we are proposing is the following:

1) Posix will maintain a hidden directory '.glusterfs/anoninode' (we can
call it lost+found as well) which will be used by afr/ec for keeping the
'inodes' until their names are resolved.
2) When afr or ec needs to heal a directory and a 'name' has to be deleted,
but the inode is still present on the other bricks, it renames this file
as 'anoninode/' instead of doing unlink/rmdir on it.
3) For files:
 a) Both afr and ec already have logic to do a 'link' instead of a new
file creation if the gfid already exists on the brick. So when a name is
resolved, they do exactly what they do now.
 b) The self-heal daemon will periodically crawl the first level of the
'anoninode' directory to make sure it deletes the 'inodes' represented as
files with gfid-strings as names whenever the link count is > 1. It will
also delete the files if the gfid ceases to exist on the other bricks.
5) For directories:
 a) both afr and ec need to perform 'rename' of the
'anoninode/dir-gfid' to the name it will be resolved to as part of entry
self-heal, instead of 'mkdir'.
 b) If the self-heal daemon crawl detects that a directory has been deleted
on the other bricks, then it has to scan the files inside the deleted
directory and move them into 'anoninode' if the gfid of the file/directory
exists on the other bricks. Otherwise they can be safely deleted.

Please let us know if you see any issues with this approach.
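
To make the brick-level effect of step 2) concrete, here is a rough
illustration (not an implementation; the real work happens inside the afr/ec
self-heal code, and the brick path and gfid below are made up):
===
#Today, entry self-heal of 'dir' would effectively do this on the sink brick:
#    unlink $BRICK/dir/file1           #the last link of a good copy is lost
#With the proposal, the name is parked under anoninode instead:
BRICK=/bricks/brick1                               #made-up brick path
GFID=0c50cbb9-0000-0000-0000-000000000001          #made-up gfid-string

mkdir -p $BRICK/.glusterfs/anoninode
mv $BRICK/dir/file1 $BRICK/.glusterfs/anoninode/$GFID

#Later, when the 'newdir' heal resolves the name, the existing inode is
#linked back instead of an empty file being created, so the data survives:
ln $BRICK/.glusterfs/anoninode/$GFID $BRICK/newdir/file2
===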

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Need inputs for solution for renames + entry self-heal data loss in afr

2016-10-25 Thread Pranith Kumar Karampuri
https://bugzilla.redhat.com/show_bug.cgi?id=1366818 is the bug I am
referring to in the mail above. (Thanks sankarshan for pointing out that I
missed the link :-) )

On Tue, Oct 25, 2016 at 3:14 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> One of the Red hat QE engineers (Nag Pavan) found a day 1 bug in entry
> self-heal where the file with good data can be replaced with file with bad
> data when renames + self-heal is involved in a particular way.
>
> Sample steps (From the bz):
> 1) have a plain replica volume with 2 bricks. start the volume and mount
> it.
> 2) mkdir dir && mkdir newdir && touch file1
> 3) bring first brick down
> 4) echo abc > dir/file1
> 5) bring the first brick back up and quickly bring the second brick down
> before self-heal can be triggered.
> 6) do mv dir/file1 newdir/file2 <<--- note that this is empty file.
>
> Now bring the second brick back up. If entry self-heal of 'dir' happens
> first then it deletes the file1 with content 'abc' now when 'newdir' heal
> happens it leads to creation of empty file and the data in the file is lost.
>
> Same can be achieved using 'link' + 'unlink' as well.
>
> The main reason for this problem is that afr entry-self-heal at the moment
> doesn't care completely about link-counts before deleting the final link of
> an inode, so it always does unlink and recreates the file and does data
> heals. In this corner case unlink happens on the good copy of the file and
> we either lose data or get stale data based on what is the data present on
> the sink file.
>
> Solution we are proposing is the following:
>
> 1) Posix will maintain a hidden directory '.glusterfs/anoninode'(We can
> call it lost+found as well) directory which will be used by afr/ec for
> keeping the 'inodes' until their names are resolved.
> 2) Both afr and ec when they need to heal a directory and a 'name' has to
> be deleted but on the other bricks if the inode is present, it renames this
> file as  'anoninode/' instead of doing unlink/rmdir on it.
> 3) For files:
>  a) Both afr, ec already has logic to do 'link' instead of new
> file creation if a gfid already exists in the brick. So when a name is
> resolved it does exactly what it does now.
>  b) Self-heal daemon will periodically crawl the first level of
> 'anoninode' directory to make sure it deletes the 'inodes' represented as
> files with gfid-string as names whenever the link count is > 1. It will
> also delete the files if the gfid cease to exist on the other bricks.
> 5) For directories:
>  a) both afr and ec need to perform 'rename' of the
> 'anoninode/dir-gfid' to the name it will be resolved to as part of entry
> self-heal, instead of 'mkdir'.
>  b) If self-heal daemon crawl detects that a directory is deleted
> on the other bricks, then it has to scan the files inside the deleted
> directory and move them into 'anoninode' if the gfid of the file/directory
> exists on the other bricks. Otherwise they can be safely deleted.
>
> Please let us know if you see any issues with this approach.
>
> --
> Pranith
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Input/output error when files in .shard folder are deleted

2016-10-25 Thread Pranith Kumar Karampuri
+Krutika

On Mon, Oct 24, 2016 at 4:10 PM, qingwei wei  wrote:

> Hi,
>
> I am currently running a simple gluster setup using one server node
> with multiple disks. I realize that if i delete away all the .shard
> files in one replica in the backend, my application (dd) will report
> Input/Output error even though i have 3 replicas.
>
> My gluster version is 3.7.16
>
> gluster volume file
>
> Volume Name: testHeal
> Type: Replicate
> Volume ID: 26d16d7f-bc4f-44a6-a18b-eab780d80851
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.123.4:/mnt/sdb_mssd/testHeal2
> Brick2: 192.168.123.4:/mnt/sde_mssd/testHeal2
> Brick3: 192.168.123.4:/mnt/sdd_mssd/testHeal2
> Options Reconfigured:
> cluster.self-heal-daemon: on
> features.shard-block-size: 16MB
> features.shard: on
> performance.readdir-ahead: on
>
> dd error
>
> [root@fujitsu05 .shard]# dd of=/home/test if=/mnt/fuseMount/ddTest
> bs=16M count=20 oflag=direct
> dd: error reading ‘/mnt/fuseMount/ddTest’: Input/output error
> 1+0 records in
> 1+0 records out
> 16777216 bytes (17 MB) copied, 0.111038 s, 151 MB/s
>
> in the .shard folder where i deleted all the .shard file, i can see
> one .shard file is recreated
>
> getfattr -d -e hex -m.  9061198a-eb7e-45a2-93fb-eb396d1b2727.1
> # file: 9061198a-eb7e-45a2-93fb-eb396d1b2727.1
> trusted.afr.testHeal-client-0=0x00010001
> trusted.afr.testHeal-client-2=0x00010001
> trusted.gfid=0x41b653f7daa14627b1f91f9e8554ddde
>
> However, the gfid is not the same compare to the other replicas
>
> getfattr -d -e hex -m.  9061198a-eb7e-45a2-93fb-eb396d1b2727.1
> # file: 9061198a-eb7e-45a2-93fb-eb396d1b2727.1
> trusted.afr.dirty=0x
> trusted.afr.testHeal-client-1=0x
> trusted.bit-rot.version=0x0300580dde99000e5e5d
> trusted.gfid=0x9ee5c5eed7964a6cb9ac1a1419de5a40
>
> Is this consider a bug?
>
> Regards,
>
> Cwtan
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel




-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Notice: https://download.gluster.org:/pub/gluster/glusterfs/LATEST has changed

2016-11-17 Thread Pranith Kumar Karampuri
On Wed, Nov 16, 2016 at 11:47 PM, Serkan Çoban 
wrote:

> Hi,
> Will disperse related new futures be ported to 3.7? or we should
> upgrade for those features?
>

hi Serkan,
  Unfortunately, no, they won't be backported to 3.7. We are adding new
features only to the latest releases to prevent accidental bugs slipping
into stable releases. While the features are working well, we did see a
performance problem very late in the cycle in the I/O path, just with EC for
small files. You should wait before you upgrade, IMO.

You were trying to test how long it takes to heal data with multi-threaded
heal in EC, right? Do you want to give us feedback by trying this feature
out?


> On Wed, Nov 16, 2016 at 8:51 PM, Kaleb S. KEITHLEY 
> wrote:
> > Hi,
> >
> > As some of you may have noticed, GlusterFS-3.9.0 was released. Watch
> > this space for the official announcement soon.
> >
> > If you are using Community GlusterFS packages from download.gluster.org
> > you should check your package metadata to be sure that an update doesn't
> > inadvertently update your system to 3.9.
> >
> > There is a new symlink:
> > https://download.gluster.org:/pub/gluster/glusterfs/LTM-3.8 which will
> > remain pointed at the GlusterFS-3.8 packages. Use this instead of
> > .../LATEST to keep getting 3.8 updates without risk of accidentally
> > getting 3.9. There is also a new LTM-3.7 symlink that you can use for
> > 3.7 updates.
> >
> > Also note that there is a new package signing key for the 3.9 packages
> > that are on download.gluster.org. The old key remains the same for 3.8
> > and earlier packages. New releases of 3.8 and 3.7 packages will continue
> > to use the old key.
> >
> > GlusterFS-3.9 is the first "short term" release; it will be supported
> > for approximately six months. 3.7 and 3.8 are Long Term Maintenance
> > (LTM) releases. 3.9 will be followed by 3.10; 3.10 will be a LTM release
> > and 3.9 and 3.7 will be End-of-Life (EOL) at that time.
> >
> >
> > --
> >
> > Kaleb
> > ___
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] gfid generation

2016-11-15 Thread Pranith Kumar Karampuri
On Wed, Nov 16, 2016 at 3:31 AM, Ankireddypalle Reddy <are...@commvault.com>
wrote:

> Kaushal/Pranith,
>   Thanks for clarifying this. As I
> understand it, there are 2 IDs. Please correct me if there is a mistake in my
> assumptions:
>   1) HASH generated by DHT, and this will
> generate the same id for a given file all the time.
>   2) GFID, which is a version 4 UUID. As
> per the links below, this is supposed to contain a timestamp field in it.
> So this will not generate the same id for a given file all the time.
>https://en.wikipedia.org/wiki/
> Universally_unique_identifier
>https://tools.ietf.org/html/rfc4122


That is correct. There is no involvement of the parent gfid in either of these
:-).


>
>
> Thanks and Regards,
> ram
> -Original Message-
> From: Kaushal M [mailto:kshlms...@gmail.com]
> Sent: Tuesday, November 15, 2016 1:21 PM
> To: Ankireddypalle Reddy
> Cc: Pranith Kumar Karampuri; gluster-us...@gluster.org; Gluster Devel
> Subject: Re: [Gluster-users] gfid generation
>
> On Tue, Nov 15, 2016 at 11:33 PM, Ankireddypalle Reddy <
> are...@commvault.com> wrote:
> > Pranith,
> >
> >  Thanks for getting back on this. I am trying to see
> > how gfid can be generated programmatically. Given a file name how do
> > we generate gfid for it. I was reading some of the email threads about
> > it where it was mentioned that gfid is generated based upon parent
> > directory gfid and the file name. Given a same parent gfid and file
> > name do we always end up with the same gfid.
>
> You're probably confusing the hash as generated for the elastic hash
> algorithm in DHT, with UUID. That is a combination of
>
> I always thought that the GFID was a UUID, which was randomly generated.
> (The random UUID might be being modified a little to allow some leeway with
> directory listing, IIRC).
>
> Adding gluster-devel to get more eyes on this.
>
> >
> >
> >
> > Thanks and Regards,
> >
> > ram
> >
> >
> >
> > From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> > Sent: Tuesday, November 15, 2016 12:58 PM
> > To: Ankireddypalle Reddy
> > Cc: gluster-us...@gluster.org
> > Subject: Re: [Gluster-users] gfid generation
> >
> >
> >
> > Sorry, I didn't understand the question. Are you asking, given a file on
> > gluster, how to get the gfid of that file?
> >
> > #getfattr -d -m. -e hex /path/to/file shows it
> >
> >
> >
> > On Fri, Nov 11, 2016 at 9:47 PM, Ankireddypalle Reddy
> > <are...@commvault.com>
> > wrote:
> >
> > Hi,
> >
> > Is the mapping from file name to gfid an idempotent operation.
> > If so please point me to the function that does this.
> >
> >
> >
> > Thanks and Regards,
> >
> > Ram
> >
> > ***Legal Disclaimer***
> >
> > "This communication may contain confidential and privileged material
> > for the
> >
> > sole use of the intended recipient. Any unauthorized review, use or
> > distribution
> >
> > by others is strictly prohibited. If you have received the message by
> > mistake,
> >
> > please advise the sender by reply email and delete the message. Thank
> you."
> >
> > **
> >
> >
> > ___
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
> >
> >
> >
> > --
> >
> > Pranith
> >
> > ***Legal Disclaimer***
> > "This communication may contain confidential and privileged material
> > for the sole use of the intended recipient. Any unauthorized review,
> > use or distribution by others is strictly prohibited. If you have
> > received the message by mistake, please advise the sender by reply
> > email and delete the message. Thank you."
> > **
> >
> > ___
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> ***Legal Disclaimer***
> "This communication may contain confidential and privileged material for
> the
> sole use of the intended recipient. Any unauthorized review, use or
> distribution
> by others is strictly prohibited. If you have received the message by
> mistake,
> please advise the sender by reply email and delete the message. Thank you."
> **
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Container Repo Change + > 50K downloads of Gluster Container images

2016-11-12 Thread Pranith Kumar Karampuri
That is very good news!

On Sun, Nov 13, 2016 at 11:58 AM, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:

> On Thu, Oct 20, 2016 at 11:56 AM, Humble Devassy Chirammal <
> humble.deva...@gmail.com> wrote:
>
>> Hi All,
>>
>> We have kept our official Gluster Container images  in Docker hub for
>> CentOS and Fedora distros for some time now.
>>
>> https://hub.docker.com/r/gluster/gluster-centos/
>> https://hub.docker.com/r/gluster/gluster-fedora/
>>
>>
>> I see a massive increase in the downloads of these container images over the
>> past few months. This is indeed a good sign. :) It seems that we will be
>> crossing 100k+ downloads soon. :)
>>
>
>
> Yes, we are  in 100k+ club , Waiting for 200k+   :)
>
>
>>
>> As a side note, to address some of the copyright issues, I have renamed
>> our source container repo to "https://github.com/gluster/gluster-containers"
>> from "https://github.com/gluster/docker". Whoever forked this repo or is
>> contributing to this repo may take note of this change.
>>
>> Please let us know if you  have any queries on this.
>>
>> --Humble
>>
>>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Issue about the size of fstat is less than the really size of the syslog file

2016-10-31 Thread Pranith Kumar Karampuri
On Tue, Nov 1, 2016 at 7:32 AM, Lian, George (Nokia - CN/Hangzhou) <
george.l...@nokia.com> wrote:

> Hi,
>
>
>
> I will test it with your patches and update to you when I have result.
>

hi George,
  Please use http://review.gluster.org/#/c/15757/2, i.e. the second version
of Raghavendra's patch. I tested it and it worked fine. We are still trying
to figure out quick-read and readdir-ahead as I type this mail.
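
For anyone else following along, the xlators we have been toggling while
narrowing this down can be switched off with the usual volume options (the
volume name below is just a placeholder), remounting the client afterwards so
that the new graph takes effect:

    gluster volume set <volname> performance.write-behind off
    gluster volume set <volname> performance.quick-read off
    gluster volume set <volname> performance.readdir-ahead off
    # stat-prefetch (md-cache) was already being toggled earlier in this thread
    gluster volume set <volname> performance.stat-prefetch off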


>
> Thanks a lots
>
>
>
> Best Regards,
>
> George
>
>
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Monday, October 31, 2016 11:23 AM
> *To:* Lian, George (Nokia - CN/Hangzhou) <george.l...@nokia.com>
> *Cc:* Raghavendra Gowdappa <rgowd...@redhat.com>; Zhang, Bingxuan (Nokia
> - CN/Hangzhou) <bingxuan.zh...@nokia.com>; Gluster-devel@gluster.org;
> Zizka, Jan (Nokia - CZ/Prague) <jan.zi...@nokia.com>
>
> *Subject:* Re: [Gluster-devel] Issue about the size of fstat is less than
> the really size of the syslog file
>
>
>
> Removing i_ext_mbb_wcdma_swd3_da1_mat...@internal.nsn.com, it is causing
> mail delivery problems for me.
>
> George,
>
>  Raghavendra and I made some progress on this issue. In parallel we were
> working on a similar issue where elasticsearch indices are getting
> corrupted, in our opinion because of wrong stat sizes. So I have been
> running different translator stacks to identify the problematic xlators
> which are leading to the index corruption.
>
>   We found the list to be 1) Write-behind, 2) Quick-read, 3)
> Readdir-ahead. Raghavendra and I just had a chat and we are suspecting that
> lack of lookup/readdirp implementation in write-behind could be the reason
> for this problem. Similar problems may exist in other two xlators too. But
> we are working on write-behind with priority.
>
> Our theory is this:
>
> If we do a 4KB write, for example, and it is cached in write-behind, and we
> then do a lookup on the file or a readdirp on the directory containing this
> file, we send out a wrong stat value to the kernel. There are different
> caches between the kernel and gluster which may mean the fstat never reaches
> write-behind. So we need to make sure that we don't get into this situation.
>
> Action items:
>
>  At the moment Raghavendra is working on a patch to implement
> lookup/readdirp in write-behind. I am going to test the same for elastic
> search. Will it be possible for you to test your application against the
> same patch and confirm that the patch fixes the problem?
>
>
>
> On Fri, Oct 28, 2016 at 12:08 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
> hi George,
>
>It would help if we can identify the bare minimum xlators which are
> contributing to the issue like Raghavendra was mentioning earlier. We were
> wondering if it is possible for you to help us in identifying the issue by
> running the workload on a modified setup? We can suggest testing out using
> custom volfiles so that we can slowly build the graph which could be
> causing this issue. We would like you guys to try out this problem with
> just posix-xlator and fuse and nothing else.
>
>
>
> On Thu, Oct 27, 2016 at 1:40 PM, Lian, George (Nokia - CN/Hangzhou) <
> george.l...@nokia.com> wrote:
>
> Hi, Raghavendra,
>
> Could you please give some suggestions on this issue? We have been trying to
> find the cause of this issue for a long time, but there has been no progress :(
>
> Thanks & Best Regards,
> George
>
> -Original Message-
> From: Lian, George (Nokia - CN/Hangzhou)
> Sent: Wednesday, October 19, 2016 4:40 PM
> To: 'Raghavendra Gowdappa' <rgowd...@redhat.com>
> Cc: Gluster-devel@gluster.org; I_EXT_MBB_WCDMA_SWD3_DA1_MATRIX_GMS <
> i_ext_mbb_wcdma_swd3_da1_mat...@internal.nsn.com>; Zhang, Bingxuan (Nokia
> - CN/Hangzhou) <bingxuan.zh...@nokia.com>; Zizka, Jan (Nokia - CZ/Prague)
> <jan.zi...@nokia.com>
> Subject: RE: [Gluster-devel] Issue about the size of fstat is less than
> the really size of the syslog file
>
> Hi, Raghavendra
>
> Just now, we tested it with the glusterfs log at debug level "TRACE" and let
> some application make "glusterfs" produce a large log. In that case, with
> write-behind and stat-prefetch both set to OFF, tailing the glusterfs log
> (such as mnt-{VOLUME-NAME}.log) still failed with "file truncated".
>
> So that means if the file's IO volume is huge, the issue will still be there
> even with write-behind and stat-prefetch both OFF.
>
> Best Regards,
> George
>
> -Original Message-
> From: Raghavendra Gowdappa [mailto:rgowd...@redhat.com]
>
> Sent: Wedn

Re: [Gluster-devel] Issue about the size of fstat is less than the really size of the syslog file

2016-11-03 Thread Pranith Kumar Karampuri
; Yes, I confirm use the Patch 2.
>>> >
>>> > One update: the issue is occurred when readdir-ahead off and
>>> write-behind on.
>>> > Seems gone when write-behind and readdir-ahead and quick-read all off.
>>> > Not verified with readdir-ahead and quick-read both off and
>>> write-behind on
>>> > till now.
>>> >
>>> > Need I test it with write-behind on and readdir-ahead and quick-read
>>> both
>>> > off?
>>>
>>> Yes. I was assuming that the previous results were tested with:
>>> 1. write-behind on with the fix
>>> 2. quick-read and readdir-ahead off
>>>
>>> If not, test results with this configuration will help.
>>>
>>> >
>>> > Best Regards,
>>> > George
>>> >
>>> > -Original Message-
>>> > From: Raghavendra Gowdappa [mailto:rgowd...@redhat.com]
>>> > Sent: Wednesday, November 02, 2016 4:04 PM
>>> > To: Lian, George (Nokia - CN/Hangzhou) <george.l...@nokia.com>
>>> > Cc: Raghavendra G <raghaven...@gluster.com>; Gluster-devel@gluster.org
>>> ;
>>> > Zizka, Jan (Nokia - CZ/Prague) <jan.zi...@nokia.com>; Zhang, Bingxuan
>>> (Nokia
>>> > - CN/Hangzhou) <bingxuan.zh...@nokia.com>
>>> > Subject: Re: [Gluster-devel] Issue about the size of fstat is less
>>> than the
>>> > really size of the syslog file
>>> >
>>> >
>>> >
>>> > - Original Message -
>>> > > From: "George Lian (Nokia - CN/Hangzhou)" <george.l...@nokia.com>
>>> > > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>>> > > Cc: "Raghavendra G" <raghaven...@gluster.com>,
>>> Gluster-devel@gluster.org,
>>> > > "Jan Zizka (Nokia - CZ/Prague)"
>>> > > <jan.zi...@nokia.com>, "Bingxuan Zhang (Nokia - CN/Hangzhou)"
>>> > > <bingxuan.zh...@nokia.com>
>>> > > Sent: Wednesday, November 2, 2016 1:29:13 PM
>>> > > Subject: RE: [Gluster-devel] Issue about the size of fstat is less
>>> than the
>>> > > really size of the syslog file
>>> > >
>>> > > Hi,
>>> > >
>>> > > When those 3 options turn off, the issue seems gone in about 3 hours,
>>> > > otherwise, the issue will be occurred about every 10 minutes.
>>> >
>>> > That's a good news. IIRC, you mentioned that you saw the issue with
>>> just
>>> > write-behind on, with fix applied (readdir-ahead and quick-read off).
>>> Can
>>> > you please confirm you had patcset 2 of http://review.gluster.org/1575
>>> 7?
>>> > patchset 1 had some issues that I corrected in 2.
>>> >
>>> > >
>>> > > Best Regards,
>>> > > George
>>> > >
>>> > > -Original Message-
>>> > > From: Raghavendra Gowdappa [mailto:rgowd...@redhat.com]
>>> > > Sent: Wednesday, November 02, 2016 1:07 PM
>>> > > To: Lian, George (Nokia - CN/Hangzhou) <george.l...@nokia.com>
>>> > > Cc: Raghavendra G <raghaven...@gluster.com>;
>>> Gluster-devel@gluster.org;
>>> > > Zizka, Jan (Nokia - CZ/Prague) <jan.zi...@nokia.com>; Zhang,
>>> Bingxuan
>>> > > (Nokia
>>> > > - CN/Hangzhou) <bingxuan.zh...@nokia.com>
>>> > > Subject: Re: [Gluster-devel] Issue about the size of fstat is less
>>> than the
>>> > > really size of the syslog file
>>> > >
>>> > > Can you try with following xlators turned off?
>>> > >
>>> > > 1. write-behind
>>> > > 2. readdir-ahead
>>> > > 3. quick-read
>>> > >
>>> > > regards,
>>> > > Raghavendra
>>> > >
>>> > > - Original Message -
>>> > > > From: "George Lian (Nokia - CN/Hangzhou)" <george.l...@nokia.com>
>>> > > > To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Raghavendra G"
>>> > > > <raghaven...@gluster.com>
>>> > > > Cc: Gluster-devel@gluster.org, "Jan Zizka (Nokia - CZ/Prague)"
>>> > > > <jan.zi...@nokia.com>, "Bingxuan Zhang (Nokia -
>>> > > > CN/Hangzhou)" <bingx

Re: [Gluster-devel] Gluster Test Thursday - Release 3.9

2016-11-03 Thread Pranith Kumar Karampuri
On Thu, Nov 3, 2016 at 9:55 AM, Pranith Kumar Karampuri <pkara...@redhat.com
> wrote:

>
>
> On Wed, Nov 2, 2016 at 7:00 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>> Just finished testing VM storage use-case.
>>
>> *Volume configuration used:*
>>
>> [root@srv-1 ~]# gluster volume info
>>
>> Volume Name: rep
>> Type: Replicate
>> Volume ID: 2c603783-c1da-49b7-8100-0238c777b731
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: srv-1:/bricks/rep1
>> Brick2: srv-2:/bricks/rep2
>> Brick3: srv-3:/bricks/rep4
>> Options Reconfigured:
>> nfs.disable: on
>> performance.readdir-ahead: on
>> transport.address-family: inet
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> features.shard: on
>> cluster.granular-entry-heal: on
>> cluster.locking-scheme: granular
>> network.ping-timeout: 30
>> server.allow-insecure: on
>> storage.owner-uid: 107
>> storage.owner-gid: 107
>> cluster.data-self-heal-algorithm: full
>>
>> Used FUSE to mount the volume locally on each of the 3 nodes (no external
>> clients).
>> shard-block-size - 4MB.
>>
>> *TESTS AND RESULTS:*
>>
>> *What works:*
>>
>> * Created 3 vm images, one per hypervisor. Installed fedora 24 on all of
>> them.
>>   Used virt-manager for ease of setting up the environment. Installation
>> went fine. All green.
>>
>> * Rebooted the vms. Worked fine.
>>
>> * Killed brick-1. Ran dd on the three vms to create a 'src' file.
>> Captured their md5sum value. Verified that
>> the gfid indices and name indices are created under
>> .glusterfs/indices/xattrop and .glusterfs/indices/entry-changes
>> respectively as they should. Brought the brick back up. Waited until heal
>> completed. Captured md5sum again. They matched.
>>
>> * Killed brick-2. Copied 'src' file from the step above into new file
>> using dd. Captured md5sum on the newly created file.
>> Checksum matched. Waited for heal to finish. Captured md5sum again.
>> Everything matched.
>>
>> * Repeated the test above with brick-3 being killed and brought back up
>> after a while. Worked fine.
>>
>> At the end I also captured md5sums from the backend of the shards on the
>> three replicas. They all were found to be
>> in sync. So far so good.
>>
>> *What did NOT work:*
>>
>> * Started dd again on all 3 vms to copy the existing files to new files.
>> While dd was running, I ran replace-brick to replace the third brick with a
>> new brick on the same node with a different path. This caused dd on all
>> three vms to simultaneously fail with "Input/Output error". I tried to read
>> off the files, even that failed. Rebooted the vms. By this time, /.shard is
>> in
>> split-brain as per heal-info. And the vms seem to have suffered
>> corruption and are in an irrecoverable state.
>>
>> I checked the logs. The pattern is very much similar to the one in the
>> add-brick bug Lindsay reported here - https://bugzilla.redhat.com/sh
>> ow_bug.cgi?id=1387878. Seems like something is going wrong each time
>> there is a graph switch.
>>
>> @Aravinda and Pranith:
>>
>> I will need some time to debug this, if 3.9 release can wait until it is
>> RC'd and fixed.
>> Otherwise we will need to caution the users to not do replace-brick,
>> add-brick etc (or any form of graph switch for that matter) *might* cause
>> vm corruption, irrespective of whether the users are using FUSE or gfapi,
>> in 3.9.0.
>>
>> Let me know what your decision is.
>>
>
> Since this bug is not a regression let us document this as a known issue.
> Let us do our best to get the fix in next release.
>
> I am almost done with testing afr and ec.
>
> For afr, leaks etc were not there in the tests I did.
> But I am seeing performance drop for crawling related tests.
>
> This is with 3.9.0rc2
> running directory_crawl_create ... done (252.91 secs)
> running directory_crawl ... done (104.83 secs)
> running directory_recrawl ... done (71.20 secs)
> running metadata_modify ... done (324.83 secs)
> running directory_crawl_delete ... done (124.22 secs)
>

I guess this was a one off: I ran it again thrice for both 3.8.5 and
3.9.0rc2 and t

Re: [Gluster-devel] Issue about the size of fstat is less than the really size of the syslog file

2016-10-28 Thread Pranith Kumar Karampuri
hi George,
   It would help if we could identify the bare minimum set of xlators that
contribute to the issue, as Raghavendra mentioned earlier. We were wondering
whether you could help us identify the issue by running the workload on a
modified setup. We can suggest custom volfiles so that we can slowly build up
the graph that could be causing this issue. We would like you to first try to
reproduce this problem with just the posix xlator and fuse and nothing else.
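
To make the idea concrete, the bare-minimum graph would be a hand-written
volfile containing only the posix xlator, mounted directly with the fuse
client. Something along these lines (the brick path, volfile path and mount
point are placeholders to be adapted to your setup):

    volume test-posix
        type storage/posix
        option directory /bricks/test
    end-volume

    # mount it locally; the fuse xlator is added on top automatically
    glusterfs -f /root/minimal.vol /mnt/test

If the wrong-size fstat does not reproduce there, we add one xlator at a time
(write-behind, md-cache, readdir-ahead, ...) on top of posix in the volfile
until it does.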

On Thu, Oct 27, 2016 at 1:40 PM, Lian, George (Nokia - CN/Hangzhou) <
george.l...@nokia.com> wrote:

> Hi, Raghavendra,
>
> Could you please give some suggestions on this issue? We have been trying to
> find the cause of this issue for a long time, but there has been no progress :(
>
> Thanks & Best Regards,
> George
>
> -Original Message-
> From: Lian, George (Nokia - CN/Hangzhou)
> Sent: Wednesday, October 19, 2016 4:40 PM
> To: 'Raghavendra Gowdappa' 
> Cc: Gluster-devel@gluster.org; I_EXT_MBB_WCDMA_SWD3_DA1_MATRIX_GMS <
> i_ext_mbb_wcdma_swd3_da1_mat...@internal.nsn.com>; Zhang, Bingxuan (Nokia
> - CN/Hangzhou) ; Zizka, Jan (Nokia - CZ/Prague)
> 
> Subject: RE: [Gluster-devel] Issue about the size of fstat is less than
> the really size of the syslog file
>
> Hi, Raghavendra
>
> Just now, we tested it with the glusterfs log at debug level "TRACE" and let
> some application make "glusterfs" produce a large log. In that case, with
> write-behind and stat-prefetch both set to OFF, tailing the glusterfs log
> (such as mnt-{VOLUME-NAME}.log) still failed with "file truncated".
>
> So that means if the file's IO volume is huge, the issue will still be there
> even with write-behind and stat-prefetch both OFF.
>
> Best Regards,
> George
>
> -Original Message-
> From: Raghavendra Gowdappa [mailto:rgowd...@redhat.com]
> Sent: Wednesday, October 19, 2016 2:54 PM
> To: Lian, George (Nokia - CN/Hangzhou) 
> Cc: Gluster-devel@gluster.org; I_EXT_MBB_WCDMA_SWD3_DA1_MATRIX_GMS <
> i_ext_mbb_wcdma_swd3_da1_mat...@internal.nsn.com>; Zhang, Bingxuan (Nokia
> - CN/Hangzhou) ; Zizka, Jan (Nokia - CZ/Prague)
> 
> Subject: Re: [Gluster-devel] Issue about the size of fstat is less than
> the really size of the syslog file
>
>
>
> - Original Message -
> > From: "George Lian (Nokia - CN/Hangzhou)" 
> > To: "Raghavendra Gowdappa" 
> > Cc: Gluster-devel@gluster.org, "I_EXT_MBB_WCDMA_SWD3_DA1_MATRIX_GMS"
> > , "Bingxuan Zhang
> (Nokia - CN/Hangzhou)"
> > , "Jan Zizka (Nokia - CZ/Prague)" <
> jan.zi...@nokia.com>
> > Sent: Wednesday, October 19, 2016 12:05:01 PM
> > Subject: RE: [Gluster-devel] Issue about the size of fstat is less than
> the really size of the syslog file
> >
> > Hi, Raghavendra,
> >
> > Thanks a lots for your quickly update!
> > In my case, there are so many process(write) is writing to the syslog
> file,
> > it do involve the writer is in the same host and writing in same mount
> point
> > while the tail(reader) is reading it.
> >
> > The bug I just guess is:
> > When a writer write the data with write-behind, it call the call-back
> > function " mdc_writev_cbk" and called "mdc_inode_iatt_set_validate" to
> > validate the "iatt" data, but with the code I mentioned last mail, it do
> > nothing.
>
> mdc_inode_iatt_set_validate has following code
>
> 
> if (!iatt || !iatt->ia_ctime) {
>         mdc->ia_time = 0;
>         goto unlock;
> }
> 
>
> Which means a NULL iatt sets mdc->ia_time to 0. This results in subsequent
> lookup/stat calls to be NOT served from md-cache. Instead, the stat is
> served from backend bricks. So, I don't see an issue here.
>
> However, one case where a NULL iatt is different from a valid iatt (which
> differs from the value stored in md-cache) is that the latter results in a
> call to inode_invalidate. This invalidation propagates to kernel and all
> dentry and page cache corresponding to file is purged. So, I am suspecting
> whether the stale stat you saw was served from kernel cache (not from
> glusterfs). If this is the case, having mount options "attribute-timeout=0"
> and "entry-timeout=0" should've helped.
>
> I am still at loss to point out the RCA for this issue.
>
>
> > And in same time, the reader(tail) read the "iatt" data, but in case of
> the
> > cache-time is not timeout, it will return the "iatt" data without the
> last
> > change.
> >
> > Do your think it is a possible bug?
> >
> > Thanks & Best Regards,
> > George
> >
> > -Original Message-
> > From: Raghavendra Gowdappa [mailto:rgowd...@redhat.com]
> > Sent: Wednesday, October 19, 2016 2:06 PM
> > To: Lian, George (Nokia - CN/Hangzhou) 
> > Cc: Gluster-devel@gluster.org; 

Re: [Gluster-devel] Gluster Test Thursday - Release 3.9

2016-10-28 Thread Pranith Kumar Karampuri
On Fri, Oct 28, 2016 at 12:35 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> On 25 Oct 2016 12:42, "Aravinda"  wrote:
> >
> > Hi,
> >
> > Since Automated test framework for Gluster is in progress, we need help
> from Maintainers and developers to test the features and bug fixes to
> release Gluster 3.9.
> >
>
> Is the following roadmap still valid, or were any changes made for this
> release?
> https://www.gluster.org/community/roadmap/3.9/
>
No it is not completely valid. We will update it and announce the release
sometime soon.


>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Gluster Test Thursday - Release 3.9

2016-10-28 Thread Pranith Kumar Karampuri
On Fri, Oct 28, 2016 at 4:33 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2016-10-28 12:32 GMT+02:00 Pranith Kumar Karampuri <pkara...@redhat.com>:
> > No it is not completely valid. We will update it and announce the release
> > sometime soon.
>
> Thank you.
> Could you also fix the other roadmaps with certain features and what
> is being worked on?
> There is a little bit of confusion in this area of gluster.
>

Yes. Will do that.



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Gluster Test Thursday - Release 3.9

2016-11-03 Thread Pranith Kumar Karampuri
On Thu, Nov 3, 2016 at 4:42 PM, Pranith Kumar Karampuri <pkara...@redhat.com
> wrote:

>
>
> On Thu, Nov 3, 2016 at 9:55 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Wed, Nov 2, 2016 at 7:00 PM, Krutika Dhananjay <kdhan...@redhat.com>
>> wrote:
>>
>>> Just finished testing VM storage use-case.
>>>
>>> *Volume configuration used:*
>>>
>>> [root@srv-1 ~]# gluster volume info
>>>
>>> Volume Name: rep
>>> Type: Replicate
>>> Volume ID: 2c603783-c1da-49b7-8100-0238c777b731
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: srv-1:/bricks/rep1
>>> Brick2: srv-2:/bricks/rep2
>>> Brick3: srv-3:/bricks/rep4
>>> Options Reconfigured:
>>> nfs.disable: on
>>> performance.readdir-ahead: on
>>> transport.address-family: inet
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> performance.io-cache: off
>>> performance.stat-prefetch: off
>>> cluster.eager-lock: enable
>>> network.remote-dio: enable
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>> features.shard: on
>>> cluster.granular-entry-heal: on
>>> cluster.locking-scheme: granular
>>> network.ping-timeout: 30
>>> server.allow-insecure: on
>>> storage.owner-uid: 107
>>> storage.owner-gid: 107
>>> cluster.data-self-heal-algorithm: full
>>>
>>> Used FUSE to mount the volume locally on each of the 3 nodes (no
>>> external clients).
>>> shard-block-size - 4MB.
>>>
>>> *TESTS AND RESULTS:*
>>>
>>> *What works:*
>>>
>>> * Created 3 vm images, one per hypervisor. Installed fedora 24 on all of
>>> them.
>>>   Used virt-manager for ease of setting up the environment. Installation
>>> went fine. All green.
>>>
>>> * Rebooted the vms. Worked fine.
>>>
>>> * Killed brick-1. Ran dd on the three vms to create a 'src' file.
>>> Captured their md5sum value. Verified that
>>> the gfid indices and name indices are created under
>>> .glusterfs/indices/xattrop and .glusterfs/indices/entry-changes
>>> respectively as they should. Brought the brick back up. Waited until heal
>>> completed. Captured md5sum again. They matched.
>>>
>>> * Killed brick-2. Copied 'src' file from the step above into new file
>>> using dd. Captured md5sum on the newly created file.
>>> Checksum matched. Waited for heal to finish. Captured md5sum again.
>>> Everything matched.
>>>
>>> * Repeated the test above with brick-3 being killed and brought back up
>>> after a while. Worked fine.
>>>
>>> At the end I also captured md5sums from the backend of the shards on the
>>> three replicas. They all were found to be
>>> in sync. So far so good.
>>>
>>> *What did NOT work:*
>>>
>>> * Started dd again on all 3 vms to copy the existing files to new files.
>>> While dd was running, I ran replace-brick to replace the third brick with a
>>> new brick on the same node with a different path. This caused dd on all
>>> three vms to simultaneously fail with "Input/Output error". I tried to read
>>> off the files, even that failed. Rebooted the vms. By this time, /.shard is
>>> in
>>> split-brain as per heal-info. And the vms seem to have suffered
>>> corruption and are in an irrecoverable state.
>>>
>>> I checked the logs. The pattern is very much similar to the one in the
>>> add-brick bug Lindsay reported here - https://bugzilla.redhat.com/sh
>>> ow_bug.cgi?id=1387878. Seems like something is going wrong each time
>>> there is a graph switch.
>>>
>>> @Aravinda and Pranith:
>>>
>>> I will need some time to debug this, if 3.9 release can wait until it is
>>> RC'd and fixed.
>>> Otherwise we will need to caution the users to not do replace-brick,
>>> add-brick etc (or any form of graph switch for that matter) *might* cause
>>> vm corruption, irrespective of whether the users are using FUSE or gfapi,
>>> in 3.9.0.
>>>
>>> Let me know what your decision is.
>>>
>>
>> Since this bug is not a regression let us document this as a known issue.
>> Let us do our best to get the fix in next release.
>>
>> I am almost

Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-13 Thread Pranith Kumar Karampuri
On Thu, Oct 6, 2016 at 1:49 AM, Michael Adam <ob...@samba.org> wrote:

> On 2016-10-05 at 09:45 -0400, Ira Cooper wrote:
> > "Feedback-given-by: <nosy.person@silly.place>"
>

Niels/Nigel,
   Is this easier to do?


>
> I like that one - thanks! :-)
>
> Michael
>
> > - Original Message -
> > > On 2016-09-30 at 17:52 +0200, Niels de Vos wrote:
> > > > On Fri, Sep 30, 2016 at 08:50:12PM +0530, Ravishankar N wrote:
> > > > > On 09/30/2016 06:38 PM, Niels de Vos wrote:
> > > > > > On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri
> > > > > > wrote:
> > > > ...
> > > > > > Maybe we can add an additional tag that mentions all the people
> that
> > > > > > did do reviews of older versions of the patch. Not sure what the
> tag
> > > > > > would be, maybe just CC?
> > > > > It depends on what tags would be processed to obtain statistics on
> review
> > > > > contributions.
> > > >
> > > > Real statistics would come from Gerrit, not from the 'git log'
> output.
> > > > We do have a ./extras/who-wrote-glusterfs/ in the sources, but that
> is
> > > > only to get an idea about the changes that were made and should not
> be
> > > > used for serious statistics.
> > > >
> > > > It is possible to feed the Gerrit comment-stream into things like
> > > > Elasticsearch and get an accurate impression how many reviews people
> do
> > > > (and much more). I hope we can get some contribution diagrams from
> > > > someting like this at one point.
> > > >
> > > > Would some kind of Gave-feedback tag for people that left a comment
> on
> > > > earlier versions of the patch be appreciated by others? It will show
> in
> > > > the 'git log' who was involved in some way or form.
> > >
> > > I think this would be fair.
> > >
> > > Reviewed-by tags should imho be reserved for the final
> > > incarnation of the patch. Those mean that the person named
> > > in the tag has aproved this version of the patch for getting
> > > into the official tree. A previous version of the patch can
> > > have been entirely different, so a reviewed-by for that
> > > previous version may not actually apply to the new version at all
> > > and hence create a false impression!
> > >
> > > It is also difficult to track all activities by tags,
> > > and anyone who wants to measure performance and contributions
> > > only by looking at git commit tags will not be doing several
> > > people justice. We could add 'discussed-with' or 'designed-by'
> > > tags, etc ... ;-)
> > >
> > > On a serious note, in Samba we use 'Pair-programmed-with' tags,
> > > because we do pair-programming a lot, but only one person can
> > > be an author of a git commit ...
> > >
> > > The 'Gave-feedback' tag I do like. even though it does
> > > not quite match with the foobar-by pattern of other tags.
> > >
> > > Michael
> > >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-15 Thread Pranith Kumar Karampuri
Which review-tool do you suggest Michael? Any other alternatives that are
better? Don't tell me email :-)

On Sun, Oct 16, 2016 at 1:20 AM, Michael Adam  wrote:

> On 2016-10-14 at 11:44 +0200, Niels de Vos wrote:
> > On Fri, Oct 14, 2016 at 02:21:23PM +0530, Nigel Babu wrote:
> > > I've said on this thread before, none of this is easy to do. It needs
> us to
> > > fork Gerrit to make our own changes. I would argue that depending on
> the
> > > data from the commit message is folly.
> >
> > Eventhough we all seem to agree that statistics based on commit messages
> > is not correct,
>
> I think it is the best we can currently offer.
> Let's be honest: Gerrit sucks. Big time!
> If gerrit is no more, the git logs will survive.
> Git is the common denominator that will last,
> with all the tags that the commit messages carry.
> So for now, I'd say the more tags we can fit into
> git commit mesages the better... :-)
>
> > it looks like it is an incentive to get reviewing valued
> > more. We need to promote the reviewing work somehow, and this is one way
> > to do it.
> >
> > Forking Gerrit is surely not the right thing.
>
> Right. Avoid it if possible. Did I mention gerrit sucks? ;-)
>
> Cheers - Michael
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Xavi documented how erasure coding algo works

2016-10-14 Thread Pranith Kumar Karampuri
Your comments are welcome @ http://review.gluster.org/15637

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-14 Thread Pranith Kumar Karampuri
How do we get the following tags in the commit message?

> Smoke: Gluster Build System <jenk...@build.gluster.org>
> NetBSD-regression: NetBSD Build System <jenk...@build.gluster.org>
> CentOS-regression: Gluster Build System <jenk...@build.gluster.org>


On Fri, Oct 14, 2016 at 3:14 PM, Niels de Vos <nde...@redhat.com> wrote:

> On Fri, Oct 14, 2016 at 02:21:23PM +0530, Nigel Babu wrote:
> > I've said on this thread before, none of this is easy to do. It needs us
> to
> > fork Gerrit to make our own changes. I would argue that depending on the
> > data from the commit message is folly.
>
> Eventhough we all seem to agree that statistics based on commit messages
> is not correct, it looks like it is an incentive to get reviewing valued
> more. We need to promote the reviewing work somehow, and this is one way
> to do it.
>
> Forking Gerrit is surely not the right thing. But could it not get
> discussed with the rest of the Gerrit community? I hope that the Gerrit
> admins follow the Gerrit project and know how to report feature requests
> or such?
>
> Thanks,
> Niels
>
>
> >
> > On Fri, Oct 14, 2016 at 12:23 PM, Niels de Vos <nde...@redhat.com>
> wrote:
> >
> > > On Thu, Oct 13, 2016 at 11:01:43PM +0530, Pranith Kumar Karampuri
> wrote:
> > > > On Thu, Oct 6, 2016 at 1:49 AM, Michael Adam <ob...@samba.org>
> wrote:
> > > >
> > > > > On 2016-10-05 at 09:45 -0400, Ira Cooper wrote:
> > > > > > "Feedback-given-by: <nosy.person@silly.place>"
> > > > >
> > > >
> > > > Niels/Nigel,
> > > >Is this easier to do?
> > >
> > > No idea if this can be done by a Gerrit configuration, I'm not an admin
> > > there :)
> > >
> > > I suspect Gerrit gives the option to run a script after someone pressed
> > > the [submit] button for merging, and before the actual commit is pushed
> > > into the branch. If there is no config option, such a hook-script could
> > > be made to work. But, my Gerrit experience on that level is
> > > non-existent, so I can be completely wrong.
> > >
> > > Niels
> > >
> > > >
> > > >
> > > > >
> > > > > I like that one - thanks! :-)
> > > > >
> > > > > Michael
> > > > >
> > > > > > - Original Message -
> > > > > > > On 2016-09-30 at 17:52 +0200, Niels de Vos wrote:
> > > > > > > > On Fri, Sep 30, 2016 at 08:50:12PM +0530, Ravishankar N
> wrote:
> > > > > > > > > On 09/30/2016 06:38 PM, Niels de Vos wrote:
> > > > > > > > > > On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar
> > > Karampuri
> > > > > > > > > > wrote:
> > > > > > > > ...
> > > > > > > > > > Maybe we can add an additional tag that mentions all the
> > > people
> > > > > that
> > > > > > > > > > did do reviews of older versions of the patch. Not sure
> what
> > > the
> > > > > tag
> > > > > > > > > > would be, maybe just CC?
> > > > > > > > > It depends on what tags would be processed to obtain
> > > statistics on
> > > > > review
> > > > > > > > > contributions.
> > > > > > > >
> > > > > > > > Real statistics would come from Gerrit, not from the 'git
> log'
> > > > > output.
> > > > > > > > We do have a ./extras/who-wrote-glusterfs/ in the sources,
> but
> > > that
> > > > > is
> > > > > > > > only to get an idea about the changes that were made and
> should
> > > not
> > > > > be
> > > > > > > > used for serious statistics.
> > > > > > > >
> > > > > > > > It is possible to feed the Gerrit comment-stream into things
> like
> > > > > > > > Elasticsearch and get an accurate impression how many reviews
> > > people
> > > > > do
> > > > > > > > (and much more). I hope we can get some contribution diagrams
> > > from
> > > > > > > > someting like this at one point.
> > > > > > > >
> > > > > > > > Would some kind of Gave-feedback tag for people that

Re: [Gluster-devel] [Gluster-users] opportunist for outreachy

2016-10-14 Thread Pranith Kumar Karampuri
On Fri, Oct 14, 2016 at 9:27 PM, Shyam  wrote:

> On 10/14/2016 10:48 AM, Manikandan Selvaganesh wrote:
>
>> Hi Soumya,
>>
>> Welcome to the community.
>>
>> Here[1] is the link for Gluster Documentation. I would suggest you to
>> google and
>> read a bit about GlusterFS and then get started with "Quick Start
>> Guide[2]".
>> Once you have done your setup and have played a bit around the
>> installation and
>> configuration move on with "Developers Guide[3]".
>>
>> If you want to get started with Code contributions pick some EasyFix
>> bugs which
>> can be found here[4]. After this I hope you would have got a minimal
>> idea and then
>> explore more in depth and pick up the project/component which interests
>> you more.
>> Again, we have some list of projects[5] already listed, check out if
>> anything interests
>> you here. Feel free to bring your own ideas as well. These are quite
>> generic for anyone
>> who is new to the community and in case if  you want to know
>> specifically about
>> Outreachy, someone in the community will surely respond to you shortly.
>>
>
> Let me take the Outreachy part up.
>
> There are 2 projects there, one relating to the documentation, for which
> Manikandan has filled in some links and thoughts. The other being the
> instrumentation tooling around performance.
>
> For the latter, I would suggest that you get a gluster volume up and
> running, and attempt the GlusterBench.py [6] against it, and start with
> reporting the results. Again, Manikandan has covered getting gluster up and
> running. For any questions, or things that you get stuck on when running
> the bench script, post back here and we will help as needed.
>
>
>> If you have queries, please mail us back. Also, we are always available
>> on #gluster-dev
>> and #gluster-meeting in Freenode.
>>
>> All the best :-)
>>
>> [1] https://gluster.readthedocs.io/en/latest/
>>
>> [2] https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/
>> Quickstart/
>>
>> [3] https://gluster.readthedocs.io/en/latest/Developer-guide/Dev
>> elopers-Index/
>>
>> [4] https://gluster.readthedocs.io/en/latest/Developer-guide/Eas
>> y-Fix-Bugs/
>>
>> [5] https://gluster.readthedocs.io/en/latest/Developer-guide/Projects/
>>
>
> [6] GlusterBench.py : https://github.com/gluster/gbe
> nch/tree/master/bench-tests/bt--0001


Hi Soumya,
I see that the important information is already given by Mani and
Shyam. I went to IIIT-Hyderabad for my Engineering (2003-2007). It is
really good to see you here :-). I will be happy to visit the campus next
time I visit Hyderabad and introduce folks to gluster (I am hoping Linux
Users Group is still as active as it used to be). I heard that our college
is very famous now because of its performances in the ACM ICPC; maybe we
should make it famous for open-source contributions too in the future :-).

All the best!


>
>
>>
>> --
>> Cheers,
>> Manikandan Selvaganesh.
>>
>> On Fri, Oct 14, 2016 at 7:56 PM, Ms ms > > wrote:
>>
>> Hi,
>>
>> I'm a research student pursuing my Masters in IIIT-Hyderabad. I am
>> keen on working on Gluster's Outreachy project.
>>
>> I have prior experience in configuring, maintaining and managing
>> systems in an MHRD project. I have completed the required course
>> credits towards my degree and am working on my Thesis currently. It
>> would be great opportunity for me to learn and contribute to the
>> project as well.
>>
>> As I am a bit new to the community it would be nice if anyone can
>> guide me a few useful resources to get me started.
>>
>> Thanks and regards,
>> Soumya
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org 
>> http://www.gluster.org/mailman/listinfo/gluster-users
>> 
>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] 1402538 : Assertion failure during rebalance of symbolic links

2016-12-14 Thread Pranith Kumar Karampuri
On Wed, Dec 14, 2016 at 1:48 PM, Xavier Hernandez <xhernan...@datalab.es>
wrote:

> There's another issue with the patch that Ashish sent.
>
> The original problem is that a setattr on a symbolic link gets transformed
> to a regular file while the fop is being executed. Even if we apply the
> Ashish' patch to avoid the assert, the setattr fop will still succeed and
> incorrectly change the attributes of a gluster special file that shouldn't
> change.
>
> I think that's a bigger problem that needs to be addressed globally.
>
> I'm sure this is not an easy solution, but probably the best way would be
> to have distinct inodes for the gluster link files and the original file.
> This way most of these problems should be solved.
>

Is there any reason why the type of the file differs on the hashed/cached
subvols? Could we have the same type of file on both dht subvolumes? That
would avoid having to unlink the regular file and recreate it with the
actual type of the file.
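
For anyone following the thread who has not looked at a linkto file on disk,
this is roughly what rebalance leaves on the new hashed subvolume while the
real entry still lives on the cached one (illustrative output, not taken from
this bug's setup):

    # on the hashed subvolume's brick: a zero-byte regular file with only the
    # sticky bit set (the ".T" file mentioned above)
    ls -l /brick/dir/thelink
    ---------T. 2 root root 0 Dec 13 10:00 /brick/dir/thelink
    # it carries an xattr pointing at the subvolume that holds the real file
    getfattr -n trusted.glusterfs.dht.linkto -e text /brick/dir/thelink

So the brick, and hence posix_stat, genuinely sees a regular file while the
cached subvolume still holds the symbolic link, which is how the two types
end up racing within a single setattr.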


>
> Xavi
>
>
> On 12/14/2016 09:02 AM, Xavier Hernandez wrote:
>
>> On 12/14/2016 06:10 AM, Raghavendra Gowdappa wrote:
>>
>>>
>>>
>>> - Original Message -
>>>
>>>> From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>>>> To: "Ashish Pandey" <aspan...@redhat.com>
>>>> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Shyam Ranganathan"
>>>> <srang...@redhat.com>, "Nithya Balachandran"
>>>> <nbala...@redhat.com>, "Xavier Hernandez" <xhernan...@datalab.es>,
>>>> "Raghavendra Gowdappa" <rgowd...@redhat.com>
>>>> Sent: Tuesday, December 13, 2016 9:29:46 PM
>>>> Subject: Re: 1402538 : Assertion failure during rebalance of symbolic
>>>> links
>>>>
>>>> On Tue, Dec 13, 2016 at 2:45 PM, Ashish Pandey <aspan...@redhat.com>
>>>> wrote:
>>>>
>>>> Hi All,
>>>>>
>>>>> We have been seeing an issue where re balancing symbolic links leads
>>>>> to an
>>>>> assertion failure in EC volume.
>>>>>
>>>>> The root cause of this is that while migrating symbolic links to
>>>>> other sub
>>>>> volume, it creates a link file (with attributes .T) .
>>>>> This file is a regular file.
>>>>> Now, during migration a setattr comes to this link and because of
>>>>> possible
>>>>> race, posix_stat return stats of this "T" file.
>>>>> In ec_manager_seattr, we receive callbacks and check the type of
>>>>> entry. If
>>>>> it is a regular file we try to get size and if it is not there, we
>>>>> raise an
>>>>> assert.
>>>>> So, basically we are checking a size of the link (which will not have
>>>>> size) which has been returned as regular file and we are ending up when
>>>>> this condition
>>>>> becomes TRUE.
>>>>>
>>>>> Now, this looks like a problem with re balance and difficult to fix at
>>>>> this point (as per the discussion).
>>>>> We have an alternative to fix it in EC but that will be more like a
>>>>> hack
>>>>> than an actual fix. We should not modify EC
>>>>> to deal with an individual issue which is in other translator.
>>>>>
>>>>
>>> I am afraid, dht doesn't have a better way of handling this. While DHT
>>> maintains abstraction (of a symbolic link) to layers above, the layers
>>> below it cannot be shielded from seeing the details like a linkto file
>>> etc.
>>>
>>
>> That's ok, and I think it's the right thing to do. From the point of
>> view of EC, it's irrelevant how the file is seen by upper layers. It
>> only cares about the files below it.
>>
>> If the concern really is that the file is changing its type in a span
>>> of single fop, we can probably explore the option of locking (or other
>>> synchronization mechanisms) to prevent migration taking place, while a
>>> fop is in progress.
>>>
>>
>> That's the real problem. Some operations receive an inode referencing a
>> symbolic link on input but the iatt structures from the callback
>> reference a regular file. It's even worse because it's an asynchronous
>> race so some of the bricks may return a regular file and some may return
>> a symbolic link. If there are more than redundancy bricks returni

Re: [Gluster-devel] 1402538 : Assertion failure during rebalance of symbolic links

2016-12-13 Thread Pranith Kumar Karampuri
On Tue, Dec 13, 2016 at 2:45 PM, Ashish Pandey  wrote:

> Hi All,
>
> We have been seeing an issue where rebalancing symbolic links leads to an
> assertion failure in an EC volume.
>
> The root cause is that while migrating a symbolic link to another
> subvolume, rebalance creates a link file (with the .T attributes).
> This file is a regular file.
> Now, during migration a setattr comes in for this link and, because of a
> possible race, posix_stat returns the stats of this "T" file.
> In ec_manager_setattr, we receive the callbacks and check the type of the
> entry. If it is a regular file we try to get its size, and if it is not
> there, we raise an assert.
> So, basically, we are checking the size of the link (which will not have a
> size) that has been returned as a regular file, and we end up asserting
> when this condition becomes TRUE.
>
> Now, this looks like a problem with rebalance that is difficult to fix at
> this point (as per the discussion).
> We have an alternative fix in EC, but that would be more of a hack
> than an actual fix. We should not modify EC
> to deal with an individual issue that lies in another translator.
>
> Now the question is how to proceed with this? Any suggestions?
>

Raghavendra/Nithya,
 Could one of you explain the difficulties in fixing this issue in
DHT, so that Xavi is also caught up on why we should add this change
in EC in the short term?


>
> Details on this bug can be found here -
> https://bugzilla.redhat.com/show_bug.cgi?id=1402538
>
> 
> Ashish
>
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] What is the answer to the 3.9.1 release question?

2017-01-06 Thread Pranith Kumar Karampuri
On Fri, Jan 6, 2017 at 4:51 PM, Kaleb Keithley  wrote:

>
> Nothing?
>
>
The reason I asked for 2 maintainers for the release is so that there would
be load distribution. But unfortunately the pairing was bad: both of us are
impacted by the same work, which leaves not enough time for upstream
release maintenance. Last time I was loaded a bit less, so I took care of most
of the things at the end with help from Amye and Vijay. But this time I am
swamped with work too. Please suggest how we can get the release out.

Maybe Aravinda can chime in if he is free enough to do this.


>
> - Original Message -
> > From: "Kaleb S. KEITHLEY" 
> >
> >
> > There was considerable discussion in the community meeting yesterday.
> >
> > If we're not going to get one (any time soon) I'm contemplating a
> > 3.9.0-n+1 update in Fedora, Ubuntu Launchpad PPA, etc., that would
> > consist of 3.9.0 plus all the commits to the release-3.9 branch to date.
> >
> > Obviously I'd rather have an official 3.9.1 release by the maintainers.
> >
> > --
> >
> > Kaleb
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] What is the answer to the 3.9.1 release question?

2017-01-06 Thread Pranith Kumar Karampuri
I am fine with it. Thanks Kaleb!!

On Fri, Jan 6, 2017 at 6:08 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com>
wrote:

> On 01/06/2017 06:42 AM, Pranith Kumar Karampuri wrote:
> >
> >
> > On Fri, Jan 6, 2017 at 4:51 PM, Kaleb Keithley <kkeit...@redhat.com
> > <mailto:kkeit...@redhat.com>> wrote:
> >
> >
> > Nothing?
> >
> >
> > The reason I asked for 2 maintainers for the release is so that there
> > will be load distribution. But unfortunately the pairing was bad, both
> > of us are impacted by the same work which is leading to not enough time
> > for upstream release maintenance. Last time I was loaded a bit less so
> > took care of most of the things at the end with help from Amye and
> > Vijay. But this time I am swamped with work too. Please suggest how we
> > can get the release out.
> >
> > May be Aravinda can add if he is a bit free to do this.
>
> I'd certainly be willing to step in and help. I don't have time either
> to do an extensive round of testing.
>
> I'm not convinced that an STM release update needs huge amounts of
> testing either. (But feel free to disagree with me. ;-))
>
> If you and Aravinda are okay with it, I'll do some minimal testing, tag,
> and release.
>
> Just so we can get _something_ out!?!  What do you think?
>
> --
>
> Kaleb
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Get GFID for a file through libgfapi

2017-01-09 Thread Pranith Kumar Karampuri
Maybe try a getxattr for "glusterfs.gfid.string"?
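
Something along these lines is what I mean (an untested sketch with error
handling trimmed; the volume name, server host and file path are just
placeholders). errno 95 is EOPNOTSUPP, i.e. the "glusterfs.gfid" name is not
recognized on that path of the stack, but I am hoping the string variant is
served as a virtual xattr the same way it is on a FUSE mount:

    /* getgfid.c: cc -o getgfid getgfid.c -lgfapi */
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        char gfid[64] = {0};
        ssize_t ret;
        glfs_t *fs = glfs_new("testvol");                      /* placeholder volume */

        glfs_set_volfile_server(fs, "tcp", "server1", 24007);  /* placeholder host */
        glfs_init(fs);

        /* virtual xattr that returns the gfid as a printable uuid string */
        ret = glfs_getxattr(fs, "/path/to/file", "glusterfs.gfid.string",
                            gfid, sizeof(gfid) - 1);
        if (ret > 0)
            printf("gfid: %s\n", gfid);
        else
            perror("glfs_getxattr");

        glfs_fini(fs);
        return 0;
    }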

On Mon, Jan 9, 2017 at 11:23 PM, Ankireddypalle Reddy 
wrote:

> Hi,
>
> I am trying to extract the GFID for a file through the libgfapi
> interface. When I try to extract the value of the extended attribute
> glusterfs.gfid through libgfapi I get errno 95.  This works through FUSE
> though. Is there a way to extract the GFID for a file through libgfapi?
>
>
>
> Thanks and Regards,
>
> Ram
> ***Legal Disclaimer***
> "This communication may contain confidential and privileged material for
> the
> sole use of the intended recipient. Any unauthorized review, use or
> distribution
> by others is strictly prohibited. If you have received the message by
> mistake,
> please advise the sender by reply email and delete the message. Thank you."
> **
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Maintainers 2.0 Proposal

2017-03-24 Thread Pranith Kumar Karampuri
Do we also plan to publish similar guidelines for deciding on Project
maintainer?

On Fri, Mar 24, 2017 at 2:24 AM, Michael Scherer <msche...@redhat.com>
wrote:

> Le samedi 18 mars 2017 à 16:47 +0530, Pranith Kumar Karampuri a écrit :
> > On Sat, Mar 18, 2017 at 1:20 AM, Amar Tumballi <atumb...@redhat.com>
> wrote:
> >
> > > I don't want to take the discussions in another direction, but want
> > > clarity on few things:
> > >
> > > 1. Does maintainers means they are only reviewing/ merging patches?
> > > 2. Should maintainers be responsible for answering ML / IRC questions
> > > (well, they should focus more on documentation IMO).
> > > 3. Whose responsibility is it to keep the gluster.org webpage? I
> > > personally feel the responsibility should be well defined.
>
> Theses point seems to have been overlooked (as no one answered), yet I
> think they do matter if we want to expand the community besides coders.
>
> And since one of the goal is to "Welcome more contibutors(sic) at a
> project impacting level", I think we should be also speaking of
> contributions besides code (ie, website, for example, documentation for
> another).
>
> While on it, I would like to see some points about:
>
> - ensure that someone is responsible for having the design discussion in
> the open
> - ensure that each feature get proper testing when committed, and the
> maintainers is responsible for making sure this happen
> - ensure that each feature get documented when committed.
>
> If we think of contribution as a pipeline (kinda like the sales funnel),
> making sure there is documentation also mean people can use the
> software, thus increasing the community, and so helping to recruit
> people in a contributor pipeline.
>
> Proper testing means that it make refactoring easier, thus easing
> contributions (ie, people can submit patches and see nothing break, even
> for new features), thus also making people likely more at ease to submit
> patches later.
>
> And making sure the design discussion occurs in the open is also more
> welcoming to contributors, since they can see how we discuss, and learn
> from it.
>
> And while on it, is there a similar document being prepared about
> Community Lead and Project Lead (especially for transition, etc) ?
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Not able to compile glusterfs

2017-03-26 Thread Pranith Kumar Karampuri
hi,
When I compile I get the following errors:

EC dynamic support   : x64 sse avx
Use memory pools : yes
Nanosecond m/atimes  : yes

file `cli1-xdr.tmp' already exists and may be overwritten
file `changelog-xdr.tmp' already exists and may be overwritten
file `glusterfs3-xdr.tmp' already exists and may be overwritten
file `mount3udp.tmp' already exists and may be overwritten
file `glusterd1-xdr.tmp' already exists and may be overwritten
cp: cannot stat 'cli1-xdr.c': No such file or directory
make[1]: *** [cli1-xdr.c] Error 1
make[1]: *** Waiting for unfinished jobs
file `rpc-common-xdr.tmp' already exists and may be overwritten
file `portmap-xdr.tmp' already exists and may be overwritten
file `nlm4-xdr.tmp' already exists and may be overwritten
file `nsm-xdr.tmp' already exists and may be overwritten
cp: file `glusterfs-fops.tmp' already exists and may be overwritten
cannot stat 'mount3udp.c': No such file or directory
make[1]: *** [mount3udp.c] Error 1
cp: cannot stat 'changelog-xdr.c': No such file or directory
make[1]: *** [changelog-xdr.c] Error 1
cp: cp: cannot stat 'glusterfs3-xdr.c'cannot stat 'glusterd1-xdr.c': No
such file or directory
: No such file or directory
cp: cannot stat 'rpc-common-xdr.c': No such file or directory
make[1]: *** [glusterfs3-xdr.c] Error 1
make[1]: *** [rpc-common-xdr.c] Error 1
make[1]: *** [glusterd1-xdr.c] Error 1
cp: cannot stat 'portmap-xdr.c': No such file or directory
make[1]: *** [portmap-xdr.c] Error 1
file `acl3-xdr.tmp' already exists and may be overwritten
cp: cannot stat 'glusterfs-fops.c': No such file or directory
cp: cannot stat 'nlm4-xdr.c': No such file or directory
make[1]: *** [glusterfs-fops.c] Error 1
make[1]: *** [nlm4-xdr.c] Error 1
cp: cannot stat 'acl3-xdr.c': No such file or directory
make[1]: *** [acl3-xdr.c] Error 1
mv: cannot stat 'nsm-xdr.tmp': No such file or directory
cp: cannot stat 'nsm-xdr.c': No such file or directory
make[1]: *** [nsm-xdr.c] Error 1
make: *** [install-recursive] Error 1

Wondering if anyone else is facing the same problem. If there is a way to
fix this, please let me know.
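
In case it helps whoever looks at this, the workaround I am trying on my tree
is below. It assumes the leftover *.tmp files from an earlier interrupted or
parallel rpcgen run are what make the generation rules bail out, so treat it
as a guess rather than a root-cause fix:

    # drop the half-generated xdr files and rebuild without parallel jobs
    rm -f rpc/xdr/src/*.tmp
    make -j1

    # bigger hammer, only if there are no local changes under rpc/xdr:
    git clean -dfx rpc/xdr && ./autogen.sh && ./configure && make -j1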

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel
