Re: [Gluster-devel] [Gluster-users] gfid generation

2016-11-15 Thread Pranith Kumar Karampuri
On Wed, Nov 16, 2016 at 3:31 AM, Ankireddypalle Reddy 
wrote:

> Kaushal/Pranith,
>   Thanks for clarifying this. As I
> understand it, there are two IDs. Please correct me if there is a mistake in my
> assumptions:
>   1) The HASH generated by DHT: this will
> generate the same id for a given file every time.
>   2) The GFID, which is a version 4 UUID. As
> per the links below, this is supposed to contain a time stamp field,
> so it will not generate the same id for a given file every time.
>https://en.wikipedia.org/wiki/Universally_unique_identifier
>https://tools.ietf.org/html/rfc4122


That is correct. There is no involvement of the parent gfid in either of these
:-).
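
For anyone who wants to see this programmatically, here is a minimal, purely
illustrative C sketch of the two points above. It assumes libuuid
(uuid_generate(), which produces a random UUID when a good entropy source is
available) and Linux's getxattr() for reading the trusted.gfid xattr from a
brick-side path; the helper names and the brick path in main() are invented
for this example and are not GlusterFS internals.

/* Illustrative sketch only -- not GlusterFS internals.
 * Shows that a gfid is a randomly generated UUID (no parent gfid, no
 * file name involved) and how to read an existing gfid back from the
 * trusted.gfid extended attribute on a brick. */
#include <stdio.h>
#include <sys/types.h>
#include <uuid/uuid.h>      /* libuuid: build with -luuid */
#include <sys/xattr.h>      /* getxattr() on Linux */

/* Assign a fresh gfid: a random UUID, independent of name or parent. */
static void assign_new_gfid(uuid_t gfid)
{
        uuid_generate(gfid);
}

/* Read an existing gfid from a brick path (needs enough privileges,
 * just like the getfattr command quoted elsewhere in this thread). */
static int read_gfid(const char *brick_path, uuid_t gfid)
{
        ssize_t n = getxattr(brick_path, "trusted.gfid",
                             gfid, sizeof(uuid_t));
        return (n == (ssize_t) sizeof(uuid_t)) ? 0 : -1;
}

int main(void)
{
        uuid_t gfid;
        char str[37];

        assign_new_gfid(gfid);
        uuid_unparse(gfid, str);
        printf("freshly generated gfid: %s\n", str);

        if (read_gfid("/bricks/brick1/dir/file", gfid) == 0) {
                uuid_unparse(gfid, str);
                printf("gfid from trusted.gfid: %s\n", str);
        }
        return 0;
}

The value read this way corresponds to the trusted.gfid shown by the getfattr
command quoted below in this thread, just rendered in dashed UUID form instead
of raw hex.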


>
>
> Thanks and Regards,
> ram
> -Original Message-
> From: Kaushal M [mailto:kshlms...@gmail.com]
> Sent: Tuesday, November 15, 2016 1:21 PM
> To: Ankireddypalle Reddy
> Cc: Pranith Kumar Karampuri; gluster-us...@gluster.org; Gluster Devel
> Subject: Re: [Gluster-users] gfid generation
>
> On Tue, Nov 15, 2016 at 11:33 PM, Ankireddypalle Reddy <
> are...@commvault.com> wrote:
> > Pranith,
> >
> >  Thanks for getting back on this. I am trying to see
> > how a gfid can be generated programmatically. Given a file name, how do
> > we generate a gfid for it? I was reading some of the email threads about
> > it where it was mentioned that the gfid is generated based upon the
> > parent directory's gfid and the file name. Given the same parent gfid
> > and file name, do we always end up with the same gfid?
>
> You're probably confusing the hash generated for the elastic hash
> algorithm in DHT with the UUID. That is a combination of
>
> I always thought that the GFID was a UUID, which was randomly generated.
> (The random UUID might be modified slightly to allow some leeway with
> directory listing, IIRC).
>
> Adding gluster-devel to get more eyes on this.
>
> >
> >
> >
> > Thanks and Regards,
> >
> > ram
> >
> >
> >
> > From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> > Sent: Tuesday, November 15, 2016 12:58 PM
> > To: Ankireddypalle Reddy
> > Cc: gluster-us...@gluster.org
> > Subject: Re: [Gluster-users] gfid generation
> >
> >
> >
> > Sorry, I didn't understand the question. Are you asking, given a file on
> > gluster, how to get the gfid of the file?
> >
> > #getfattr -d -m. -e hex /path/to/file shows it
> >
> >
> >
> > On Fri, Nov 11, 2016 at 9:47 PM, Ankireddypalle Reddy
> > 
> > wrote:
> >
> > Hi,
> >
> > Is the mapping from file name to gfid an idempotent operation?
> > If so, please point me to the function that does this.
> >
> >
> >
> > Thanks and Regards,
> >
> > Ram
> >
> >
> > ___
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
> >
> >
> >
> > --
> >
> > Pranith
> >
> >
> > ___
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] GlusterD 1.0 updates

2016-11-15 Thread Atin Mukherjee
Last week's status:

1. The tier-as-a-service patch is going through multiple review iterations.
2. We have started reviewing the patch from Saravana on placing pidfiles
in the correct location. The first set of comments has been published.
3. Xin from 126.com has been helping us figure out a case where we may
end up with a zero-byte info file. I have been, and will be, spending some
time to see if this issue can be tackled, but the test that causes it is not
something we will hit in production very often.
4. Last week we touched upon an issue in the friend state machine
(friend-sm): we figured out that, for a particular state and an incoming
event, the wrong function pointer gets executed, which prevents the
friend-sm from moving forward correctly (see the illustrative sketch below).
We are yet to figure out how to solve this issue. In parallel, Samikshan has
started working on a visual diagram to explain how the friend-sm works.
Once it's ready we will publish it upstream.
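
To make that failure mode concrete, below is a rough, purely illustrative
sketch of a table-driven state machine of the kind described in point 4: a
(state, event) pair selects a handler through a function pointer, so a single
wrong table entry silently routes the event to the wrong handler and the peer
never reaches the expected state. The types and handlers are made up for this
example; this is not glusterd's actual friend-sm code.

/* Illustrative sketch only -- not glusterd's friend-sm implementation. */
#include <stdio.h>

typedef enum { ST_DEFAULT, ST_REQ_SENT, ST_BEFRIENDED, ST_MAX } peer_state_t;
typedef enum { EV_PROBE, EV_ACCEPT, EV_MAX } peer_event_t;

typedef peer_state_t (*sm_handler_t)(void *peer_ctx);

static peer_state_t handle_probe(void *ctx)  { (void) ctx; return ST_REQ_SENT; }
static peer_state_t handle_accept(void *ctx) { (void) ctx; return ST_BEFRIENDED; }
static peer_state_t handle_noop(void *ctx)   { (void) ctx; return ST_DEFAULT; }

/* Dispatch table: row = current state, column = incoming event. */
static sm_handler_t sm_table[ST_MAX][EV_MAX] = {
        [ST_DEFAULT]    = { [EV_PROBE] = handle_probe, [EV_ACCEPT] = handle_noop },
        /* Bug of the kind described above: EV_ACCEPT in ST_REQ_SENT should be
         * handle_accept, but the table wrongly points at handle_noop. */
        [ST_REQ_SENT]   = { [EV_PROBE] = handle_noop,  [EV_ACCEPT] = handle_noop },
        [ST_BEFRIENDED] = { [EV_PROBE] = handle_noop,  [EV_ACCEPT] = handle_noop },
};

int main(void)
{
        peer_state_t st = ST_DEFAULT;

        st = sm_table[st][EV_PROBE](NULL);   /* moves to ST_REQ_SENT as expected */
        st = sm_table[st][EV_ACCEPT](NULL);  /* wrong handler: state goes backwards */
        printf("final state: %d (expected %d)\n", st, ST_BEFRIENDED);
        return 0;
}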

This week:

1. We have some new BZs filed against GlusterD in the 3.8 version. We will
start taking a look at them.
2. Focus on finishing the review for tier as a service and getting the patch
into mainline.
3. Address the other incoming reviews.

On Wed, Nov 9, 2016 at 12:00 PM, Atin Mukherjee  wrote:

> Not much of an update from the GlusterD 1.0 side, as both Samikshan and I
> were on holiday last week. We managed to review a few patches and get them
> merged in mainline and other release branches where applicable.
>
> This week's plan is to focus on the patches which need review attention,
> mainly tiering as a service and putting the pid files in /var/run/gluster
> (patch from Saravana). We are also looking into one of the issues where the
> peer state doesn't reflect the correct status; it seems there is an issue in
> the friend state machine with that workflow.
>
> ~ Atin (atinm)
>



-- 

~ Atin (atinm)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Upstream smoke test failures

2016-11-15 Thread Vijay Bellur
On Tue, Nov 15, 2016 at 8:40 AM, Nithya Balachandran
 wrote:
>
>
> On 15 November 2016 at 18:55, Vijay Bellur  wrote:
>>
>> On Mon, Nov 14, 2016 at 10:34 PM, Nithya Balachandran
>>  wrote:
>> >
>> >
>> > On 14 November 2016 at 21:38, Vijay Bellur  wrote:
>> >>
>> >> I would prefer that we disable dbench only if we have an owner for
>> >> fixing the problem and re-enabling it as part of smoke tests. Running
>> >> dbench seamlessly on gluster has worked for a long while and if it is
>> >> failing today, we need to address this regression asap.
>> >>
>> >> Does anybody have more context or clues on why dbench is failing now?
>> >>
>> > While I agree that it needs to be looked at asap, leaving it in until we
>> > get
>> > an owner seems rather pointless as all it does is hold up various
>> > patches
>> > and waste machine time. Re-triggering it multiple times so that it
>> > eventually passes does not add anything to the regression test processes
>> > or
>> > validate the patch as we know there is a problem.
>> >
>> > I would vote for removing it and assigning someone to look at it
>> > immediately.
>> >
>>
>> From the debugging done so far can we identify an owner to whom this
>> can be assigned? I looked around for related discussions and could
>> figure out that we are looking to get statedumps. Do we have more
>> information/context beyond this?
>>
> I have updated the BZ (https://bugzilla.redhat.com/show_bug.cgi?id=1379228)
> with info from the last failure - looks like hangs in write-behind and
> read-ahead.
>


I spent some time on this today and it does look like write-behind is
absorbing READs without performing any WIND/UNWIND actions. I have
attached a statedump from a slave that had the dbench problem (thanks,
Nigel!) to the above bug.

Snip from statedump:

[global.callpool.stack.2]
stack=0x7fd970002cdc
uid=0
gid=0
pid=31884
unique=37870
lk-owner=
op=READ
type=1
cnt=2

[global.callpool.stack.2.frame.1]
frame=0x7fd9700036ac
ref_count=0
translator=patchy-read-ahead
complete=0
parent=patchy-readdir-ahead
wind_from=ra_page_fault
wind_to=FIRST_CHILD (fault_frame->this)->fops->readv
unwind_to=ra_fault_cbk

[global.callpool.stack.2.frame.2]
frame=0x7fd97000346c
ref_count=1
translator=patchy-readdir-ahead
complete=0


Note that the frame which was wound from ra_page_fault() to
write-behind is not yet complete, and write-behind has not progressed
the call. There are several call stacks with a similar signature in
the statedump.

In write-behind's readv implementation, we stub READ fops and enqueue
them in the relevant inode context. Once enqueued, the stub resumes when
the appropriate set of conditions occurs in write-behind. This is not
happening now, and I am not certain whether:

- READ fops are languishing in a queue and not being resumed, or
- READ fops are prematurely dropped from a queue without winding or unwinding
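
For readers who have not looked at this translator, here is a much-simplified
sketch of the stub-and-enqueue pattern described above; the types and
functions are invented for illustration and are not write-behind's actual
code. The first failure mode above corresponds to resume_reads() never
running for a queue; the second corresponds to a stub being unlinked from
the queue without its resume callback ever firing.

/* Illustrative sketch only -- not the actual write-behind implementation.
 * An incoming READ is parked as a "stub" on a per-inode queue and resumed
 * later; a stub that is dropped without being resumed hangs the caller. */
#include <stddef.h>
#include <sys/types.h>
#include <sys/queue.h>          /* BSD-style TAILQ macros, available on glibc */

struct read_stub {
        void (*resume)(struct read_stub *stub);  /* continues the parked READ */
        size_t size;
        off_t  offset;
        TAILQ_ENTRY(read_stub) link;
};

TAILQ_HEAD(stub_queue, read_stub);

/* readv path: park the request in the inode's queue instead of winding it. */
static void enqueue_read(struct stub_queue *q, struct read_stub *stub)
{
        TAILQ_INSERT_TAIL(q, stub, link);
}

/* Later, once the conditions are right (e.g. pending writes have drained),
 * resume everything that was queued. If this never runs, or a stub was
 * removed earlier without being resumed, the READ is neither wound nor
 * unwound -- exactly the signature seen in the statedump above. */
static void resume_reads(struct stub_queue *q)
{
        struct read_stub *stub;

        while ((stub = TAILQ_FIRST(q)) != NULL) {
                TAILQ_REMOVE(q, stub, link);
                stub->resume(stub);
        }
}

int main(void)
{
        struct stub_queue q = TAILQ_HEAD_INITIALIZER(q);

        /* In the real translator, enqueue_read() would run per incoming READ
         * and resume_reads() when the inode's write queue drains. */
        resume_reads(&q);
        (void) enqueue_read;    /* silence unused warnings in this toy example */
        return 0;
}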

When I gdb'd into the client process and examined the inode contexts
for write-behind, I found all queues to be empty. This seems to
indicate that the latter reason is more plausible but I have not yet
found a code path to account for this possibility.

One approach to proceed further is to add more logs in write-behind to
get a better understanding of the problem. I will try that out
sometime later this week. We are also considering disabling
write-behind for smoke tests in the interim after a trial run (with
write-behind disabled) later in the day.

Thanks,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] gfid generation

2016-11-15 Thread Kaushal M
On Tue, Nov 15, 2016 at 11:33 PM, Ankireddypalle Reddy
 wrote:
> Pranith,
>
>  Thanks for getting back on this. I am trying to see how a
> gfid can be generated programmatically. Given a file name, how do we
> generate a gfid for it? I was reading some of the email threads about it
> where it was mentioned that the gfid is generated based upon the parent
> directory's gfid and the file name. Given the same parent gfid and file
> name, do we always end up with the same gfid?

You're probably confusing the hash generated for the elastic hash
algorithm in DHT with the UUID. That is a combination of

I always thought that the GFID was a UUID, which was randomly
generated. (The random UUID might be modified slightly to allow
some leeway with directory listing, IIRC).

Adding gluster-devel to get more eyes on this.
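
To illustrate the distinction, here is a small, purely illustrative C sketch
of a deterministic name hash of the kind DHT's elastic hashing relies on:
hashing the same file name always yields the same 32-bit value, which is then
matched against per-subvolume ranges in the parent directory's layout. The
hash below (FNV-1a) and the two-range layout are simplified stand-ins, not
DHT's actual implementation.

/* Illustrative sketch only -- not DHT's actual hash or layout code.
 * The point: the name hash is deterministic (same name -> same value and
 * hence the same subvolume), unlike the gfid, which is a random UUID. */
#include <stdint.h>
#include <stdio.h>

/* Toy FNV-1a hash standing in for DHT's real name hash. */
static uint32_t name_hash(const char *name)
{
        uint32_t h = 2166136261u;

        for (; *name; name++) {
                h ^= (uint8_t) *name;
                h *= 16777619u;
        }
        return h;
}

/* A directory's layout splits the 32-bit hash space across subvolumes;
 * a file lands on whichever range its name hash falls into. */
struct layout_range { uint32_t start, stop; int subvol; };

static int pick_subvol(const struct layout_range *layout, int n, uint32_t hash)
{
        for (int i = 0; i < n; i++)
                if (hash >= layout[i].start && hash <= layout[i].stop)
                        return layout[i].subvol;
        return -1;
}

int main(void)
{
        struct layout_range layout[2] = {
                { 0x00000000u, 0x7fffffffu, 0 },
                { 0x80000000u, 0xffffffffu, 1 },
        };
        uint32_t h = name_hash("file.txt");

        printf("hash(\"file.txt\") = 0x%08x -> subvol %d\n",
               h, pick_subvol(layout, 2, h));
        return 0;
}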

>
>
>
> Thanks and Regards,
>
> ram
>
>
>
> From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> Sent: Tuesday, November 15, 2016 12:58 PM
> To: Ankireddypalle Reddy
> Cc: gluster-us...@gluster.org
> Subject: Re: [Gluster-users] gfid generation
>
>
>
> Sorry, I didn't understand the question. Are you asking, given a file on
> gluster, how to get the gfid of the file?
>
> #getfattr -d -m. -e hex /path/to/file shows it
>
>
>
> On Fri, Nov 11, 2016 at 9:47 PM, Ankireddypalle Reddy 
> wrote:
>
> Hi,
>
> Is the mapping from file name to gfid an idempotent operation? If
> so, please point me to the function that does this.
>
>
>
> Thanks and Regards,
>
> Ram
>
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
> --
>
> Pranith
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Upstream smoke test failures

2016-11-15 Thread Nithya Balachandran
On 15 November 2016 at 18:55, Vijay Bellur  wrote:

> On Mon, Nov 14, 2016 at 10:34 PM, Nithya Balachandran
>  wrote:
> >
> >
> > On 14 November 2016 at 21:38, Vijay Bellur  wrote:
> >>
> >> I would prefer that we disable dbench only if we have an owner for
> >> fixing the problem and re-enabling it as part of smoke tests. Running
> >> dbench seamlessly on gluster has worked for a long while and if it is
> >> failing today, we need to address this regression asap.
> >>
> >> Does anybody have more context or clues on why dbench is failing now?
> >>
> > While I agree that it needs to be looked at asap, leaving it in until we
> get
> > an owner seems rather pointless as all it does is hold up various patches
> > and waste machine time. Re-triggering it multiple times so that it
> > eventually passes does not add anything to the regression test processes
> or
> > validate the patch as we know there is a problem.
> >
> > I would vote for removing it and assigning someone to look at it
> > immediately.
> >
>
> From the debugging done so far can we identify an owner to whom this
> can be assigned? I looked around for related discussions and could
> figure out that we are looking to get statedumps. Do we have more
> information/context beyond this?
>
I have updated the BZ (https://bugzilla.redhat.com/show_bug.cgi?id=1379228)
with info from the last failure - looks like hangs in write-behind and
read-ahead.

Regards,
Nithya

> Regards,
> Vijay
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Upstream smoke test failures

2016-11-15 Thread Vijay Bellur
On Mon, Nov 14, 2016 at 10:34 PM, Nithya Balachandran
 wrote:
>
>
> On 14 November 2016 at 21:38, Vijay Bellur  wrote:
>>
>> I would prefer that we disable dbench only if we have an owner for
>> fixing the problem and re-enabling it as part of smoke tests. Running
>> dbench seamlessly on gluster has worked for a long while and if it is
>> failing today, we need to address this regression asap.
>>
>> Does anybody have more context or clues on why dbench is failing now?
>>
> While I agree that it needs to be looked at asap, leaving it in until we get
> an owner seems rather pointless as all it does is hold up various patches
> and waste machine time. Re-triggering it multiple times so that it
> eventually passes does not add anything to the regression test processes or
> validate the patch as we know there is a problem.
>
> I would vote for removing it and assigning someone to look at it
> immediately.
>

From the debugging done so far can we identify an owner to whom this
can be assigned? I looked around for related discussions and could
figure out that we are looking to get statedumps. Do we have more
information/context beyond this?

Regards,
Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 5 minutes)

2016-11-15 Thread Soumya Koduri

Hi all,

Apologies for the late notice.

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
  (https://webchat.freenode.net/?channels=gluster-meeting)
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,
Soumya
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel