Re: [Gluster-devel] [release-3.6] compile error: 'GF_REPLACE_OP_START' undeclared

2015-08-17 Thread Avra Sengupta
Still hitting this on FreeBSD and NetBSD smoke runs on the release-3.6 
branch. Are we merging patches on the release-3.6 branch for now even with 
these failures? I have two such patches that need to be merged.


Regards,
Avra

On 07/06/2015 02:32 PM, Niels de Vos wrote:

On Mon, Jul 06, 2015 at 02:19:07PM +0530, Raghavendra Bhat wrote:

On 07/06/2015 01:39 PM, Niels de Vos wrote:

On Mon, Jul 06, 2015 at 12:09:28PM +0530, Raghavendra Bhat wrote:

On 07/06/2015 09:52 AM, Kaushal M wrote:

I checked on NetBSD-7.0_BETA and FreeBSD-10.1. I couldn't reproduce
this. I'll try on NetBSD-6 next.

~kaushal

I think it has to be included before 3.6.4 is made G.A. I can wait till the
fix for this issue is merged before making 3.6.4. Does it sound ok? Or
should I go ahead with 3.6.4 and make a quick 3.6.5 with this fix?

I only care about getting http://review.gluster.org/11335 merged :-)

This is a patch I promised to take into release-3.5. It would be nicer
to have this change included in the release-3.6 branch before I merge
the 3.5 backport. At the moment, 3.5.5 is waiting on this patch. But I
do not think you really need to hold 3.6.4 back for that one. It should
be fine if it lands in 3.6.5. (The compile error looks more like a 3.6.4
blocker.)

Niels

Niels,

The patch you mentioned has received the acks and has also passed the Linux
regression tests. But it seems to have failed the NetBSD regression tests.

Yes, at least the smoke tests on NetBSD and FreeBSD fail with the
compile error mentioned in the subject of this email :)

Thanks,
Niels



Regards,
Raghavendra Bhat


Regards,
Raghavendra Bhat


On Mon, Jul 6, 2015 at 8:38 AM, Kaushal M  wrote:

Krutika hit this last week, and let us (GlusterD maintainers) know of
it. I volunteered to look into this, but couldn't find time. I'll do
it now.

~kaushal

On Sun, Jul 5, 2015 at 10:43 PM, Atin Mukherjee
 wrote:

I remember Krutika reporting it a few days back. So it seems like it's not
fixed yet. If there is no taker I will send a patch tomorrow.

-Atin
Sent from one plus one

On Jul 5, 2015 9:58 PM, "Niels de Vos"  wrote:

Hi,

it seems that the current release-3.6 branch does not compile on
FreeBSD and NetBSD (not sure why it compiles on CentOS-6). These errors
are thrown:

   --- glusterd_la-glusterd-op-sm.lo ---
 CC   glusterd_la-glusterd-op-sm.lo

/home/jenkins/root/workspace/netbsd6-smoke/xlators/mgmt/glusterd/src/glusterd-op-sm.c:
In function 'glusterd_op_start_rb_timer':

/home/jenkins/root/workspace/netbsd6-smoke/xlators/mgmt/glusterd/src/glusterd-op-sm.c:3685:19:
error: 'GF_REPLACE_OP_START' undeclared (first use in this function)

/home/jenkins/root/workspace/netbsd6-smoke/xlators/mgmt/glusterd/src/glusterd-op-sm.c:3685:19:
note: each undeclared identifier is reported only once for each function it
appears in

/home/jenkins/root/workspace/netbsd6-smoke/xlators/mgmt/glusterd/src/glusterd-op-sm.c:
In function 'glusterd_bricks_select_status_volume':

/home/jenkins/root/workspace/netbsd6-smoke/xlators/mgmt/glusterd/src/glusterd-op-sm.c:5800:34:
warning: unused variable 'snapd'
   *** [glusterd_la-glusterd-op-sm.lo] Error code 1
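
(For anyone trying to reason about why the same branch compiles on CentOS-6 but
not on the BSDs: one common way such a platform-dependent "undeclared" error
shows up is when an enumerator is only defined behind a feature guard in a
shared header. The snippet below is a generic, made-up illustration of that
pattern; HAVE_REPLACE_BRICK is an invented macro, the type name is invented,
and this is not the actual release-3.6 code.)

    /* Illustration only: a guarded enumerator produces
     * "'GF_REPLACE_OP_START' undeclared (first use in this function)"
     * on builds where the guard macro is not defined. */
    #include <stdio.h>

    /* #define HAVE_REPLACE_BRICK 1    defined on some builds only */

    typedef enum {
            GF_REPLACE_OP_NONE = 0,
    #ifdef HAVE_REPLACE_BRICK
            GF_REPLACE_OP_START,    /* only exists when the guard is set */
    #endif
    } example_replace_op_t;

    int
    main (void)
    {
    #ifdef HAVE_REPLACE_BRICK
            printf ("start op = %d\n", GF_REPLACE_OP_START);
    #else
            /* Referencing GF_REPLACE_OP_START here would reproduce the
             * compile error seen in the NetBSD/FreeBSD smoke runs. */
            printf ("replace-brick op not compiled in\n");
    #endif
            return 0;
    }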


Could someone send a (pointer to the) backport that addresses this?

Thanks,
Niels


On Sun, Jul 05, 2015 at 08:59:32AM -0700, Gluster Build System (Code
Review) wrote:

Gluster Build System has posted comments on this change.

Change subject: nfs: make it possible to disable nfs.mount-rmtab
..


Patch Set 1: -Verified

Build Failed

http://build.gluster.org/job/compare-bug-version-and-git-branch/9953/ :
SUCCESS

http://build.gluster.org/job/freebsd-smoke/8551/ : FAILURE

http://build.gluster.org/job/smoke/19820/ : SUCCESS

http://build.gluster.org/job/netbsd6-smoke/7808/ : FAILURE

--
To view, visit http://review.gluster.org/11335
To unsubscribe, visit http://review.gluster.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I40c4d8d754932f86fb2b1b2588843390464c773d
Gerrit-PatchSet: 1
Gerrit-Project: glusterfs
Gerrit-Branch: release-3.6
Gerrit-Owner: Niels de Vos 
Gerrit-Reviewer: Gluster Build System 
Gerrit-Reviewer: Kaleb KEITHLEY 
Gerrit-Reviewer: NetBSD Build System 
Gerrit-Reviewer: Niels de Vos 
Gerrit-Reviewer: Raghavendra Bhat 
Gerrit-Reviewer: jiffin tony Thottan 
Gerrit-HasComments: No

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Implementing Flat Hierarchy for trashed files

2015-08-17 Thread Prashanth Pai

- Original Message -
> From: "Anoop C S" 
> To: gluster-devel@gluster.org
> Sent: Monday, August 17, 2015 6:20:50 PM
> Subject: [Gluster-devel] Implementing Flat Hierarchy for trashed files
> 
> Hi all,
> 
> As we move forward, in order to fix the limitations with current trash
> translator we are planning to replace the existing criteria for trashed
> files inside trash directory with a general flat hierarchy as described
> in the following sections. Please have your thoughts on following
> design considerations.
> 
> Current implementation
> ==
> * Trash translator resides on glusterfs server stack just above posix.
> * Trash directory (.trashcan) is created during volume start and is
>   visible under root of the volume.
> * Each trashed file is moved (renamed) to trash directory with an
>   appended time stamp in the file name.

Do these files get moved during rebalance due to the name change, or do you
choose the file name according to the DHT regex magic to avoid that?

> * Exact directory hierarchy (w.r.t the root of volume) is maintained
>   inside trash directory whenever a file is deleted/truncated from a
>   directory
> 
> Outstanding issues
> ==
> * Since renaming occurs at the server side, client-side is unaware of
>   trash doing rename or create operations.
> * As a result files/directories may not be visible from mount point.
> * Files/Directories created from the trash translator will not have
>   a gfid associated with them until a lookup is performed.
> 
> Proposed Flat hierarchy
> ===
> * Instead of creating the whole directory under trash, we will rename
>   the file and place it directly under trash directory (of course with
>   appended time stamp).

The .trashcan directory might not scale with millions of such files placed 
under one directory. We had faced the same problem in the gluster-swift project 
for the object expiration feature and had decided to distribute our files across 
multiple directories in a deterministic way. And, personally, I'd prefer 
storing an absolute timestamp, for example as returned by the `date +%s` command.
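
(For illustration, a minimal sketch of the deterministic-distribution idea; the
subdirectory count, hash function and naming below are assumptions for the
example, not what gluster-swift or the trash xlator actually use.)

    #include <stdio.h>
    #include <stdint.h>

    #define TRASH_SUBDIR_COUNT 256          /* illustrative bucket count */

    /* FNV-1a; any stable hash works, it only needs to be deterministic. */
    static uint32_t
    hash_name (const char *s)
    {
            uint32_t h = 2166136261u;
            for (; *s; s++) {
                    h ^= (unsigned char)*s;
                    h *= 16777619u;
            }
            return h;
    }

    int
    main (void)
    {
            const char *name = "file_1439816530";   /* name + timestamp */

            /* The same name always lands in the same bucket, so restores
             * never have to scan one huge flat .trashcan directory. */
            printf (".trashcan/%02x/%s\n",
                    (unsigned) (hash_name (name) % TRASH_SUBDIR_COUNT), name);
            return 0;
    }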

> * Directory hierarchy can be stored via either of the following two
>   approaches:
>   (a) File name will contain the whole path with time stamp
>   appended

If this approach is taken, you might have trouble with choosing a "magic 
letter" representing slashes.

>   (b) Store whole hierarchy as an xattr
> 
> Other enhancements
> ==
> * Create the trash directory only when trash xlator is enabled.

This is a needed enhancement. Upgrade to 3.7.* from older glusterfs versions 
caused undesired results in gluster-swift integration because .trashcan was 
visible by default on all glusterfs volumes.

> * Operations such as unlink, rename etc. will be prevented on the trash
>   directory only when trash xlator is enabled.
> * A new trash helper translator on the client side (loaded only when trash
>   is enabled) to resolve split-brain issues with truncation of files.
> * Restore files from trash with the help of an explicit setfattr call.

You have to be very careful with races involved in re-creating the path when 
clients are accessing the volume, and also with overwriting if the path already 
exists. It's way easier (from the implementer's perspective) if this is a manual 
process.

> 
> Thanks & Regards,
> -Anoop C S
> -Jiffin Tony Thottan
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD regression failures

2015-08-17 Thread Kotresh Hiremath Ravishankar
Yes, it makes sense to move both geo-rep tests to bad tests for now till
the issue gets fixed on NetBSD. I am looking into the NetBSD failures.

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Avra Sengupta" 
> To: "Atin Mukherjee" , "Gluster Devel" 
> , "gluster-infra"
> , "Raghavendra Talur" 
> Sent: Tuesday, August 18, 2015 11:02:08 AM
> Subject: Re: [Gluster-devel] NetBSD regression failures
> 
> On 08/18/2015 09:25 AM, Atin Mukherjee wrote:
> >
> > On 08/17/2015 02:20 PM, Avra Sengupta wrote:
> >> That patch itself might not pass all regressions as it might fail at the
> >> geo-rep test. I have sent a patch (http://review.gluster.org/#/c/11934/)
> >> with both the tests being moved to bad test. Talur could you please
> >> abandon 11933.
> > It seems like we need to move tests/geo-rep/georep-basic-dr-tarssh.t as
> > well to the bad test?
> Yes looks like it. I will resend the patch with this change.
> >> Regards,
> >> Avra
> >>
> >> On 08/17/2015 02:12 PM, Atin Mukherjee wrote:
> >>> tests/basic/mount-nfs-auth.t has already been added to the bad tests by
> >>> http://review.gluster.org/11933
> >>>
> >>> ~Atin
> >>>
> >>> On 08/17/2015 02:09 PM, Avra Sengupta wrote:
>  Will send a patch moving ./tests/basic/mount-nfs-auth.t and
>  ./tests/geo-rep/georep-basic-dr-rsync.t to bad test.
> 
>  Regards,
>  Avra
> 
>  On 08/17/2015 12:45 PM, Avra Sengupta wrote:
> > On 08/17/2015 12:29 PM, Vijaikumar M wrote:
> >> On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:
> >>> Hi,
> >>>
> >>> The NetBSD regression tests are continuously failing with errors in
> >>> the following tests:
> >>>
> >>> ./tests/basic/mount-nfs-auth.t
> >>> ./tests/basic/quota-anon-fd-nfs.t
> >> quota-anon-fd-nfs.t is a known issue with NFS client caching, so it is
> >> marked as a bad test; the final result will be marked as success even if
> >> this test fails.
> > Yes it seems "./tests/geo-rep/georep-basic-dr-rsync.t" also fails in
> > the runs where quota-anon-fd-nfs.t fails, and that marks the final
> > tests as failure.
> >
> >>
> >>> Is there any recent change that is triggering this behaviour? Also,
> >>> currently one machine is running NetBSD tests. Can someone with
> >>> access to Jenkins bring up a few more slaves to run NetBSD
> >>> regressions in parallel?
> >>>
> >>> Regards,
> >>> Avra
> >>> ___
> >>> Gluster-devel mailing list
> >>> Gluster-devel@gluster.org
> >>> http://www.gluster.org/mailman/listinfo/gluster-devel
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
>  ___
>  Gluster-devel mailing list
>  Gluster-devel@gluster.org
>  http://www.gluster.org/mailman/listinfo/gluster-devel
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Gluster 3.6.4 tune2fs and inode size errors

2015-08-17 Thread Atin Mukherjee


On 08/18/2015 10:41 AM, Atin Mukherjee wrote:
> Just a quick update. I was wrong saying the issue is reproducible in
> 3.7. What I could see is this issue is fixed in 3.7. Now I need to find
> out the patch which fixed it and backport it to 3.6. Would it be
> possible for you to upgrade the setup to 3.7 if you want a quick solution?
I've backported the fix at [1]. This bug [2] was addressed by [3] in
mainline, and that's why the issue is not seen in 3.7.

[1] http://review.gluster.org/11941
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1130462
[3] http://review.gluster.org/8492

> 
> ~Atin
> 
> On 08/17/2015 07:23 PM, Atin Mukherjee wrote:
>> I've not got a chance to look at it yet; I will do so now. Thanks for
>> the reminder!
>>
>> -Atin
>> Sent from one plus one
>> On Aug 17, 2015 7:19 PM, "Davy Croonen"  wrote:
>>
>>> Hi Atin
>>>
>>> Any news on this one?
>>>
>>> KR
>>> Davy
>>>
>>> On 12 Aug 2015, at 16:41, Atin Mukherjee 
>>> wrote:
>>>
>>> Davy,
>>>
>>> I will check this with Kaleb and get back to you.
>>>
>>> -Atin
>>> Sent from one plus one
>>> On Aug 12, 2015 7:22 PM, "Davy Croonen"  wrote:
>>>
 Atin

 No problem to raise a bug for this, but isn’t this already addressed here:

 Bug 670  - 
 continuous
 log entries failed to get inode size
 https://bugzilla.redhat.com/show_bug.cgi?id=670#c2

 KR
 Davy

 On 12 Aug 2015, at 14:56, Atin Mukherjee  wrote:

 Well, this looks like a bug even in 3.7 as well. I've posted a fix [1]
 to address it.

 [1] http://review.gluster.org/11898

 Could you please raise a bug for this?

 ~Atin

 On 08/12/2015 01:32 PM, Davy Croonen wrote:

 Hi Atin

 Thanks for your answer. The op-version was indeed an old one, 30501 to be
 precise. I’ve updated the op-version to the one you suggested with the
 command: gluster volume set all cluster.op-version 30603. From testing it
 seems this issue is solved for the moment.

 Considering the errors in the etc-glusterfs-glusterd.vol.log file I’m
 looking forward to hear from you.

 Thanks in advance.

 KR
 Davy

 On 11 Aug 2015, at 19:28, Atin Mukherjee >>> mailto:atin.mukherje...@gmail.com >> wrote:



 -Atin
 Sent from one plus one
 On Aug 11, 2015 7:54 PM, "Davy Croonen" >>> mailto:davy.croo...@smartbit.be >> wrote:


 Hi all

 Our etc-glusterfs-glusterd.vol.log is filling up with entries as shown:

 [2015-08-11 11:40:33.807940] E
 [glusterd-utils.c:7410:glusterd_add_inode_size_to_dict] 0-management:
 tune2fs exited with non-zero exit status
 [2015-08-11 11:40:33.807962] E
 [glusterd-utils.c:7436:glusterd_add_inode_size_to_dict] 0-management:
 failed to get inode size

 I will check this and get back to you.


 From the mailinglist archive I could understand this was a problem in
 gluster version 3.4 and should be fixed. We started out from version 3.5
 and upgraded in the meantime to version 3.6.4 but the error in the errorlog
 still exists.

 We are also unable to execute the command

 $gluster volume status all inode

 as a result gluster hangs up with the message: “Another transaction is in
 progress. Please try again after sometime.” while executing the command

 $gluster volume status

 Have you bumped up the op-version to 30603? Otherwise glusterd will still
 have cluster locking and then multiple commands can't run simultaneously.


 Are the error messages in the logs related to gluster hanging
 while executing the mentioned commands? And any ideas about how to fix 
 this?

 The error messages are not because of this.


 Kind regards
 Davy
 ___
 Gluster-users mailing list
 gluster-us...@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users




 ___
 Gluster-users mailing list
 gluster-us...@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users


 --
 ~Atin



>>>
>>
> 

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD regression failures

2015-08-17 Thread Avra Sengupta

On 08/18/2015 09:25 AM, Atin Mukherjee wrote:


On 08/17/2015 02:20 PM, Avra Sengupta wrote:

That patch itself might not pass all regressions as it might fail at the
geo-rep test. I have sent a patch (http://review.gluster.org/#/c/11934/)
with both the tests being moved to bad test. Talur could you please
abandon 11933.

It seems like we need to move tests/geo-rep/georep-basic-dr-tarssh.t as
well to the bad test?

Yes looks like it. I will resend the patch with this change.

Regards,
Avra

On 08/17/2015 02:12 PM, Atin Mukherjee wrote:

tests/basic/mount-nfs-auth.t has already been added to the bad tests by
http://review.gluster.org/11933

~Atin

On 08/17/2015 02:09 PM, Avra Sengupta wrote:

Will send a patch moving ./tests/basic/mount-nfs-auth.t and
./tests/geo-rep/georep-basic-dr-rsync.t to bad test.

Regards,
Avra

On 08/17/2015 12:45 PM, Avra Sengupta wrote:

On 08/17/2015 12:29 PM, Vijaikumar M wrote:

On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:

Hi,

The NetBSD regression tests are continuously failing with errors in
the following tests:

./tests/basic/mount-nfs-auth.t
./tests/basic/quota-anon-fd-nfs.t

quota-anon-fd-nfs.t is a known issue with NFS client caching, so it is
marked as a bad test; the final result will be marked as success even if this
test fails.

Yes it seems "./tests/geo-rep/georep-basic-dr-rsync.t" also fails in
the runs where quota-anon-fd-nfs.t fails, and that marks the final
tests as failure.




Is there any recent change that is triggering this behaviour? Also,
currently one machine is running NetBSD tests. Can someone with
access to Jenkins bring up a few more slaves to run NetBSD
regressions in parallel?

Regards,
Avra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] testcase ./tests/geo-rep/georep-basic-dr-rsync.t failure

2015-08-17 Thread Kotresh Hiremath Ravishankar
Thanks Emmanuel, I could not look into it as I was out of station.
I will debug it today.

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Emmanuel Dreyfus" 
> To: "Kotresh Hiremath Ravishankar" 
> Cc: "Gluster Devel" 
> Sent: Friday, August 14, 2015 12:45:49 AM
> Subject: Re: [Gluster-devel] testcase ./tests/geo-rep/georep-basic-dr-rsync.t 
> failure
> 
> Kotresh Hiremath Ravishankar  wrote:
> 
> > We need a netbsd machine off the ring to debug. Could you please provide
> > one?
> 
> nbslave75 is offline for you.
> 
> 
> --
> Emmanuel Dreyfus
> http://hcpnet.free.fr/pubz
> m...@netbsd.org
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Gluster 3.6.4 tune2fs and inode size errors

2015-08-17 Thread Atin Mukherjee
Just a quick update. I was wrong saying the issue is reproducible in
3.7. What I could see is this issue is fixed in 3.7. Now I need to find
out the patch which fixed it and backport it to 3.6. Would it be
possible for you to upgrade the setup to 3.7 if you want a quick solution?

~Atin

On 08/17/2015 07:23 PM, Atin Mukherjee wrote:
> I've not got a chance to look at it yet; I will do so now. Thanks for
> the reminder!
> 
> -Atin
> Sent from one plus one
> On Aug 17, 2015 7:19 PM, "Davy Croonen"  wrote:
> 
>> Hi Atin
>>
>> Any news on this one?
>>
>> KR
>> Davy
>>
>> On 12 Aug 2015, at 16:41, Atin Mukherjee 
>> wrote:
>>
>> Davy,
>>
>> I will check this with Kaleb and get back to you.
>>
>> -Atin
>> Sent from one plus one
>> On Aug 12, 2015 7:22 PM, "Davy Croonen"  wrote:
>>
>>> Atin
>>>
>>> No problem to raise a bug for this, but isn’t this already addressed here:
>>>
>>> Bug 670  - 
>>> continuous
>>> log entries failed to get inode size
>>> https://bugzilla.redhat.com/show_bug.cgi?id=670#c2
>>>
>>> KR
>>> Davy
>>>
>>> On 12 Aug 2015, at 14:56, Atin Mukherjee  wrote:
>>>
>>> Well, this looks like a bug even in 3.7 as well. I've posted a fix [1]
>>> to address it.
>>>
>>> [1] http://review.gluster.org/11898
>>>
>>> Could you please raise a bug for this?
>>>
>>> ~Atin
>>>
>>> On 08/12/2015 01:32 PM, Davy Croonen wrote:
>>>
>>> Hi Atin
>>>
>>> Thanks for your answer. The op-version was indeed an old one, 30501 to be
>>> precise. I’ve updated the op-version to the one you suggested with the
>>> command: gluster volume set all cluster.op-version 30603. From testing it
>>> seems this issue is solved for the moment.
>>>
>>> Considering the errors in the etc-glusterfs-glusterd.vol.log file I’m
>>> looking forward to hear from you.
>>>
>>> Thanks in advance.
>>>
>>> KR
>>> Davy
>>>
>>> On 11 Aug 2015, at 19:28, Atin Mukherjee >> mailto:atin.mukherje...@gmail.com >> wrote:
>>>
>>>
>>>
>>> -Atin
>>> Sent from one plus one
>>> On Aug 11, 2015 7:54 PM, "Davy Croonen" >> mailto:davy.croo...@smartbit.be >> wrote:
>>>
>>>
>>> Hi all
>>>
>>> Our etc-glusterfs-glusterd.vol.log is filling up with entries as shown:
>>>
>>> [2015-08-11 11:40:33.807940] E
>>> [glusterd-utils.c:7410:glusterd_add_inode_size_to_dict] 0-management:
>>> tune2fs exited with non-zero exit status
>>> [2015-08-11 11:40:33.807962] E
>>> [glusterd-utils.c:7436:glusterd_add_inode_size_to_dict] 0-management:
>>> failed to get inode size
>>>
>>> I will check this and get back to you.
>>>
>>>
>>> From the mailinglist archive I could understand this was a problem in
>>> gluster version 3.4 and should be fixed. We started out from version 3.5
>>> and upgraded in the meantime to version 3.6.4 but the error in the errorlog
>>> still exists.
>>>
>>> We are also unable to execute the command
>>>
>>> $gluster volume status all inode
>>>
>>> as a result gluster hangs up with the message: “Another transaction is in
>>> progress. Please try again after sometime.” while executing the command
>>>
>>> $gluster volume status
>>>
>>> Have you bumped up the op-version to 30603? Otherwise glusterd will still
>>> have cluster locking and then multiple commands can't run simultaneously.
>>>
>>>
>>> Are the error messages in the logs related to gluster hanging
>>> while executing the mentioned commands? And any ideas about how to fix this?
>>>
>>> The error messages are not because of this.
>>>
>>>
>>> Kind regards
>>> Davy
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org>> >
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>>
>>>
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>> --
>>> ~Atin
>>>
>>>
>>>
>>
> 

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Implementing Flat Hierarchy for trashed files

2015-08-17 Thread Niels de Vos
On Mon, Aug 17, 2015 at 06:20:50PM +0530, Anoop C S wrote:
> Hi all,
> 
> As we move forward, in order to fix the limitations with current trash
> translator we are planning to replace the existing criteria for trashed
> files inside trash directory with a general flat hierarchy as described
> in the following sections. Please have your thoughts on following
> design considerations.
> 
> Current implementation
> ==
> * Trash translator resides on glusterfs server stack just above posix.
> * Trash directory (.trashcan) is created during volume start and is
>   visible under root of the volume.
> * Each trashed file is moved (renamed) to trash directory with an
>   appended time stamp in the file name. 
> * Exact directory hierarchy (w.r.t the root of volume) is maintained
>   inside trash directory whenever a file is deleted/truncated from a
>   directory
> 
> Outstanding issues
> ==
> * Since renaming occurs at the server side, client-side is unaware of
>   trash doing rename or create operations.
> * As a result files/directories may not be visible from mount point.

This might be something upcall could help with. If the trash xlator is
placed above upcall, any clients interested in the .trashcan directory
(or subdirs) could get an in/revalidation request.

> * Files/Directories created from the trash translator will not have
>   a gfid associated with them until a lookup is performed.

When a client receives an invalidation of the parent directory (from
upcall), a LOOKUP will follow on the next request.

> Proposed Flat hierarchy
> ===

I'm missing a bit of info here: what limitations need to be addressed?

> * Instead of creating the whole directory under trash, we will rename
>   the file and place it directly under trash directory (of course with
>   appended time stamp).
> * Directory hierarchy can be stored via either of the following two
>   approaches:
>   (a) File name will contain the whole path with time stamp
>   appended
>   (b) Store whole hierarchy as an xattr

If this is needed, definitely go with (b). Filenames have a limit, and
the full path (directories + filename + timestamp) could surely hit
that.
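
To make (b) concrete, a minimal sketch of recording the original path as an
xattr and reading it back for a restore. The key name
("user.trash.original-path") and the paths are invented for the example; the
real trash xlator would presumably use a trusted.* key and go through the
posix xlator rather than direct syscalls.

    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    int
    main (void)
    {
            const char *trashed = "/bricks/b1/.trashcan/file_1439816530";
            const char *orig    = "/dir1/dir2/file";
            char        buf[4096];
            ssize_t     len;

            /* Remember where the file lived before it was trashed. */
            if (setxattr (trashed, "user.trash.original-path",
                          orig, strlen (orig), 0) == -1) {
                    perror ("setxattr");
                    return 1;
            }

            /* A restore helper reads it back to rebuild the path. */
            len = getxattr (trashed, "user.trash.original-path",
                            buf, sizeof (buf) - 1);
            if (len == -1) {
                    perror ("getxattr");
                    return 1;
            }
            buf[len] = '\0';
            printf ("restore %s -> %s\n", trashed, buf);
            return 0;
    }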

> Other enhancements
> ==

Have these been filed as bugs/RFEs? If not, please do so and include a
good description of the work that is needed. Maybe others in the Gluster
community are interested in providing patches, and details on what to do
is very helpful.

Thanks,
Niels

> * Create the trash directory only when trash xlator is enabled.
> * Operations such as unlink, rename etc. will be prevented on the trash
>   directory only when trash xlator is enabled.
> * A new trash helper translator on the client side (loaded only when trash
>   is enabled) to resolve split-brain issues with truncation of files.
> * Restore files from trash with the help of an explicit setfattr call.
> 
> Thanks & Regards,
> -Anoop C S
> -Jiffin Tony Thottan
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD regression failures

2015-08-17 Thread Atin Mukherjee


On 08/17/2015 02:20 PM, Avra Sengupta wrote:
> That patch itself might not pass all regressions as it might fail at the
> geo-rep test. I have sent a patch (http://review.gluster.org/#/c/11934/)
> with both the tests being moved to bad test. Talur could you please
> abandon 11933.
It seems like we need to move tests/geo-rep/georep-basic-dr-tarssh.t as
well to the bad test?
> 
> Regards,
> Avra
> 
> On 08/17/2015 02:12 PM, Atin Mukherjee wrote:
>> tests/basic/mount-nfs-auth.t has already been added to the bad tests by
>> http://review.gluster.org/11933
>>
>> ~Atin
>>
>> On 08/17/2015 02:09 PM, Avra Sengupta wrote:
>>> Will send a patch moving ./tests/basic/mount-nfs-auth.t and
>>> ./tests/geo-rep/georep-basic-dr-rsync.t to bad test.
>>>
>>> Regards,
>>> Avra
>>>
>>> On 08/17/2015 12:45 PM, Avra Sengupta wrote:
 On 08/17/2015 12:29 PM, Vijaikumar M wrote:
>
> On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:
>> Hi,
>>
>> The NetBSD regression tests are continuously failing with errors in
>> the following tests:
>>
>> ./tests/basic/mount-nfs-auth.t
>> ./tests/basic/quota-anon-fd-nfs.t
> quota-anon-fd-nfs.t is a known issue with NFS client caching, so it is
> marked as a bad test; the final result will be marked as success even if this
> test fails.
 Yes it seems "./tests/geo-rep/georep-basic-dr-rsync.t" also fails in
 the runs where quota-anon-fd-nfs.t fails, and that marks the final
 tests as failure.

>
>
>> Is there any recent change that is triggering this behaviour? Also,
>> currently one machine is running NetBSD tests. Can someone with
>> access to Jenkins bring up a few more slaves to run NetBSD
>> regressions in parallel?
>>
>> Regards,
>> Avra
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
> 

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Implementing Flat Hierarchy for trashed files

2015-08-17 Thread Soumya Koduri

This approach sounds good. Few inputs/queries inline.


On 08/17/2015 06:20 PM, Anoop C S wrote:

Hi all,

As we move forward, in order to fix the limitations with current trash
translator we are planning to replace the existing criteria for trashed
files inside trash directory with a general flat hierarchy as described
in the following sections. Please have your thoughts on following
design considerations.

Current implementation
==
* Trash translator resides on glusterfs server stack just above posix.
* Trash directory (.trashcan) is created during volume start and is
   visible under root of the volume.
* Each trashed file is moved (renamed) to trash directory with an
   appended time stamp in the file name.
* Exact directory hierarchy (w.r.t the root of volume) is maintained
   inside trash directory whenever a file is deleted/truncated from a
   directory

Outstanding issues
==
* Since renaming occurs at the server side, client-side is unaware of
   trash doing rename or create operations.
* As a result files/directories may not be visible from mount point.
* Files/Directories created from the trash translator will not have
   a gfid associated with them until a lookup is performed.

Proposed Flat hierarchy
===
* Instead of creating the whole directory under trash, we will rename
   the file and place it directly under trash directory (of course with
   appended time stamp).
* Directory hierarchy can be stored via either of the following two
   approaches:
(a) File name will contain the whole path with time stamp
appended
(b) Store whole hierarchy as an xattr

IMO, (b) sounds better compared to (a), as storing the entire hierarchical 
path as the file name may end up reaching the max file-name length limit 
sooner. Also, users may wish to see the files under their original names for 
easy identification in the .trashcan directory.



Other enhancements
==
* Create the trash directory only when trash xlator is enabled.


Can the trash xlator be disabled once it's enabled? If yes, will the 
files still be visible from the mount point?



* Operations such as unlink, rename etc. will be prevented on the trash
   directory only when trash xlator is enabled.
* A new trash helper translator on the client side (loaded only when trash
   is enabled) to resolve split-brain issues with truncation of files.
Doesn't AFR/EC already take care of this? Could you please provide more 
details on this issue?



Thanks,
Soumya


* Restore files from trash with the help of an explicit setfattr call.

Thanks & Regards,
-Anoop C S
-Jiffin Tony Thottan
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] semi-sync replication

2015-08-17 Thread Jeff Darcy
> Do we have plans to support "semi-synchronous" type replication in the
> future? By semi-sync I mean writing to one leg the replica, securing the
> write on a faster stable storage (capacitor backed SSD or NVRAM) and then
> acknowledge the client. The write on other replica leg may happen at later
> point in time.

This is possible, but introduces a lot of consistency/ordering concerns.
It has always been part of the plan for NSR, with leader election to
help with ordering and issue/completion counts to give the user some
control over consistency.  The same basic idea can be implemented in
AFR, but without the mechanisms mentioned above the consistency might
not be noticeably better than we already have with geo-replication.  I
also wonder whether deferring unlocks or pending-count updates further
might introduce performance glitches, or exacerbate the split-brain
problems we've been battling all these years.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Serialization of fops acting on same dentry on server

2015-08-17 Thread Shyam

On 08/17/2015 01:19 AM, Raghavendra Gowdappa wrote:



- Original Message -

From: "Raghavendra Gowdappa" 
To: "Gluster Devel" 
Cc: "Sakshi Bansal" 
Sent: Monday, 17 August, 2015 10:39:38 AM
Subject: [Gluster-devel] Serialization of fops acting on same dentry on server

All,

Pranith and me were discussing about implementation of compound operations
like "create + lock", "mkdir + lock", "open + lock" etc. These operations
are useful in situations like:

1. To prevent locking on all subvols during directory creation as part of
self heal in dht. Currently we are following approach of locking _all_
subvols by both rmdir and lookup-heal [1].


Correction. It should've been, "to prevent locking on all subvols during rmdir". The 
lookup self-heal should lock on all subvols (with compound "mkdir + lookup" if directory 
is not present on a subvol). With this rmdir/rename can lock on just any one subvol and this will 
prevent any parallel lookup-heal from preventing directory creation.


2. To lock a file in advance so that there is less performance hit during
transactions in afr.


I see multiple thoughts here and am splitting what I think into these parts,

- Compound FOPs:
The whole idea and need for compound FOPs I think is very useful. 
Initially compounding the FOP+Lock is a good idea as this is mostly 
internal to Gluster and does not change any interface to any of the 
consumers. Also, as Pranith is involved we can iron out AFR/EC related 
possibilities in such compounding as well.


In compounding I am only concerned about cases where part of the 
compound operation succeeds on one replica but fails on the other. As an 
example, if the mkdir succeeds on one and so locking subsequently 
succeeds, but mkdir fails on the other (because a competing client's 
compound FOP raced this one), how can we handle such situations? Do we 
need server-side AFR/EC with leader election, like in NSR, to handle this? 
(Maybe the example is not a good/firm one for this case, but 
nevertheless can compounding create such problems?)


Another question would be, we need to compound it as Lock+FOP rather 
than FOP+Lock in some cases, right?


- Advance locking to reduce serial RPC requests that degrade performance:
This is again a good thing to do, part of such a concept is in eager 
locking already (as I see it). What I would like to see in this regard 
would be eager leasing (piggyback leases) of a file (and loosely 
directory, as I need to think through that case more) so that we can 
optimize the common case when a file is being operated by a single 
client and degrade to fine grained locking when multiple clients compete.


Assuming eager leasing, AFR transactions need only client side in memory 
locking (to prevent 2 threads/consumers of the client racing on the same 
file/dir) and also, with leasing and lease breaking we can get better at 
cooperating with other clients than what eager locking does now.


In short, I would like the advance locking or leasing to be part 
of the client-side caching stack, so that multiple xlators on the client 
can leverage the same, and I would prefer the leasing model over the 
locking model as it allows easier breaking than locks.




While thinking about implementing such compound operations, it occurred to me
that one of the problems would be how do we handle a racing mkdir/create and
a (named lookup - simply referred as lookup from now on - followed by lock).
This is because,
1. creation of directory/file on backend
2. linking of the inode with the gfid corresponding to that file/directory

are not atomic. It is not guaranteed that the inode passed down during the
mkdir/create call will be the one that survives in the inode table. Since
the posix-locks xlator maintains all the lock-state in the inode, it would be a
problem if a different inode is linked in inode table than the one passed
during mkdir/create. One way to solve this problem is to serialize fops
(like mkdir/create, lookup, rename, rmdir, unlink) that are happening on a
particular dentry. This serialization would also solve other bugs like:

1. issues solved by [2][3] and possibly many such issues.
2. Stale dentries left out in bricks' inode table because of a racing lookup
and dentry modification ops (like rmdir, unlink, rename etc).

The initial idea I have now is to maintain the fops in progress on a dentry in
the parent inode (maybe in the resolver code in protocol/server). Based on this
we can serialize the operations. Since we need to serialize _only_ operations on
a dentry (we don't serialize nameless lookups), it is guaranteed that we always
have a parent inode. Any comments/discussion on this would be appreciated.
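
(Purely as an illustration of the "track in-progress entry fops in the parent
inode" idea: the structures and functions below are invented for the sketch and
are not the protocol/server resolver code. A real implementation would key on
the (parent gfid, basename) pair and integrate with the resolver.)

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    struct dentry_op {                      /* one in-flight entry fop */
            char              name[256];    /* basename being operated on */
            struct dentry_op *next;
    };

    struct parent_inode {                   /* stand-in for an inode ctx */
            pthread_mutex_t   lock;
            pthread_cond_t    cond;
            struct dentry_op *in_progress;
    };

    /* Wait until no other entry fop is active on 'name', then mark it busy. */
    void
    dentry_op_enter (struct parent_inode *parent, const char *name)
    {
            pthread_mutex_lock (&parent->lock);
            for (;;) {
                    struct dentry_op *op = parent->in_progress;
                    while (op && strcmp (op->name, name) != 0)
                            op = op->next;
                    if (!op)
                            break;          /* nothing running on this name */
                    pthread_cond_wait (&parent->cond, &parent->lock);
            }
            struct dentry_op *op = calloc (1, sizeof (*op));
            strncpy (op->name, name, sizeof (op->name) - 1);
            op->next = parent->in_progress;
            parent->in_progress = op;
            pthread_mutex_unlock (&parent->lock);
    }

    /* Mark the dentry free again and wake up any waiting fops. */
    void
    dentry_op_exit (struct parent_inode *parent, const char *name)
    {
            pthread_mutex_lock (&parent->lock);
            struct dentry_op **pp = &parent->in_progress;
            while (*pp && strcmp ((*pp)->name, name) != 0)
                    pp = &(*pp)->next;
            if (*pp) {
                    struct dentry_op *op = *pp;
                    *pp = op->next;
                    free (op);
            }
            pthread_cond_broadcast (&parent->cond);
            pthread_mutex_unlock (&parent->lock);
    }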


My initial comments on this would be to refer to FS locking notes in 
Linux kernel, which has rules for locking during dentry operations and such.


The next part is as follows,
- Why create the name (dentry) before creating the inode (gfid instance) 
for a file or a directory?
  - A client cannot do a nameless lookup or wil

Re: [Gluster-devel] [Gluster-users] Gluster 3.6.4 tune2fs and inode size errors

2015-08-17 Thread Atin Mukherjee
I've not got a chance to look at it yet; I will do so now. Thanks for
the reminder!

-Atin
Sent from one plus one
On Aug 17, 2015 7:19 PM, "Davy Croonen"  wrote:

> Hi Atin
>
> Any news on this one?
>
> KR
> Davy
>
> On 12 Aug 2015, at 16:41, Atin Mukherjee 
> wrote:
>
> Davy,
>
> I will check this with Kaleb and get back to you.
>
> -Atin
> Sent from one plus one
> On Aug 12, 2015 7:22 PM, "Davy Croonen"  wrote:
>
>> Atin
>>
>> No problem to raise a bug for this, but isn’t this already addressed here:
>>
>> Bug 670  - 
>> continuous
>> log entries failed to get inode size
>> https://bugzilla.redhat.com/show_bug.cgi?id=670#c2
>>
>> KR
>> Davy
>>
>> On 12 Aug 2015, at 14:56, Atin Mukherjee  wrote:
>>
>> Well, this looks like a bug even in 3.7 as well. I've posted a fix [1]
>> to address it.
>>
>> [1] http://review.gluster.org/11898
>>
>> Could you please raise a bug for this?
>>
>> ~Atin
>>
>> On 08/12/2015 01:32 PM, Davy Croonen wrote:
>>
>> Hi Atin
>>
>> Thanks for your answer. The op-version was indeed an old one, 30501 to be
>> precise. I’ve updated the op-version to the one you suggested with the
>> command: gluster volume set all cluster.op-version 30603. From testing it
>> seems this issue is solved for the moment.
>>
>> Considering the errors in the etc-glusterfs-glusterd.vol.log file I’m
>> looking forward to hear from you.
>>
>> Thanks in advance.
>>
>> KR
>> Davy
>>
>> On 11 Aug 2015, at 19:28, Atin Mukherjee > mailto:atin.mukherje...@gmail.com >> wrote:
>>
>>
>>
>> -Atin
>> Sent from one plus one
>> On Aug 11, 2015 7:54 PM, "Davy Croonen" > mailto:davy.croo...@smartbit.be >> wrote:
>>
>>
>> Hi all
>>
>> Our etc-glusterfs-glusterd.vol.log is filling up with entries as shown:
>>
>> [2015-08-11 11:40:33.807940] E
>> [glusterd-utils.c:7410:glusterd_add_inode_size_to_dict] 0-management:
>> tune2fs exited with non-zero exit status
>> [2015-08-11 11:40:33.807962] E
>> [glusterd-utils.c:7436:glusterd_add_inode_size_to_dict] 0-management:
>> failed to get inode size
>>
>> I will check this and get back to you.
>>
>>
>> From the mailinglist archive I could understand this was a problem in
>> gluster version 3.4 and should be fixed. We started out from version 3.5
>> and upgraded in the meantime to version 3.6.4 but the error in the errorlog
>> still exists.
>>
>> We are also unable to execute the command
>>
>> $gluster volume status all inode
>>
>> as a result gluster hangs up with the message: “Another transaction is in
>> progress. Please try again after sometime.” while executing the command
>>
>> $gluster volume status
>>
>> Have you bumped up the op-version to 30603? Otherwise glusterd will still
>> have cluster locking and then multiple commands can't run simultaneously.
>>
>>
>> Are the error messages in the logs related to gluster hanging
>> while executing the mentioned commands? And any ideas about how to fix this?
>>
>> The error messages are not because of this.
>>
>>
>> Kind regards
>> Davy
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org> >
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>> --
>> ~Atin
>>
>>
>>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Backup bricks?

2015-08-17 Thread Avra Sengupta
Yes, for long-term backups LVM snapshots might not be the solution. There 
is no side effect in backing up the bricks. The data would indeed be 
readable. And if you back up "/var/lib/glusterd/vols/" on each 
volume as well, you can effectively recreate the volume from the bricks 
at a later stage.


Regards,
Avra

On 08/17/2015 04:11 PM, Thibault Godouet wrote:


Thanks Avra.

I am aware of the Gluster snapshots, but didn't think about using them 
on the offsite replica.  That could indeed cover the short term 
backups, and be used to do longer term backups from.


What I perhaps wasn't clear about is that we'll need longer term 
backups to tape (e.g. to keep multiple years).  I don't think keeping 
LVM snapshots for that long would really work.
So basically my initial question was on whether backing up the brick 
instead of the volume, which would be significantly faster, would be a 
good idea: would the data be readable ok? Any known side effect that 
could cause issues?


On 17 Aug 2015 10:12 am, "Avra Sengupta" > wrote:


Hi Thibault,

Instead of backing up, individual bricks or the entire thin
logical volume, you can take a gluster volume snapshot, and you
will have a point in time backup of the volume.

gluster snapshots internally use thin lv snapshots, so you can't
move the backup out of the system. Also having the backup on the
same filesystem as the data doesn't protect you from device
failure scenarios. However in events of any other data loss or
corruption, you can restore the volume from the snapshot, mount
the read-only snapshot and copy the necessary files.

In order to take backup at a remore site, using geo-rep is
recommended.

Regards,
Avra

On 08/17/2015 02:27 PM, Thibault Godouet wrote:


I have a 1 x 2 = 2 volume geo-replicated to a single-brick volume
in another physical site, where I would like to set up a backup.

I could setup a backup on a mount of the volume, but a quick test
shows it is slow in this setup (presumably because there are
loads of small files on there).

Instead I thought I could maybe backup the filesystem where the
brick is (or rather a snapshot of the thin logical volume).  My
understanding is that all the files will be in there, and
readable, so it seems to me it would be fine to back things up
from there.

Is that right, or am I missing something here?

Note the .glusterfs directory would also be backed up too,
although I'm not sure whether that would be of any use in a backup.

More generally is there a recommended way to setup backups?

Thanks,
Thibault.



___
Gluster-users mailing list
gluster-us...@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Implementing Flat Hierarchy for trashed files

2015-08-17 Thread Anoop C S
Hi all,

As we move forward, in order to fix the limitations with current trash
translator we are planning to replace the existing criteria for trashed
files inside trash directory with a general flat hierarchy as described
in the following sections. Please have your thoughts on following
design considerations.

Current implementation
==
* Trash translator resides on glusterfs server stack just above posix.
* Trash directory (.trashcan) is created during volume start and is
  visible under root of the volume.
* Each trashed file is moved (renamed) to trash directory with an
  appended time stamp in the file name. 
* Exact directory hierarchy (w.r.t the root of volume) is maintained
  inside trash directory whenever a file is deleted/truncated from a
  directory

Outstanding issues
==
* Since renaming occurs at the server side, client-side is unaware of
  trash doing rename or create operations.
* As a result files/directories may not be visible from mount point.
* Files/Directories created from the trash translator will not have
  a gfid associated with them until a lookup is performed.

Proposed Flat hierarchy
===
* Instead of creating the whole directory under trash, we will rename
  the file and place it directly under trash directory (of course with
  appended time stamp).
* Directory hierarchy can be stored via either of the following two
  approaches:
(a) File name will contain the whole path with time stamp
appended
(b) Store whole hierarchy as an xattr

Other enhancements
==
* Create the trash directory only when trash xlator is enabled.
* Operations such as unlink, rename etc. will be prevented on the trash
  directory only when trash xlator is enabled.
* A new trash helper translator on the client side (loaded only when trash
  is enabled) to resolve split-brain issues with truncation of files.
* Restore files from trash with the help of an explicit setfattr call.

Thanks & Regards,
-Anoop C S
-Jiffin Tony Thottan
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Plans for Gluster 3.8

2015-08-17 Thread Prasanna Kalever
Hi Atin :)

I shall take Bug 1245380
[RFE] Render all mounts of a volume defunct upon access revocation 
https://bugzilla.redhat.com/show_bug.cgi?id=1245380 

Thanks & Regards,
Prasanna Kumar K.


- Original Message -
From: "Atin Mukherjee" 
To: "Kaushal M" 
Cc: "Csaba Henk" , gluster-us...@gluster.org, "Gluster Devel" 

Sent: Thursday, August 13, 2015 8:58:20 PM
Subject: Re: [Gluster-users] [Gluster-devel] Plans for Gluster 3.8



Can we have some volunteers for these BZs? 

-Atin 
Sent from one plus one 
On Aug 12, 2015 12:34 PM, "Kaushal M" < kshlms...@gmail.com > wrote: 


Hi Csaba, 

These are the updates regarding the requirements, after our meeting 
last week. The specific updates on the requirements are inline. 

In general, we feel that the requirements for selective read-only mode 
and immediate disconnection of clients on access revocation are doable 
for GlusterFS-3.8. The only problem right now is that we do not have 
any volunteers for it. 

> 1. Bug 829042 - [FEAT] selective read-only mode 
> https://bugzilla.redhat.com/show_bug.cgi?id=829042 
> 
> absolutely necessary for not getting tarred & feathered in Tokyo ;) 
> either resurrect http://review.gluster.org/3526 
> and _find out integration with auth mechanism for special 
> mounts_, or come up with a completely different concept 
> 

With the availability of client_t, implementing this should become 
easier. The server xlator would store the incoming connection's common 
name or address in the client_t associated with the connection. The 
read-only xlator could then make use of this information to 
selectively allow read-only clients. The read-only xlator would need 
to implement a new option for selective read-only, which would be 
populated with lists of common-names and addresses of clients which 
would get read-only access. 
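
(A tiny sketch of that check; the function, option format and names are 
invented for illustration and are not the read-only xlator's actual code.) 

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Decide whether a connected client, identified by the common name or
     * address recorded at connect time, should only get read access. */
    static bool
    client_is_read_only (const char *client_id, const char *ro_list)
    {
            /* ro_list models a comma-separated option value, e.g.
             * "backup.example.com,reporting.example.com" (illustrative). */
            char  buf[1024];
            char *tok, *save = NULL;

            snprintf (buf, sizeof (buf), "%s", ro_list);
            for (tok = strtok_r (buf, ",", &save); tok;
                 tok = strtok_r (NULL, ",", &save)) {
                    if (strcmp (tok, client_id) == 0)
                            return true;
            }
            return false;
    }

    int
    main (void)
    {
            /* A write fop from a matching client would be failed with EROFS. */
            printf ("%d\n", client_is_read_only ("backup.example.com",
                    "backup.example.com,reporting.example.com"));
            return 0;
    }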

> 2. Bug 1245380 - [RFE] Render all mounts of a volume defunct upon access 
> revocation 
> https://bugzilla.redhat.com/show_bug.cgi?id=1245380 
> 
> necessary to let us enable a watershed scalability 
> enhancement 
> 

Currently, when auth.allow/reject and auth.ssl-allow options are 
changed, the server xlator does a reconfigure to reload its access 
list. It just does a reload, and doesn't affect any existing 
connections. To bring this feature in, the server xlator would need to 
iterate through its xprt_list and check every connection for 
authorization again on a reconfigure. Those connections which have 
lost authorization would be disconnected. 
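
(Again only a sketch with invented types; the real change would walk the server 
xlator's xprt_list and reuse its existing authentication checks.) 

    #include <stdbool.h>
    #include <stddef.h>

    struct connection {                     /* invented stand-in for a transport */
            const char        *client_id;   /* common name or address */
            struct connection *next;
            void             (*disconnect) (struct connection *conn);
    };

    /* On reconfigure, re-run authorization against the new access list and
     * drop every connection that is no longer allowed. */
    void
    revoke_unauthorized (struct connection *conns,
                         bool (*still_allowed) (const char *client_id))
    {
            struct connection *c = conns;

            while (c != NULL) {
                    /* Save the next pointer first: disconnecting may tear
                     * the current node down. */
                    struct connection *next = c->next;

                    if (!still_allowed (c->client_id))
                            c->disconnect (c);
                    c = next;
            }
    }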

> 3. Bug 1226776 – [RFE] volume capability query 
> https://bugzilla.redhat.com/show_bug.cgi?id=1226776 
> 
> eventually we'll be choking in spaghetti if we don't get 
> this feature. The ugly version checks we need to do against 
> GlusterFS as in 
> 
> https://review.openstack.org/gitweb?p=openstack/manila.git;a=commitdiff;h=29456c#patch3
>  
> 
> will proliferate and eat the guts of the code out of its 
> living body if this is not addressed. 
> 

This requires some more thought to figure out the correct solution. 
One possible way to get the capabilities of the cluster would be to 
look at the clusters running op-version. This can be obtained using 
`gluster volume get all cluster.op-version` (the volume get command is 
available in glusterfs-3.6 and above). But this doesn't provide much 
improvement over the existing checks being done in the driver. 
___ 
Gluster-devel mailing list 
Gluster-devel@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-devel 

___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Skipped files during rebalance

2015-08-17 Thread Christophe TREFOIS
Dear Rafi,

Thanks for submitting a patch.

@DHT, I have two additional questions / problems.

1. When doing a rebalance (with data), RAM consumption on the nodes goes 
dramatically high, e.g. out of 196 GB available per node, RAM usage would fill up 
to 195.6 GB. This seems quite excessive and strange to me.

2. As you can see, the rebalance (with data) failed as one endpoint became 
disconnected (even though it actually still is connected). I'm thinking this could 
be due to the high RAM usage?

Thank you for your help,

—
Christophe

Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine
6, avenue du Swing
L-4367 Belvaux
T: +352 46 66 44 6124
F: +352 46 66 44 6949
http://www.uni.lu/lcsb




This message is confidential and may contain privileged information.
It is intended for the named recipient only.
If you receive it in error please notify me and permanently delete the original 
message and any copies.




On 17 Aug 2015, at 11:27, Mohammed Rafi K C wrote:



On 08/17/2015 01:58 AM, Christophe TREFOIS wrote:
Dear all,

I have successfully added a new node to our setup, and finally managed to get a 
successful fix-layout run as well with no errors.

Now, as per the documentation, I started a gluster volume rebalance live start 
task and I see many skipped files.
The error log then contains entries as follows for each skipped file.

[2015-08-16 20:23:30.591161] E [MSGID: 109023] 
[dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file 
failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Mea
s_05(2013-10-11_17-12-02)/004010008.flex lookup failed
[2015-08-16 20:23:30.768391] E [MSGID: 109023] 
[dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file 
failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Mea
s_05(2013-10-11_17-12-02)/007005003.flex lookup failed
[2015-08-16 20:23:30.804811] E [MSGID: 109023] 
[dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file 
failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Mea
s_05(2013-10-11_17-12-02)/006005009.flex lookup failed
[2015-08-16 20:23:30.805201] E [MSGID: 109023] 
[dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file 
failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Mea
s_05(2013-10-11_17-12-02)/005006011.flex lookup failed
[2015-08-16 20:23:30.880037] E [MSGID: 109023] 
[dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file 
failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Mea
s_05(2013-10-11_17-12-02)/005009012.flex lookup failed
[2015-08-16 20:23:31.038236] E [MSGID: 109023] 
[dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file 
failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Mea
s_05(2013-10-11_17-12-02)/003008007.flex lookup failed
[2015-08-16 20:23:31.259762] E [MSGID: 109023] 
[dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file 
failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Mea
s_05(2013-10-11_17-12-02)/004008006.flex lookup failed
[2015-08-16 20:23:31.333764] E [MSGID: 109023] 
[dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file 
failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Mea
s_05(2013-10-11_17-12-02)/007008001.flex lookup failed
[2015-08-16 20:23:31.340190] E [MSGID: 109023] 
[dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file 
failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Mea
s_05(2013-10-11_17-12-02)/006007004.flex lookup failed

Update: one of the rebalance tasks now failed.

@Rafi, I got the same error as Friday except this time with data.

Packets carrying the ping request could be waiting in the queue for the 
whole time-out period because of heavy traffic in the network. I have sent 
a patch for this. You can track its status here: 
http://review.gluster.org/11935



[2015-08-16 20:24:34.533167] C 
[rpc-clnt-ping.c:161:rpc_clnt_ping_timer_expired] 0-live-client-0: server 
192.168.123.104:49164 has not responded in the last 42 seconds, disconnecting.
[2015-08-16 20:24:34.533614] E [rpc-clnt.c:362:saved_frames_unwind] (--> 
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7fa454de59e6] (--> 
/lib64/libgfrpc.so.0(saved_frames_unwin
d+0x1de)[0x7fa454bb09be] (--> 
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fa454bb0ace] (--> 
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x9c)[0x7fa454bb247c] (--> 
/lib64/li
bgfrpc.so.0(rpc_clnt_notify+0x48)[0x7fa454bb2c38] ) 0-live-client-0: forced 
unwinding frame type(GlusterFS 3.3) op(INODELK(29)) called at 2015-08-16 
20:

[Gluster-devel] GlusterFS firewalld control

2015-08-17 Thread Christopher Blum
Hey Gluster Developers,

I'm fairly new to GlusterFS, but noticed that it is missing the
ability to control firewalld, which is also addressed in [1].
Since I wanted to propose a solution for this problem, I briefly talked to
Niels de Vos and we identified 2 possible ways to fix this:

1) Use the dbus connection to control firewalld when we do bind() as a
server - it looks like there is only one place where we do that [2]
 --> Pretty much a catch-all solution, but it will require linking against
dbus and a compile-time check for OSs with firewalld

2) Use the glusterfs hooks to call a script when we create volumes, to open
up the (dynamic) ports of the involved bricks
 --> Easier to implement, but where do we get the port information
from? Additionally involves the creation of a static config for the
glusterd process.

Looking at [3], we need to open up additional (dynamic) ports for NFS? Is
that info correct?

Since I'm fairly new, I would welcome a discussion about which approach is best
in your opinion. Please also tell me if any of the assumptions above are
incorrect...

Best Regards,
Chris

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
[2]
https://forge.gluster.org/glusterfs-core/glusterfs/blobs/master/rpc/rpc-transport/socket/src/socket.c#line758
[3]
http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Installing_GlusterFS_on_Red_Hat_Package_Manager_(RPM)_Distributions
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Skipped files during rebalance

2015-08-17 Thread Mohammed Rafi K C


On 08/17/2015 01:58 AM, Christophe TREFOIS wrote:
>
> Dear all,
>
>  
>
> I have successfully added a new node to our setup, and finally managed
> to get a successful fix-layout run as well with no errors.
>
>  
>
> Now, as per the documentation, I started a gluster volume rebalance
> live start task and I see many skipped files. 
>
> The error log then contains entries as follows for each skipped file.
>
>  
>
> [2015-08-16 20:23:30.591161] E [MSGID: 109023] [dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_05(2013-10-11_17-12-02)/004010008.flex lookup failed
>
> [2015-08-16 20:23:30.768391] E [MSGID: 109023] [dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_05(2013-10-11_17-12-02)/007005003.flex lookup failed
>
> [2015-08-16 20:23:30.804811] E [MSGID: 109023] [dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_05(2013-10-11_17-12-02)/006005009.flex lookup failed
>
> [2015-08-16 20:23:30.805201] E [MSGID: 109023] [dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_05(2013-10-11_17-12-02)/005006011.flex lookup failed
>
> [2015-08-16 20:23:30.880037] E [MSGID: 109023] [dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_05(2013-10-11_17-12-02)/005009012.flex lookup failed
>
> [2015-08-16 20:23:31.038236] E [MSGID: 109023] [dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_05(2013-10-11_17-12-02)/003008007.flex lookup failed
>
> [2015-08-16 20:23:31.259762] E [MSGID: 109023] [dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_05(2013-10-11_17-12-02)/004008006.flex lookup failed
>
> [2015-08-16 20:23:31.333764] E [MSGID: 109023] [dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_05(2013-10-11_17-12-02)/007008001.flex lookup failed
>
> [2015-08-16 20:23:31.340190] E [MSGID: 109023] [dht-rebalance.c:1965:gf_defrag_get_entry] 0-live-dht: Migrate file failed:/hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_05(2013-10-11_17-12-02)/006007004.flex lookup failed
>
>  
>
> Update: one of the rebalance tasks now failed.
>
>  
>
> @Rafi, I got the same error as Friday except this time with data.
>

Packets carrying the ping request could be waiting in the queue for the
whole time-out period because of heavy traffic in the network. I have sent
a patch for this. You can track its status here:
http://review.gluster.org/11935
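
To illustrate the mechanism (a conceptual sketch only, written in Python; it
is not the actual rpc-clnt-ping code): the client arms a timer when it sends
a ping, and if no reply is seen within the ping timeout the connection is
torn down, which is the 42-second disconnect you see in the log below.

    import threading

    PING_TIMEOUT = 42   # matches the default network.ping-timeout in the log

    class PingTracker(object):
        """Conceptual model of the client-side ping timer (illustration only)."""

        def __init__(self, disconnect_cb):
            self.disconnect_cb = disconnect_cb
            self.timer = None

        def ping_sent(self):
            # Armed when the ping request is queued. If the request or its
            # reply sits in a congested queue for longer than PING_TIMEOUT,
            # the timer fires and the connection gets disconnected.
            self.timer = threading.Timer(PING_TIMEOUT, self.disconnect_cb)
            self.timer.start()

        def pong_received(self):
            if self.timer is not None:
                self.timer.cancel()

The timeout corresponds to the network.ping-timeout volume option (42
seconds by default); it can be raised with "gluster volume set <volname>
network.ping-timeout <seconds>" if the network is known to be congested,
but the patch above addresses the queuing issue itself.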


>  
>
> [2015-08-16 20:24:34.533167] C [rpc-clnt-ping.c:161:rpc_clnt_ping_timer_expired] 0-live-client-0: server 192.168.123.104:49164 has not responded in the last 42 seconds, disconnecting.
>
> [2015-08-16 20:24:34.533614] E [rpc-clnt.c:362:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7fa454de59e6] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fa454bb09be] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fa454bb0ace] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x9c)[0x7fa454bb247c] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7fa454bb2c38] ) 0-live-client-0: forced unwinding frame type(GlusterFS 3.3) op(INODELK(29)) called at 2015-08-16 20:23:51.305640 (xid=0x5dd4da)
>
> [2015-08-16 20:24:34.533672] E [MSGID: 114031] [client-rpc-fops.c:1621:client3_3_inodelk_cbk] 0-live-client-0: remote operation failed [Transport endpoint is not connected]
>
> [2015-08-16 20:24:34.534201] E [rpc-clnt.c:362:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7fa454de59e6] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fa454bb09be] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fa454bb0ace] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x9c)[0x7fa454bb247c] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7fa454bb2c38] ) 0-live-client-0: forced unwinding frame type(GlusterFS 3.3) op(READ(12)) called at 2015-08-16 20:23:51.303938 (xid=0x5dd4d7)
>
> [2015-08-16 20:24:34.534347] E [MSGID: 109023] [dht-rebalance.c:1124:dht_migrate_file] 0-live-dht: Migrate file failed: /hcs/hcs/OperaArchiveCol/SK 20131011_Oligo_Rot_lowConc_P1/Meas_12(2013-10-12_00-12-55)/007008007.flex: failed to migrate data
>
> [2015-08-16 20:24:34.534413] E [rpc-clnt.c:362:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7fa454de59e6] (--> /l

Re: [Gluster-devel] NetBSD regression failures

2015-08-17 Thread Avra Sengupta
That patch itself might not pass all regressions, as it might fail at the 
geo-rep test. I have sent a patch (http://review.gluster.org/#/c/11934/) 
that moves both tests to the bad tests list. Talur, could you please 
abandon 11933?


Regards,
Avra

On 08/17/2015 02:12 PM, Atin Mukherjee wrote:

tests/basic/mount-nfs-auth.t has already been added to the bad tests list by
http://review.gluster.org/11933

~Atin

On 08/17/2015 02:09 PM, Avra Sengupta wrote:

Will send a patch moving ./tests/basic/mount-nfs-auth.t and
./tests/geo-rep/georep-basic-dr-rsync.t to the bad tests list.

Regards,
Avra

On 08/17/2015 12:45 PM, Avra Sengupta wrote:

On 08/17/2015 12:29 PM, Vijaikumar M wrote:


On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:

Hi,

The NetBSD regression tests are continuously failing with errors in
the following tests:

./tests/basic/mount-nfs-auth.t
./tests/basic/quota-anon-fd-nfs.t

quota-anon-fd-nfs.t has known issues with NFS client caching, so it is
marked as a bad test; the final result will be marked as success even if
this test fails.

Yes, it seems "./tests/geo-rep/georep-basic-dr-rsync.t" also fails in
the runs where quota-anon-fd-nfs.t fails, and that marks the final
result as a failure.
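
To illustrate what "bad test" means here, a minimal sketch (not the actual
run-tests.sh logic, and the passing test name below is made up): failures of
tests on the known-bad list are simply ignored when the overall regression
result is computed.

    # Illustration only: how a harness could ignore failures of known-bad
    # tests when deciding the overall result.
    BAD_TESTS = [
        "./tests/basic/quota-anon-fd-nfs.t",        # already marked bad
        "./tests/basic/mount-nfs-auth.t",           # added by review 11933
        "./tests/geo-rep/georep-basic-dr-rsync.t",  # proposed in review 11934
    ]

    def overall_status(results):
        """results: dict mapping a test path to True (passed) or False (failed)."""
        hard_failures = [t for t, ok in results.items()
                         if not ok and t not in BAD_TESTS]
        return "SUCCESS" if not hard_failures else "FAILURE"

    # A run where only a known-bad test fails is still reported as SUCCESS.
    print(overall_status({"./tests/basic/quota-anon-fd-nfs.t": False,
                          "./tests/basic/some-other-test.t": True}))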





Is there any recent change that is triggering this behaviour? Also,
currently only one machine is running the NetBSD tests. Can someone with
access to Jenkins bring up a few more slaves to run NetBSD
regressions in parallel?

Regards,
Avra


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD regression failures

2015-08-17 Thread Atin Mukherjee
tests/basic/mount-nfs-auth.t has already been added to the bad tests list by
http://review.gluster.org/11933

~Atin

On 08/17/2015 02:09 PM, Avra Sengupta wrote:
> Will send a patch moving ./tests/basic/mount-nfs-auth.t and
> ./tests/geo-rep/georep-basic-dr-rsync.t to the bad tests list.
> 
> Regards,
> Avra
> 
> On 08/17/2015 12:45 PM, Avra Sengupta wrote:
>> On 08/17/2015 12:29 PM, Vijaikumar M wrote:
>>>
>>>
>>> On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:
 Hi,

 The NetBSD regression tests are continuously failing with errors in
 the following tests:

 ./tests/basic/mount-nfs-auth.t
 ./tests/basic/quota-anon-fd-nfs.t
>>> quota-anon-fd-nfs.t has known issues with NFS client caching, so it is
>>> marked as a bad test; the final result will be marked as success even if
>>> this test fails.
>> Yes, it seems "./tests/geo-rep/georep-basic-dr-rsync.t" also fails in
>> the runs where quota-anon-fd-nfs.t fails, and that marks the final
>> result as a failure.
>>
>>>
>>>
>>>

 Is there any recent change that is triggering this behaviour? Also,
 currently only one machine is running the NetBSD tests. Can someone with
 access to Jenkins bring up a few more slaves to run NetBSD
 regressions in parallel?

 Regards,
 Avra

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD regression failures

2015-08-17 Thread Avra Sengupta
Will send a patch moving ./tests/basic/mount-nfs-auth.t and 
./tests/geo-rep/georep-basic-dr-rsync.t to the bad tests list.


Regards,
Avra

On 08/17/2015 12:45 PM, Avra Sengupta wrote:

On 08/17/2015 12:29 PM, Vijaikumar M wrote:



On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:

Hi,

The NetBSD regression tests are continuously failing with errors in 
the following tests:


./tests/basic/mount-nfs-auth.t
./tests/basic/quota-anon-fd-nfs.t
quota-anon-fd-nfs.t has known issues with NFS client caching, so it is 
marked as a bad test; the final result will be marked as success even if 
this test fails.
Yes, it seems "./tests/geo-rep/georep-basic-dr-rsync.t" also fails in 
the runs where quota-anon-fd-nfs.t fails, and that marks the final 
result as a failure.








Is there any recent change that is triggering this behaviour? Also, 
currently only one machine is running the NetBSD tests. Can someone with 
access to Jenkins bring up a few more slaves to run NetBSD 
regressions in parallel?


Regards,
Avra


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] Gerrit login not working

2015-08-17 Thread Vipul Nayyar
Hey there,
Just checking on the status of this. I still can't log in to Gerrit with
GitHub. I have added my details to the pad.
Regards,
Vipul Nayyar
 


On Thursday, 13 August 2015 7:06 PM, M S Vishwanath Bhat wrote:

Vipul, please add your name to this pad:
https://public.pad.fsfe.org/p/gluster-gerrit-migration

Vijay, Niels or someone will help you out soon.

Cheers
MS

On 13 August 2015 at 18:55, Vipul Nayyar  wrote:

Hey everyone,
I'm having trouble cloning the glusterfs repo from Gerrit. Apparently I need to
update my SSH keys, but I can't log in to the Gerrit system like in the past
with Yahoo or normal email authentication. The GitHub authentication link at
the top right, after authenticating with GitHub credentials, redirects to a
page with simply 'Forbidden' written on it.
I'm not sure if I missed a change on the mailing lists about another method to
officially clone and contribute to the repo, but kindly help me out.
Regards
Vipul Nayyar 

___
Gluster-infra mailing list
gluster-in...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra





  ___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD regression failures

2015-08-17 Thread Jiffin Tony Thottan



On 17/08/15 12:29, Vijaikumar M wrote:



On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:

Hi,

The NetBSD regression tests are continuously failing with errors in 
the following tests:


./tests/basic/mount-nfs-auth.t

I will look into this issue.
--
Jiffin

./tests/basic/quota-anon-fd-nfs.t
quota-anon-fd-nfs.t has known issues with NFS client caching, so it is 
marked as a bad test; the final result will be marked as success even if 
this test fails.






Is there any recent change that is triggering this behaviour? Also, 
currently only one machine is running the NetBSD tests. Can someone with 
access to Jenkins bring up a few more slaves to run NetBSD 
regressions in parallel?


Regards,
Avra


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NetBSD regression failures

2015-08-17 Thread Avra Sengupta

On 08/17/2015 12:29 PM, Vijaikumar M wrote:



On Monday 17 August 2015 12:22 PM, Avra Sengupta wrote:

Hi,

The NetBSD regression tests are continuously failing with errors in 
the following tests:


./tests/basic/mount-nfs-auth.t
./tests/basic/quota-anon-fd-nfs.t
quota-anon-fd-nfs.t has known issues with NFS client caching, so it is 
marked as a bad test; the final result will be marked as success even if 
this test fails.
Yes, it seems "./tests/geo-rep/georep-basic-dr-rsync.t" also fails in the 
runs where quota-anon-fd-nfs.t fails, and that marks the final result as a 
failure.








Is there any recent change that is triggering this behaviour? Also, 
currently only one machine is running the NetBSD tests. Can someone with 
access to Jenkins bring up a few more slaves to run NetBSD 
regressions in parallel?


Regards,
Avra




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel