Re: [Gluster-devel] Spurious failure - directories_miising_after_attach_tier.t

2015-07-28 Thread Mohammed Rafi K C


On 07/28/2015 10:23 AM, Mohammed Rafi K C wrote:
>
> On 07/28/2015 09:15 AM, Atin Mukherjee wrote:
>> I've been seeing
>> tests/basic/tier/bug-1214222-directories_miising_after_attach_tier.t
>> failing multiple times for NetBSD. One of the failures is at [1]. Mind
>> having a look?
> Pamela (CCing ) and I have been looking into this problem. Hopefully
> we can send a patch very soon.

The mount process crashed because a readdirp call was sent to the hot
tier just after the graph switch, even before fix-layout had completed.
A patch [1] has already been pushed for this and should solve the issue.


[1] : http://review.gluster.org/#/c/11368/
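For reference, the usual way the .t tests dodge this class of race is to wait for fix-layout/rebalance to finish before issuing lookups on the mount; the harness's EXPECT_WITHIN helper is essentially a retry loop like the following (a self-contained sketch, not the actual tests/include.rc code; the probe function is a stand-in):

```shell
# Retry-until-timeout loop in the spirit of EXPECT_WITHIN from
# tests/include.rc (sketch only; the real helper has more features).
expect_within() {
    local timeout=$1 expected=$2; shift 2
    local deadline=$(( $(date +%s) + timeout ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
        [ "$("$@")" = "$expected" ] && return 0
        sleep 1
    done
    return 1
}

# Stand-in for a probe that reports fix-layout status on the volume:
fix_layout_status() { echo completed; }

expect_within 5 completed fix_layout_status && echo "safe to read the mount"
```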


Regards
Rafi KC

>
> Regards
> Rafi KC
>
>
>> [1]
>> http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8740/consoleFull

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Modifying GlusterFS to cope with C99 inline semantics

2015-07-28 Thread Jeff Darcy
> IMO this is a different issue, not related to inline semantics.  Anand
> has already posted a patch [1] to address it.
> 
> [1] http://review.gluster.org/#/c/11757/

Thanks, Atin.  Avoiding spurious failures in this test would be a good
thing.  Do you have any information about when the underlying problem
in glusterd might get fixed?


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC (~in 40 minutes)

2015-07-28 Thread Mohammed Rafi K C
Hi all,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.


Regards
Rafi KC



Re: [Gluster-devel] Modifying GlusterFS to cope with C99 inline semantics

2015-07-28 Thread Atin Mukherjee


On 07/28/2015 04:31 PM, Jeff Darcy wrote:
>> IMO this is a different issue, not related to inline semantics.  Anand
>> has already posted a patch [1] to address it.
>>
>> [1] http://review.gluster.org/#/c/11757/
> 
> Thanks, Atin.  Avoiding spurious failures in this test would be a good
> thing.  Do you have any information about when the underlying problem
> in glusterd might get fixed?
GlusterD 2.0 will definitely solve this kind of issue, where the n * n
handshaking can cause races. For the existing GlusterD, having the
volume list protected by URCU could solve it, but I don't see that
getting done in the near future.
> 

-- 
~Atin


[Gluster-devel] broken test on release-3.7

2015-07-28 Thread Emmanuel Dreyfus
Hi

On release-3.7, tests/bugs/replicate/bug-1238508-self-heal.t seems
impossible to pass:

TEST 20 (line 33): 0 afr_get_pending_heal_count patchy
./tests/bugs/replicate/../../include.rc: line 270: afr_get_pending_heal_count: 
command not found

not ok 20 Got "" instead of "0"
RESULT 20: 1

And afr_get_pending_heal_count does not appear anywhere. Am I missing
something, or should it be fixed by someone?
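For context, helpers of this kind just sum the pending-entry counts from `gluster volume heal <vol> info`; a rough, self-contained sketch follows (the heal-info output format and the helper body are assumed for illustration, not copied from tests/include.rc):

```shell
# Sum the "Number of entries: N" lines from heal-info output on stdin
# (sketch; the real helper lives in tests/include.rc).
pending_heal_count() {
    awk '/Number of entries:/ { sum += $NF } END { print sum + 0 }'
}

# Simulated 'gluster volume heal patchy info' output for illustration:
printf 'Brick server1:/b1\nNumber of entries: 2\nBrick server2:/b2\nNumber of entries: 3\n' \
    | pending_heal_count    # prints 5
```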

-- 
Emmanuel Dreyfus
m...@netbsd.org


Re: [Gluster-devel] broken test on release-3.7

2015-07-28 Thread Atin Mukherjee


On 07/28/2015 06:04 PM, Emmanuel Dreyfus wrote:
> On release-3.7, tests/bugs/replicate/bug-1238508-self-heal.t seems
> impossible to pass:
> 
> TEST 20 (line 33): 0 afr_get_pending_heal_count patchy
> ./tests/bugs/replicate/../../include.rc: line 270: 
> afr_get_pending_heal_count: command not found
> 
> not ok 20 Got "" instead of "0"
> RESULT 20: 1
It's because of a missing backport. I've sent a patch [1] for the same.

[1] http://review.gluster.org/#/c/11776/

-- 
~Atin


[Gluster-devel] Minutes from todays Gluster Community Bug Triage meeting

2015-07-28 Thread Mohammed Rafi K C


Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-07-28/gluster-meeting.2015-07-28-12.01.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-07-28/gluster-meeting.2015-07-28-12.01.txt
Log:
http://meetbot.fedoraproject.org/gluster-meeting/2015-07-28/gluster-meeting.2015-07-28-12.01.log.html


Meeting summary
---
* Agenda https://public.pad.fsfe.org/p/gluster-bug-triage  (rafi,
  12:01:15)
* roll call  (rafi, 12:01:22)

* Status of last week's action items  (rafi, 12:04:06)

* group triage  (rafi, 12:04:44)
  * 17 new bugs that have not been triaged yet : http://goo.gl/WuDQun
(rafi, 12:04:53)
  * LINK: http://goo.gl/WuDQun   (rafi, 12:12:17)
  * #info Agenda https://public.pad.fsfe.org/p/gluster-bug-triage
(rafi, 12:36:48)

* open floor  (rafi, 12:40:06)
  * ACTION: rafi need to send a final reminder for automated bug
workflow  (rafi, 12:45:16)
  * kdhananjay suggested an etherpad based locking system  (rafi,
12:56:08)
  * ACTION: need to discuss about a new locking mechanism  (rafi,
12:56:46)

Meeting ended at 12:59:00 UTC.




Action Items

* rafi need to send a final reminder for automated bug workflow
* need to discuss about a new locking mechanism




Action Items, by person
---
* rafi
  * rafi need to send a final reminder for automated bug workflow
* **UNASSIGNED**
  * need to discuss about a new locking mechanism




People Present (lines said)
---
* rafi (57)
* kdhananjay (39)
* atinm (16)
* pranithk (9)
* zodbot (2)




On 07/28/2015 04:50 PM, Mohammed Rafi K C wrote:
> Hi all,
>
> This meeting is scheduled for anyone who is interested in learning more
> about, or assisting with, the Bug Triage.
>
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> ( https://webchat.freenode.net/?channels=gluster-meeting )
> - date: every Tuesday
> - time: 12:00 UTC
> (in your terminal, run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
>
> Currently the following items are listed:
> * Roll Call
> * Status of last week's action items
> * Group Triage
> * Open Floor
>
> The last two topics have space for additions. If you have a suitable bug
> or topic to discuss, please add it to the agenda.
>
> Appreciate your participation.
>
>
> Regards
> Rafi KC
>



Re: [Gluster-devel] broken test on release-3.7

2015-07-28 Thread Emmanuel Dreyfus
On Tue, Jul 28, 2015 at 06:11:24PM +0530, Atin Mukherjee wrote:
> It's because of a missing backport. I've sent a patch [1] for the same.

But how did the breakage get into the branch? Don't we have petty
regression tests that should have caught it?

-- 
Emmanuel Dreyfus
m...@netbsd.org


Re: [Gluster-devel] NetBSD spurious failures

2015-07-28 Thread Mohammed Rafi K C


On 07/24/2015 12:54 PM, Mohammed Rafi K C wrote:
>
>
> On 07/24/2015 12:28 PM, Krutika Dhananjay wrote:
>> Hi,
>>
>> The following tests seem to be failing fairly consistently on NetBSD:
>> tests/basic/tier/bug-1214222-directories_miising_after_attach_tier.t - 
>> http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8661/consoleFull

http://review.gluster.org/#/c/11368/ will solve this issue.

>> tests/basic/tier/tier_lookup_heal.t - 
>> http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8654/consoleFull

Still trying to figure out the root cause. :)


Regards
Rafi KC

>>
>>
>> Could someone from tiering team take a look?
> I will take a look.
>
> Rafi
>
>
>>
>> -Krutika
>>
>



Re: [Gluster-devel] broken test on release-3.7

2015-07-28 Thread Ravishankar N



On 07/28/2015 06:42 PM, Emmanuel Dreyfus wrote:
> On Tue, Jul 28, 2015 at 06:11:24PM +0530, Atin Mukherjee wrote:
>> It's because of a missing backport. I've sent a patch [1] for the same.
> But how did the breakage get into the branch? Don't we have petty
> regression tests that should have caught it?

[1], which did `s/afr_get_pending_heal_count/get_pending_heal_count/`,
was sent first; [2], which added bug-1238508-self-heal.t, was sent after
it, and [1] was merged after [2] had passed regressions.

-Ravi

[1] http://review.gluster.org/#/c/11473/
[2] http://review.gluster.org/#/c/11544/
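For reference, the rename at issue is a substitution of this shape; applied to a line like the one in the failing test it is just (an illustrative sketch using a throwaway file, not the actual patch):

```shell
# The kind of tree-wide rename done by [1], applied to a line like the
# one in bug-1238508-self-heal.t (throwaway file; content illustrative).
t=$(mktemp)
printf 'EXPECT_WITHIN 30 "0" afr_get_pending_heal_count patchy\n' > "$t"
sed 's/afr_get_pending_heal_count/get_pending_heal_count/g' "$t"
# prints: EXPECT_WITHIN 30 "0" get_pending_heal_count patchy
rm -f "$t"
```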


[Gluster-devel] glusterfs-3.7.3 released

2015-07-28 Thread Kaushal M
Hi All.

I'm pleased to announce the release of glusterfs-3.7.3. This release
includes a lot of bug fixes and stabilizes the 3.7 branch further. The
summary of the bugs fixed is available at the end of this mail.

The source and RPMs are available at
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.3/ . I'll
notify the list as other packages become available.

Thanks to all who submitted fixes for this release.

Regards,
Kaushal

## Bugs fixed in this release

1212842: tar on a glusterfs mount displays "file changed as we read
it" even though the file was not changed
1214169: glusterfsd crashed while rebalance and self-heal were in progress
1217722: Tracker bug for Logging framework expansion.
1219358: Disperse volume: client crashed while running iozone
1223318: brick-op failure for glusterd command should log error
message in cmd_history.log
122: BitRot :- Handle brick re-connection sanely in bitd/scrub process
1226830: Scrubber crash upon pause
1227572: Sharding - Fix posix compliance test failures.
1227808: Issues reported by Cppcheck static analysis tool
1228535: Memory leak in marker xlator
1228640: afr: unrecognized option in re-balance volfile
1229282: Disperse volume: Huge memory leak of glusterfsd process
1229563: Disperse volume: Failed to update version and size (error 2)
seen during delete operations
1230327: context of access control translator should be updated
properly for GF_POSIX_ACL_*_KEY xattrs
1230399: [Snapshot] Scheduled job is not processed when one of the
node of shared storage volume is down
1230523: glusterd: glusterd crashing if you run re-balance and vol
status commands in parallel.
1230857: Files migrated should stay on a tier for a full cycle
1231024: scrub frequency and throttle change information need to be
present in Scrubber log
1231608: Add regression test for cluster lock in a heterogeneous cluster
1231767: tiering:compiler warning with gcc v5.1.1
1232173: Incomplete self-heal and split-brain on directories found
when self-healing files/dirs on a replaced disk
1232185: cli correction: if tried to create multiple bricks on same
server shows replicate volume instead of disperse volume
1232199: Skip zero byte files when triggering signing
1232333: Ganesha-ha.sh cluster setup not working with RHEL7 and derivatives
1232335: nfs-ganesha: volume is not in list of exports in case of
volume stop followed by volume start
1232602: bug-857330/xml.t fails spuriously
1232612: Disperse volume: misleading unsuccessful message with heal
and heal full
1232660: Change default values of allow-insecure and bind-insecure
1232883: Snapshot daemon failed to run on newly created dist-rep
volume with uss enabled
1232885: [SNAPSHOT]: "man gluster" needs modification for few snapshot commands
1232886: [SNAPSHOT]: Output message when a snapshot create is issued
when multiple bricks are down needs to be improved
1232887: [SNAPSHOT] : Snapshot delete fails with error - Snap might
not be in an usable state
1232889: Snapshot: When Cluster.enable-shared-storage is enable,
shared storage should get mount after Node reboot
1233041: glusterd crashed when testing heal full on replaced disks
1233158: Null pointer dreference in dht_migrate_complete_check_task
1233518: [Backup]: Glusterfind session(s) created before starting the
volume results in 'changelog not available' error, eventually
1233555: gluster v set help needs to be updated for
cluster.enable-shared-storage option
1233559: libglusterfs: avoid crash due to ctx being NULL
1233611: Incomplete conservative merge for split-brained directories
1233632: Disperse volume: client crashed while running iozone
1233651: pthread cond and mutex variables of fs struct has to be
destroyed conditionally.
1234216: nfs-ganesha: add node fails to add a new node to the cluster
1234225: Data Tiering: add tiering set options to volume set help
(cluster.tier-demote-frequency and cluster.tier-promote-frequency)
1234297: Quota: Porting logging messages to new logging framework
1234408: STACK_RESET may crash with concurrent statedump requests to a
glusterfs process
1234584: nfs-ganesha:delete node throws error and pcs status also
notifies about failures, in fact I/O also doesn't resume post grace
period
1234679: Disperse volume : 'ls -ltrh' doesn't list correct size of the
files every time
1234695: [geo-rep]: Setting meta volume config to false when meta
volume is stopped/deleted leads geo-rep to faulty
1234843: GlusterD does not store updated peerinfo objects.
1234898: [geo-rep]: Feature fan-out fails with the use of meta volume config
1235203: tiering: tier status shows as " progressing " but there is no
rebalance daemon running
1235208: glusterd: glusterd crashes while importing a USS enabled
volume which is already started
1235242: changelog: directory renames not getting recorded
1235258: nfs-ganesha: ganesha-ha.sh --refresh-config not working
1235297: [geo-rep]: set_geo_rep_pem_keys.sh needs modification in
gluster path to support mount broker functionality

Re: [Gluster-devel] NetBSD spurious failures

2015-07-28 Thread Mohammed Rafi K C


On 07/28/2015 06:52 PM, Mohammed Rafi K C wrote:
>
>
> On 07/24/2015 12:54 PM, Mohammed Rafi K C wrote:
>>
>>
>> On 07/24/2015 12:28 PM, Krutika Dhananjay wrote:
>>> Hi,
>>>
>>> The following tests seem to be failing fairly consistently on NetBSD:
>>> tests/basic/tier/bug-1214222-directories_miising_after_attach_tier.t - 
>>> http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8661/consoleFull
>
> http://review.gluster.org/#/c/11368/ will solve this issue.
>
>>> tests/basic/tier/tier_lookup_heal.t - 
>>> http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/8654/consoleFull
>
> Still trying to figure out the root cause. :) .

http://review.gluster.org/#/c/11565/ and
http://review.gluster.org/#/c/11368/ will solve this issue.

Regards
Rafi KC

>
>
> Regards
> Rafi KC
>
>>>
>>>
>>> Could someone from tiering team take a look?
>> I will take a look.
>>
>> Rafi
>>
>>
>>>
>>> -Krutika
>>
>
