Re: [Gluster-infra] [Gluster-Maintainers] [Gluster-devel] Master branch lock down status (Fri, August 9th)

2018-08-11 Thread Shyam Ranganathan
On 08/11/2018 02:09 AM, Atin Mukherjee wrote:
> I saw the same behaviour for
> https://build.gluster.org/job/regression-on-demand-full-run/47/consoleFull
> as well. In both cases the common pattern is that a test was retried
> but the job overall succeeded. Is this a bug which got introduced
> recently? At the moment, this is blocking us from debugging any test
> that was retried while the job overall succeeded.
> 
> *01:54:20* Archiving artifacts
> *01:54:21* ‘glusterfs-logs.tgz’ doesn’t match anything
> *01:54:21* No artifacts found that match the file pattern 
> "glusterfs-logs.tgz". Configuration error?
> *01:54:21* Finished: SUCCESS
> 

This has always been the behavior: logs are archived only when
run-tests.sh calls out the run as failed. We do not call out a run as a
failure when a test was retried but the rerun passed, hence no logs.

I will add this today to the WIP testing patchset.
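
A rough sketch of the kind of change intended, assuming run-tests.sh
tracks whether any test needed a retry (the variable names and the exit
code below are hypothetical):

  # in run-tests.sh: remember that a retry happened, even if the rerun passed
  retried=0
  run_one()
  {
      prove -v -e /bin/bash "$1" && return 0
      retried=1                       # first attempt failed; this run used a retry
      prove -v -e /bin/bash "$1"      # the second attempt decides pass/fail, as before
  }

  # at exit: a distinct status for "passed, but with retries" would let the
  # Jenkins post-build step archive glusterfs-logs.tgz for such runs too
  if [ "$retried" -ne 0 ]; then
      exit 3
  fi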

> 
> 
> On Sat, Aug 11, 2018 at 9:40 AM Ravishankar N <ravishan...@redhat.com> wrote:
> 
> 
> 
> On 08/11/2018 07:29 AM, Shyam Ranganathan wrote:
> > ./tests/bugs/replicate/bug-1408712.t (one retry)
> I'll take a look at this. But it looks like archiving the artifacts
> (logs) for this run
> 
> (https://build.gluster.org/job/regression-on-demand-full-run/44/consoleFull)
> 
> was a failure.
> Thanks,
> Ravi
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
Gluster-infra mailing list
Gluster-infra@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] [Gluster-devel] Moving Regressions to Centos 7

2017-12-20 Thread Shyam Ranganathan
On 12/20/2017 04:30 AM, Nigel Babu wrote:
> Hello folks,
> 
> We've been using Centos 6 for our regressions for a long time. I believe
> it's time that we moved to Centos 7. Staying on Centos 6 is causing us
> minor issues. For example, tests run fine on the regression boxes but
> don't work on local machines, or vice-versa. Moving up also gives us the
> ability to use newer versions of tools.
> 
> If nobody has any disagreement, the plan is going to look like this:
> * Bring up 10 Rackspace Centos 7 nodes.
> * Test chunked regression runs on Rackspace Centos 7 nodes for one week.
> * If all works well, kill off all the old nodes and switch all normal
> regressions to Rackspace Centos 7 nodes.
> 
> I expect this process to be complete right around 2nd week of Jan.
> Please let me know if there are concerns.

This aligns with the 4.0 branching date; we need a couple of days of
CentOS7 regressions on master before branching (so that things are sane).
The 2nd week of Jan gives us about 4 days till branching (around the 16th
of Jan), so this sounds good to me.

> 
> -- 
> nigelb
> 
> 
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Where to request for adding versions in Bugzilla

2017-05-15 Thread Shyam

On 05/15/2017 02:56 AM, Raghavendra Talur wrote:

On Mon, May 15, 2017 at 5:49 AM, Vijay Bellur <vbel...@redhat.com> wrote:

On Sun, May 14, 2017 at 8:04 PM, Vijay Bellur <vbel...@redhat.com> wrote:


On 05/14/2017 04:02 PM, Raghavendra Talur wrote:


Hi All,

I need 3.10.1 and 3.10.2 versions added to Glusterfs product. How do I do
it?

You would need to file a request with Red Hat's bugzilla admin. I will
mail details about the process offline.

I have forwarded you details. However I remember a discussion on not adding
bz versions for minor updates in 3.8.x. The expectation was that users would
provide details about the exact version in the bug description. Do we want
to follow a similar process for 3.10?


Thanks Vijay!
Maybe this is why Shyam hasn't created the 3.10.1 version already? If so,
I will not send this request.


We do not add minor versions (as discussed here). We only have 3.8, 3.10,
and (now) 3.11 as the versions.


Users can clarify (or we can request the same) when a minor version is 
required for understanding the problem better.

Thanks,
Vijay

___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [Gluster-devel] Issue in locks xlators

2017-03-10 Thread Shyam

On 03/10/2017 02:45 AM, Xavier Hernandez wrote:

Hi,

I've posted a patch [1] to fix a memory leak in the locks xlator. The fix
seems quite straightforward; however, I've seen a deadlock in the centos
regression twice [2] [3] on the locks_revocation.t test, causing the test
to time out and be aborted.

At first sight I haven't seen other failures of this kind for other
patches, so it seems that the spurious failure has been introduced by my
patch.


This has been a cause of a few aborted runs in the past as well, and 
looks like it continues to be so, see [4].


I do not think this is due to your patch, as there are a few instances 
of the same in the past as well.


fstat.gluster.org unfortunately does not report this test as the cause of
an aborted run (I mean to file a bug on this); otherwise I am sure it
would have bubbled up higher in that report.

Anyone with deeper knowledge on locks xlator can help me identify the
cause ? I'm unable to see how the change can interfere with lock
revocation.

I've tried to reproduce it locally, but the test passed successfully all
times.
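
One way to hunt a sporadic hang like this locally is to loop the test with
a timeout, so that a deadlock shows up as a failure. A minimal sketch,
assuming the test lives under tests/features/ and a generous 900s limit
(adjust both as needed):

  i=0
  while timeout 900 prove -v -e /bin/bash tests/features/locks_revocation.t; do
      i=$((i+1)); echo "pass $i"
  done
  echo "stopped after $i clean runs"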

@Nigel, is it possible to get the logs generated by an aborted job from
some place? I have looked into the place where failed jobs store their
logs, but they aren't there. It seems that the slave node is restarted
after an abort, but the logs are not saved.

Thanks,

Xavi

[1] https://review.gluster.org/16838/
[2] https://build.gluster.org/job/centos6-regression/3563/console
[3] https://build.gluster.org/job/centos6-regression/3579/console
[4] Older failures of lock-revocation.t: 
http://lists.gluster.org/pipermail/gluster-devel/2017-February/052158.html

___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-infra


[Gluster-infra] Priorities on recently filed infrastructure issues (by me)

2017-03-03 Thread Shyam

Hi,

My apologies, I just filed 4 issues against infra and did not call out
what is needed and by when, so here goes. (Prio. 1 does not mean I need
it yesterday; it is just a relative priority among these.)


Prio. 1) https://bugzilla.redhat.com/show_bug.cgi?id=1428032
Update WorkerAnt to post to a github issue as soon as a patch is posted 
against the same


Prio. 1) https://bugzilla.redhat.com/show_bug.cgi?id=1428034
Enable gerrit to convert github issue reference as a hyperlink

The above 2 are P1s, as we need this information flowing for 3.11.

Prio. 2) https://bugzilla.redhat.com/show_bug.cgi?id=1428036
Update rfc.sh to check/request issue # when a commit is an “rfc”
NOTE: This is out of infra anyway, as this is part of the code base, so 
no longer an infra priority
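
For illustration, a minimal sketch of the shape such a check could take,
assuming the convention becomes a "Fixes: #<issue>" or "Updates: #<issue>"
line in the commit message (the exact convention is hypothetical here):

  msg=$(git show -s --format=%B HEAD)
  if ! printf '%s\n' "$msg" | grep -qE '^(Fixes|Updates): #[0-9]+'; then
      echo "rfc: no github issue reference found in HEAD's commit message"
      echo "rfc: add a 'Fixes: #<issue>' or 'Updates: #<issue>' line and retry"
      exit 1
  fi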


Prio. 5) https://bugzilla.redhat.com/show_bug.cgi?id=1428047
Require a Jenkins job to validate Change-ID on commits to branches in 
glusterfs repository
NOTE: This is still under discussion and NEEDINFO on me, but just noting 
it here for completeness
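
For the record, the core of such a job would be small. A sketch, assuming
it runs in a checkout where origin/$BRANCH is the branch being validated
(names hypothetical):

  for c in $(git rev-list "origin/$BRANCH..HEAD"); do
      git show -s --format=%B "$c" | grep -q '^Change-Id: I' || {
          echo "commit $c is missing a Change-Id trailer"
          exit 1
      }
  done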


The above is P5 as we can live without it (as we have been till now) for 
some more time.


Thanks,
Shyam

___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-infra

[Gluster-infra] FB: Branch creation

2016-11-09 Thread Shyam

All,

We (Jeff, Vijay, Amye, and myself) have been in conversations with FB
folks over upstreaming their work and also around helping them catch up
to 3.8 (as they are on 3.6.3 plus a large number of patches, say ~300).


Towards this, to ease their movement to the 3.8 release, considering the 
large number of patches etc. it was proposed that we create a special FB 
branch in our git repo, and enable them to maintain the same.


The whole intent and purpose of this can be found in the attached document.

We discussed this off-list with the maintainers and no objections were
noted; we are now moving this to the respective lists to get it on record
and proceed with the branch creation.


Thanks,
Shyam

# Request for special FB development branch against 3.8
FB wants to move to release 3.8 and has quite a few patches that they need
to upstream and port into the 3.8 branch before they can move to this release.

They also want to be in a better position to do upstream-first work, so that
they can easily absorb the latest LTM releases, which contain their work,
for their use. This requires them to catch up to master first, and vice versa.

Towards this, and to reduce the initial turnaround for FB to get into a
regular upstream cadence, it is proposed that they work from a special 3.8
FB branch.

## How this helps:
This branch will give FB engineers the ability to forward port their changes
into a **static** branch, making it easier for them to move their code upstream.
Further, commits here would go through regular forward porting to master
and later be pulled down to 3.8.next releases, thereby reducing the gap
between the FB branch and the actual release-3.8 branch.

Features that appear in the FB branch would be forward ported to master, and
appear in the next STM or LTM release. This again reduces what FB is tracking
as features in their special branch.

The end goal of this exercise is that FB would have all their code in
master and in the latest LTM release of Gluster, and would hence become
first-class citizens, from then on, in driving their changes upstream.

## Point of branch creation:
Create the FB branch at release-3.8

## Branch name:
  - release-3.8-fb

## Branch needs:
  - FB folks would need full merge rights on their branch
- The exact account IDs will be shared subsequently
  - Gerrit integration would be useful, and is possibly a given
  - Regression test integration would be useful, but not a must
- IOW, even if regression fails, they would have full rights to merge into
their branch

## Workflow envisaged:
1) FB forward ports their changes to release-3.8-fb branch
  - Ensures their tests pass
  - Merges the changes as needed in this branch
  - Takes assistance from Gluster members in forward porting some broader
  changes that either need such assistance or have a strong mutual interest
  in terms of the functionality provided

2) FB and Gluster members forward port bug fixes from the FB branch to master

3) FB and Gluster members backport accepted bug fixes on master to release-3.8
branch
  - Thereby reducing the effort for FB to catch up with the next 3.8.y release

2.1) As in (2), the same applies for features from the FB branch

3.1) As in (3), but features (unless there is a strong exception) are backported
to the next Gluster STM or LTM release.

4) A couple of such cycles later, it should be the case that FB no longer needs
to maintain a special branch and can work from the release branches for their
needs, moving their work to master first and getting it backported to the
appropriate release point.
  - A possible point when this is no longer needed is when 3.10 LTM is
  released (to give a sense of the calendar being sought for this)
  - Another possibility is that we still retain a 3.8+-fb branch for
  resolving their production needs ASAP, rather than waiting on upstream
  resolution
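
## Workflow at the command level (illustrative):
A minimal sketch of steps (1)-(3) above, assuming the usual rfc.sh/gerrit
submission flow; the SHAs are placeholders:

  # (1) forward port a change onto the FB branch
  git checkout release-3.8-fb
  git cherry-pick <sha-from-fb-internal-tree>

  # (2) forward port the same fix to master and post it for review
  git checkout master
  git cherry-pick <sha-on-release-3.8-fb>
  ./rfc.sh

  # (3) once merged on master, backport to release-3.8
  git checkout release-3.8
  git cherry-pick -x <sha-on-master>
  ./rfc.sh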

___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Idea: Failure Trends

2016-08-17 Thread Shyam

On 08/16/2016 09:03 PM, Nigel Babu wrote:

Hello folks,

For the last few weeks, Poornima has been sending in a report of failures over
the last week. I've been thinking of how to use automation to make that report
better. One of the ideas was to use Jenkins to send the email. It seemed like
we were automating the wrong thing there.

Instead, I propose a small website that'll track failures week-on-week and
trends of various failures over time. If a particular failure has increased in
number over one week, we'll know something has gone wrong.

What does everyone think of this before I actually go and write some code?
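
Even before the website exists, the aggregation itself is cheap. A sketch,
assuming a local directory of plain-text console logs in which failed
tests appear on lines beginning with "Failed" (both assumptions about the
setup):

  # count how often each failure line occurs, most frequent first
  grep -h "^Failed" logs/*.log | sort | uniq -c | sort -rn | head -20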


Needed, and would be very useful. Thanks.



--
nigelb
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra




Re: [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core on Linux

2016-03-04 Thread Shyam

Facing the same problem in the following runs as well,

1) 
https://build.gluster.org/job/rackspace-regression-2GB-triggered/18767/console

2) https://build.gluster.org/job/regression-test-burn-in/546/console
3) https://build.gluster.org/job/regression-test-burn-in/547/console
4) https://build.gluster.org/job/regression-test-burn-in/549/console

Last successful burn-in was 545 (but I do not see the test having been
run there, so this is inconclusive).


burn-in test 544 is hung on the same test here, 
https://build.gluster.org/job/regression-test-burn-in/544/console


(and at this point I am stopping the hunt for when this last succeeded :) )

Let us know if anyone is taking a peek at the cores.
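
If it helps whoever picks this up, the quickest first look at a core is
usually all-thread backtraces. A sketch, with both paths hypothetical and
assuming matching debuginfo is available:

  gdb -batch -ex 'thread apply all bt full' \
      /build/install/sbin/glusterfsd ./core.1234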

Thanks,
Shyam

On 03/04/2016 07:40 AM, Krutika Dhananjay wrote:

Could someone from tiering dev team please take a look?

https://build.gluster.org/job/rackspace-regression-2GB-triggered/18793/console

-Krutika


___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

