Re: [Gluster-devel] Introduce GD_OP_VERSION_5_10 and increase GD_OP_VERSION_MAX to this version

2020-01-29 Thread Atin Mukherjee
There’s no hard requirement to have an op-version tagged against a release unless you introduce a new volume option. What you need to do is introduce a new macro with a higher value than the current max op-version and set the max op-version to that same value - just like how a new volume option is
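For illustration, a minimal sketch of that change as it would appear in libglusterfs/src/glusterfs/globals.h - the numeric encoding (major * 10000 + minor * 100) follows the existing macros, but the exact value here is an assumption, not the merged patch:

    /* New macro for the 5.10 release, following the existing
     * major*10000 + minor*100 encoding (assumed value). */
    #define GD_OP_VERSION_5_10 51000 /* Op-version for GlusterFS 5.10 */

    /* Raise the ceiling so clusters can bump up to the new version. */
    #define GD_OP_VERSION_MAX GD_OP_VERSION_5_10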

Re: [Gluster-devel] [Release-8] Thin-Arbiter: Unique-ID requirement

2020-01-14 Thread Atin Mukherjee
From a design perspective, 2 is a better choice. However, I'd like to see a design for how the cluster id will be generated and maintained (with peer addition/deletion scenarios, node replacement, etc.). On Tue, Jan 14, 2020 at 1:42 PM Amar Tumballi wrote: > Hello, > > As we are gearing up for

Re: [Gluster-devel] Modifying gluster's logging mechanism

2019-11-21 Thread Atin Mukherjee
This is definitely a good start. In fact, the experiment you have done, which indicates a 20% runtime performance improvement without the logger, does put this work in the ‘worth a try’ category for sure. The only thing we need to be mindful of here is the ordering of the logs to be provided, either through a
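As a concrete illustration of one way ordering can survive when log writes move off the hot path (an assumed approach, not the thread's actual proposal): stamp each record with a global sequence number at the call site, so an asynchronous flusher can merge per-thread buffers back into a total order. A minimal standalone C sketch:

    #include <inttypes.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static atomic_uint_fast64_t log_seq; /* zero-initialized at load */

    struct log_record {
        uint64_t seq;    /* defines the total order across all threads */
        const char *msg;
    };

    static void log_stamp(struct log_record *rec, const char *msg)
    {
        /* One atomic increment on the fast path; the expensive I/O
         * happens later in a flusher thread that sorts by seq. */
        rec->seq = atomic_fetch_add(&log_seq, 1);
        rec->msg = msg;
    }

    int main(void)
    {
        struct log_record r;
        log_stamp(&r, "brick process started");
        printf("%" PRIu64 " %s\n", (uint64_t)r.seq, r.msg);
        return 0;
    }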

Re: [Gluster-devel] Upstream nightly build on Centos is failing with glusterd crash

2019-08-27 Thread Atin Mukherjee
This issue is fixed now. Thanks to Nithya for root causing and fixing it. On Fri, Aug 23, 2019 at 11:19 AM Bala Konda Reddy Mekala wrote: > Hi, > On fresh installation with the nightly build[1], "systemctl glusterd > start" is crashing with a glusterd crash (coredump). Bug was filed[2] and >

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4710

2019-08-26 Thread Atin Mukherjee
For the last few days I have been trying to understand the nightly failures we were seeing even after addressing the port-already-in-use issue. So here's the analysis: From the console output of https://build.gluster.org/job/regression-test-burn-in/4710/consoleFull *19:51:56* Started by upstream project

Re: [Gluster-devel] [Gluster-users] [Gluster-Maintainers] glusterfs-7.0rc0 released

2019-08-26 Thread Atin Mukherjee
On Mon, Aug 26, 2019 at 11:18 AM Rinku Kothiya wrote: > Hi, > > Release-7 RC0 packages are built. This is a good time to start testing the > release bits, and reporting any issues on bugzilla. > Do post on the lists any testing done and feedback for the same. > > We have about 2 weeks to GA of

Re: [Gluster-devel] Assistance setting up Gluster

2019-07-23 Thread Atin Mukherjee
Sanju - can you please help Barak? From a quick glance at the log it seems that this wasn’t a clean setup. Barak - can you please empty /var/lib/glusterd/ and start over again? Also make sure that there’s no glusterd process already running. On Mon, 22 Jul 2019 at 14:40, Barak Sason

Re: [Gluster-devel] Re-Compile glusterd1 and add it to the stack

2019-07-15 Thread Atin Mukherjee
David - I don't see a GF_OPTION_INIT in init() of read-only.c. How is that working even when you're compiling the entire source? On Mon, Jul 15, 2019 at 7:40 PM David Spisla wrote: > Hello Vijay, > there is a patch file attached. You can see the code there. I oriented > myself here: >
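For context, a rough sketch of the usual pattern inside an xlator's init(), where GF_OPTION_INIT pulls an option value into the xlator's private struct (the priv type and memory-type constant below are illustrative assumptions, not the actual read-only.c code):

    typedef struct {
        gf_boolean_t readonly;
    } ro_priv_t;

    int32_t
    init(xlator_t *this)
    {
        ro_priv_t *priv = GF_CALLOC(1, sizeof(*priv), gf_common_mt_char);

        if (!priv)
            return -1;

        /* Reads the "read-only" key from this->options (falling back to
         * the default in the xlator's volume_options table) into
         * priv->readonly; jumps to the "out" label on failure. Without
         * this, the configured option never reaches the private state. */
        GF_OPTION_INIT("read-only", priv->readonly, bool, out);

        this->private = priv;
        return 0;
    out:
        GF_FREE(priv);
        return -1;
    }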

Re: [Gluster-devel] Requesting reviews [Re: Release 7 Branch Created]

2019-07-15 Thread Atin Mukherjee
Please ensure: 1. the commit message explains the motive behind this change; 2. I always feel more confident kick-starting a review if a patch has passed regression - can you please ensure that the Verified flag is put up? On Mon, Jul 15, 2019 at 5:27 PM Jiffin Tony Thottan wrote: > Hi,

Re: [Gluster-devel] Release 5.7 or 5.8

2019-07-14 Thread Atin Mukherjee
On Fri, 12 Jul 2019 at 21:14, Niels de Vos wrote: > On Thu, Jul 11, 2019 at 01:02:48PM +0530, Hari Gowtham wrote: > > Hi, > > > > We came across a build issue with release 5.7. It was related to the > > python version. > > A fix for it has been posted [ >

[Gluster-devel] Rebase your patches to avoid fedora-smoke failure

2019-07-13 Thread Atin Mukherjee
With https://review.gluster.org/23033 now merged, we should be unblocked on the fedora-smoke failure. I request all patch owners to rebase their respective patches to get unblocked.

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4631

2019-07-02 Thread Atin Mukherjee
Can we check the following failure? 1 test(s) failed ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t -- Forwarded message - From: Date: Mon, Jul 1, 2019 at 11:48 PM Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4631 To: See <

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4632

2019-07-02 Thread Atin Mukherjee
Can we check these failures please? 2 test(s) failed ./tests/bugs/glusterd/bug-1699339.t ./tests/bugs/glusterd/bug-857330/normal.t -- Forwarded message - From: Date: Wed, Jul 3, 2019 at 12:08 AM Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in

[Gluster-devel] Fwd: Details on the Scan.coverity.com June 2019 Upgrade

2019-06-12 Thread Atin Mukherjee
Coverity Scan 2019 Upgrade. Dear Atin Mukherjee, Thank you for being an active user of scan.coverity.com. We have some important news to share

[Gluster-devel] https://build.gluster.org/job/centos7-regression/6404/consoleFull - Problem accessing //job/centos7-regression/6404/consoleFull. Reason: Not found

2019-06-11 Thread Atin Mukherjee
https://bugzilla.redhat.com/show_bug.cgi?id=1719174 The patch which failed the regression is https://review.gluster.org/22851.

Re: [Gluster-devel] [Gluster-Maintainers] Fwd: Build failed in Jenkins: regression-test-with-multiplex #1359

2019-06-10 Thread Atin Mukherjee
ith-multiplex/1359/) is when this test started failing. > I can recreate the issues locally on my laptop too. > > > On Sat, Jun 1, 2019 at 4:55 PM Atin Mukherjee wrote: > >> subdir-mount.t has started failing in brick mux regression nightly. This >> needs to be

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1357

2019-06-01 Thread Atin Mukherjee
Rafi - tests/bugs/glusterd/serialize-shd-manager-glusterd-restart.t seems to be failing often. Can you please investigate the reason for this spurious failure? -- Forwarded message - From: Date: Thu, 30 May 2019 at 23:22 Subject: [Gluster-Maintainers] Build failed in Jenkins:

[Gluster-devel] tests are timing out in master branch

2019-05-14 Thread Atin Mukherjee
There are random tests timing out after 200 secs. My belief is that this is either a major regression introduced by some recent commit, or the builders have become extremely slow, which I highly doubt. I'd request that we first figure out the cause, get master back to its proper health and then get

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-05-08 Thread Atin Mukherjee
On Wed, May 8, 2019 at 7:38 PM Atin Mukherjee wrote: > builder204 needs to be fixed, too many failures, mostly none of the > patches are passing regression. > And with that builder201 joins the pool, https://build.gluster.org/job/centos7-regression/5943/consoleFull > On Wed, May

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-05-08 Thread Atin Mukherjee
builder204 needs to be fixed, too many failures, mostly none of the patches are passing regression. On Wed, May 8, 2019 at 9:53 AM Atin Mukherjee wrote: > > > On Wed, May 8, 2019 at 7:16 AM Sanju Rakonde wrote: > >> Deepshikha, >> >> I see the failure here[1] w

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-05-07 Thread Atin Mukherjee
ion tonight, so if I find nothing, >>> until I leave, I guess Deepshika will have to look. >>> >>> > On Wed, Apr 24, 2019 at 5:30 PM Yaniv Kaul wrote: >>> > >>> > > >>> > > >>> > > On Tue, Apr

Re: [Gluster-devel] [Gluster-users] Meeting Details on footer of the gluster-devel and gluster-user mailing list

2019-05-07 Thread Atin Mukherjee
On Wed, May 8, 2019 at 9:45 AM Atin Mukherjee wrote: > > > On Wed, May 8, 2019 at 12:08 AM Vijay Bellur wrote: > >> >> >> On Tue, May 7, 2019 at 11:15 AM FNU Raghavendra Manjunath < >> rab...@redhat.com> wrote: >> >>> >>>

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Atin Mukherjee
On Fri, 3 May 2019 at 16:07, Amar Tumballi Suryanarayan wrote: > > > On Fri, May 3, 2019 at 3:17 PM Atin Mukherjee wrote: > >> >> >> On Fri, 3 May 2019 at 14:59, Xavi Hernandez wrote: >> >>> Hi Atin, >>> >>> On Fri, May 3, 2019

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Atin Mukherjee
On Fri, 3 May 2019 at 14:59, Xavi Hernandez wrote: > Hi Atin, > > On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee > wrote: > >> I'm bit puzzled on the way coverity is reporting the open defects on GD1 >> component. As you can see from [1], technically we hav

[Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Atin Mukherjee
I'm a bit puzzled by the way Coverity is reporting the open defects on the GD1 component. As you can see from [1], technically we have 6 open defects and all of the rest are marked as dismissed. We tried to put some additional annotations in the code through [2] to see if Coverity starts feeling
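For reference, the annotations referred to in [2] are one-line comments of the form /* coverity[event] */ placed immediately above the line the scanner reports the event on; whether the hosted scan actually honors them across runs is exactly what this thread is questioning. A minimal sketch, with register_ctx() as an assumed helper:

    #include <stdlib.h>

    /* Assumed helper: stores ctx in a registry that is freed at
     * shutdown, which the analyzer cannot see. */
    extern void register_ctx(void *ctx);

    void
    setup_ctx(void)
    {
        void *ctx = malloc(64);

        if (!ctx)
            return;

        /* coverity[leaked_storage] */
        register_ctx(ctx); /* ownership transferred; not a real leak */
    }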

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Atin Mukherjee
On Thu, 2 May 2019 at 20:38, Xavi Hernandez wrote: > On Thu, May 2, 2019 at 4:06 PM Atin Mukherjee > wrote: > >> >> >> On Thu, 2 May 2019 at 19:14, Xavi Hernandez >> wrote: >> >>> On Thu, 2 May 2019, 15:37 Milind Changire, wrote: >&g

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Atin Mukherjee
On Thu, 2 May 2019 at 19:14, Xavi Hernandez wrote: > On Thu, 2 May 2019, 15:37 Milind Changire, wrote: > >> On Thu, May 2, 2019 at 6:44 PM Xavi Hernandez >> wrote: >> >>> Hi Ashish, >>> >>> On Thu, May 2, 2019 at 2:17 PM Ashish Pandey >>> wrote: >>> Xavi, I would like to keep

Re: [Gluster-devel] Weekly Untriaged Bugs

2019-04-28 Thread Atin Mukherjee
While I understand this report captures bugs filed in the last week that do not have the ‘Triaged’ keyword, would it make better sense to exclude bugs which aren’t in NEW state? I believe the intention of this report is to check which bugs haven’t been looked at by maintainers/developers yet. BZs

Re: [Gluster-devel] [Gluster-Maintainers] BZ updates

2019-04-23 Thread Atin Mukherjee
Absolutely agree and I definitely think this would help going forward. On Wed, Apr 24, 2019 at 8:45 AM Nithya Balachandran wrote: > All, > > When working on a bug, please ensure that you update the BZ with any > relevant information as well as the RCA. I have seen several BZs in the > past

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-04-22 Thread Atin Mukherjee
Is this back again? The recent patches are failing regression :-\. On Wed, 3 Apr 2019 at 19:26, Michael Scherer wrote: > On Wednesday 3 April 2019 at 16:30 +0530, Atin Mukherjee wrote: > > On Wed, Apr 3, 2019 at 11:56 AM Jiffin Thottan > > wrot

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-04-17 Thread Atin Mukherjee
On Wed, 17 Apr 2019 at 10:53, Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.z...@nokia-sbell.com> wrote: > Hi, > > In my recent test, I found that there are very severe glusterfsd memory > leak when enable socket ssl option > What gluster version are you testing? Would you be able to continue your

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Atin Mukherjee
On Wed, Apr 17, 2019 at 12:33 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Tue, Apr 16, 2019 at 10:27 PM Atin Mukherjee > wrote: > >> >> >> On Tue, Apr 16, 2019 at 9:19 PM Atin Mukherjee >> wrote: >> >>> >>

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Atin Mukherjee
On Tue, Apr 16, 2019 at 10:26 PM Atin Mukherjee wrote: > > > On Tue, Apr 16, 2019 at 9:19 PM Atin Mukherjee > wrote: > >> >> >> On Tue, Apr 16, 2019 at 7:24 PM Shyam Ranganathan >> wrote: >> >>> Status: Tagging pending >>> >

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Atin Mukherjee
On Tue, Apr 16, 2019 at 9:19 PM Atin Mukherjee wrote: > > > On Tue, Apr 16, 2019 at 7:24 PM Shyam Ranganathan > wrote: > >> Status: Tagging pending >> >> Waiting on patches: >> (Kotresh/Atin) - glusterd: fix loading ctime in client graph logic >> h

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Atin Mukherjee
cent bug (15th April), does not seem to have any critical data > corruption or service availability issues, planning on not waiting for > the fix in 6.1 > > - Shyam > On 4/6/19 4:38 AM, Atin Mukherjee wrote: > > Hi Mohit, > > > > https://review.gluster.org/22495 should

Re: [Gluster-devel] SHD crash in https://build.gluster.org/job/centos7-regression/5510/consoleFull

2019-04-10 Thread Atin Mukherjee
Rafi mentioned to me earlier that this will be fixed through https://review.gluster.org/22468. This crash is seen more and more often in the nightly regressions these days. The patch needs review, and I'd request the respective maintainers to take a look at it. On Wed, Apr 10, 2019 at 5:08 PM Nithya

Re: [Gluster-devel] Release 6.1: Expected tagging on April 10th

2019-04-06 Thread Atin Mukherjee
Hi Mohit, https://review.gluster.org/22495 should get into 6.1 as it’s a regression. Can you please attach the respective bug to the tracker Ravi pointed out? On Sat, 6 Apr 2019 at 12:00, Ravishankar N wrote: > Tracker bug is https://bugzilla.redhat.com/show_bug.cgi?id=1692394, in > case

Re: [Gluster-devel] [Gluster-infra] rebal-all-nodes-migrate.t always fails now

2019-04-04 Thread Atin Mukherjee
ael Scherer wrote: > > On Thursday 4 April 2019 at 13:53 +0200, Michael Scherer wrote: > > > On Thursday 4 April 2019 at 16:13 +0530, Atin Mukherjee wrote: > > > > Based on what I have seen, any multi-node test case will fail > > > > and > >

[Gluster-devel] rebal-all-nodes-migrate.t always fails now

2019-04-04 Thread Atin Mukherjee
Based on what I have seen, any multi-node test case will fail, and the above one is picked first from that group; if I am correct, none of the code fixes will go through regression until this is fixed. I suspect it to be an infra issue again. If we look at

Re: [Gluster-devel] is_nfs_export_available from nfs.rc failing too often?

2019-04-03 Thread Atin Mukherjee
> can happen or not. > That's precisely what the question is. Why are we suddenly seeing this happen so frequently? Today I saw at least 4 to 5 such failures already. Deepshika - Can you please help in inspecting this? > Regards, > Jiffin > > > - Original Message -

[Gluster-devel] is_nfs_export_available from nfs.rc failing too often?

2019-04-02 Thread Atin Mukherjee
I'm observing the above test function failing too often, because of which the arbiter-mount.t test fails in many regression jobs. This frequency of failures wasn't there earlier. Does anyone know what has changed recently to cause these failures in regression? I also hear that when such a failure happens a

[Gluster-devel] Backporting important fixes in release branches

2019-04-02 Thread Atin Mukherjee
Of late my observation has been that we're failing to backport critical/important fixes into the release branches, and we do a course correction only when users discover the problems, which isn't a great experience. I request all developers and maintainers to pay some attention to (a) deciding on

Re: [Gluster-devel] [ovirt-users] oVirt Survey 2019 results

2019-04-02 Thread Atin Mukherjee
Thanks Sahina for including the Gluster community mailing lists. As Sahina already mentioned, we had a strong focus on the upgrade testing path before releasing glusterfs-6. We conducted a test day and, along with functional pieces, tested upgrade paths like 3.12, 4 & 5 to release-6; we encountered

[Gluster-devel] Quick update on glusterd's volume scalability improvements

2019-03-29 Thread Atin Mukherjee
All, As many of you already know, the design with which GlusterD (from here on referred to as GD1) was implemented has some fundamental scalability bottlenecks at the design level, especially around its way of handshaking configuration metadata and replicating it across all the peers.

Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-22 Thread Atin Mukherjee
On Fri, 22 Mar 2019 at 20:07, Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > On Wed, Mar 20, 2019 at 10:00 AM Atin Mukherjee > wrote: > > > > From glusterd perspective couple of enhancements I'd propose to be added > (a) to capture get-

[Gluster-devel] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Atin Mukherjee
All, In the last few releases of glusterfs, with stability as a primary theme, there have been lots of code-optimization changes made with the expectation that they would help gluster provide better performance. While many of these changes do help, of late we
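A minimal sketch of why this particular conversion is risky (the struct and memory-type constant are illustrative, not actual gluster code): GF_CALLOC hands back zeroed memory, GF_MALLOC does not, so any caller relying on zero-initialized fields breaks silently:

    /* Illustrative struct; not from the gluster tree. */
    struct peer_ctx {
        int port;
        char *hostname; /* callers elsewhere test: if (ctx->hostname) ... */
    };

    static struct peer_ctx *
    peer_ctx_new(void)
    {
        /* Safe: GF_CALLOC returns zeroed memory, so hostname == NULL
         * is guaranteed and the NULL check above always works. */
        struct peer_ctx *ctx = GF_CALLOC(1, sizeof(*ctx), gf_common_mt_char);

        /* The "optimized" conversion skips the zeroing; every field
         * holds garbage until assigned, so it is only safe if every
         * member is explicitly initialized before first use:
         *
         *     ctx = GF_MALLOC(sizeof(*ctx), gf_common_mt_char);
         *     if (ctx) {
         *         ctx->port = 0;
         *         ctx->hostname = NULL;  // miss one field -> heisenbug
         *     }
         */
        return ctx;
    }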

Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-19 Thread Atin Mukherjee
From the glusterd perspective, a couple of enhancements I'd propose to be added: (a) capture the get-state dump and make it part of sosreport. Of late, we have seen the get-state dump be very helpful in debugging a few cases, apart from its original purpose of providing a source of cluster/volume

Re: [Gluster-devel] [Gluster-Maintainers] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Atin Mukherjee
On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan wrote: > Thanks to those who participated. > > Update at present: > > We found 3 blocker bugs in upgrade scenarios, and hence have marked release > as pending upon them. We will keep these lists updated about progress. I’d like to clarify

[Gluster-devel] test failure reports for last 30 days

2019-02-26 Thread Atin Mukherjee
[1] captures the test-failure report for the last 30 days, and we'd need volunteers/component owners to see why the number of failures is so high for a few tests. [1] https://fstat.gluster.org/summary?start_date=2019-01-26&end_date=2019-02-25&branch=all

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-16 Thread Atin Mukherjee
On Tue, Jan 15, 2019 at 2:13 PM Atin Mukherjee wrote: > Interesting. I’ll do a deep dive at it sometime this week. > > On Tue, 15 Jan 2019 at 14:05, Xavi Hernandez wrote: > >> On Mon, Jan 14, 2019 at 11:08 AM Ashish Pandey >> wrote: >> >>> >>> I

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-15 Thread Atin Mukherjee
=, parent=0x7f8374ff8de0, data=0x7f83a8030ab0) at >>> ec-heald.c:294 >>> #4 0x7f83bc930ac2 in syncop_ftw (subvol=0x7f83a801b890, >>> loc=loc@entry=0x7f8374ff8de0, pid=pid@entry=-6, >>> data=data@entry=0x7f83a8030ab0, >>> fn=fn@entry=0x7f83add03140 ) at s

[Gluster-devel] GCS 0.5 release

2019-01-10 Thread Atin Mukherjee
Today, we are announcing the availability of GCS (Gluster Container Storage) 0.5. Highlights and updates since v0.4: - GCS environment updated to kube 1.13 - CSI deployment moved to 1.0 - Integrated Anthill deployment - Kube & etcd metrics added to prometheus - Tuning of etcd to increase

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-10 Thread Atin Mukherjee
Mohit, Sanju - request you to investigate the failures related to glusterd and brick-mux and report back to the list. On Thu, Jan 10, 2019 at 12:25 AM Shyam Ranganathan wrote: > Hi, > > As part of branching preparation next week for release-6, please find > test failures and respective test

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4293

2018-12-30 Thread Atin Mukherjee
Can we please check the reason for the failures? -- Forwarded message - From: Date: Sat, 29 Dec 2018 at 23:48 Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4293 To: See <

[Gluster-devel] Update on GCS 0.5 release

2018-12-24 Thread Atin Mukherjee
We've decided to delay the GCS 0.5 release and postpone it by a few days (new date: 1st week of Jan), considering (a) most of the team members are out on holidays and (b) some of the critical issues/PRs from [1] are yet to be addressed. Regards, GCS team [1] https://waffle.io/gluster/gcs?label=GCS%2F0.5

[Gluster-devel] GCS 0.4 release

2018-12-12 Thread Atin Mukherjee
Today, we are announcing the availability of GCS (Gluster Container Storage) 0.4. The release was a bit delayed to address some of the critical issues identified. This release brings in a good amount of bug fixes along with some key feature enhancements in GlusterD2. We’d request all of you to try

Re: [Gluster-devel] Shard test failing more commonly on master

2018-12-04 Thread Atin Mukherjee
We can't afford to keep a bad test hanging for more than a day, which penalizes other fixes by blocking them (I see at least 4-5 more patches that failed on the same test today). I thought we already had a rule to mark a test as bad at the earliest in such occurrences. Not sure why we haven't done that yet. In

Re: [Gluster-devel] GD2 & glusterfs smoke issue

2018-11-08 Thread Atin Mukherjee
On Thu, 8 Nov 2018 at 15:07, Yaniv Kaul wrote: > > > On Tue, Nov 6, 2018 at 11:34 AM Atin Mukherjee > wrote: > >> We have enabled GD2 smoke results as a mandatory vote in glusterfs smoke >> since yesterday through BZ [1], however we just started seeing GD2 smoke >&

[Gluster-devel] GD2 & glusterfs smoke issue

2018-11-06 Thread Atin Mukherjee
We have enabled GD2 smoke results as a mandatory vote in glusterfs smoke since yesterday through BZ [1]; however, we just started seeing GD2 smoke failing, which means glusterfs smoke on all the patches will not go through at the moment. GD2 dev is currently working on it and trying to rectify the

Re: [Gluster-devel] Whats latest on Glusto + GD2 integration?

2018-11-04 Thread Atin Mukherjee
Thank you, Rahul, for the report. This does help keep the community up to date on the effort being put in here and understand where things stand. Some comments inline. On Sun, Nov 4, 2018 at 8:01 PM Rahul Hinduja wrote: > Hello, > > Over the past few weeks, a few folks have been engaged in integrating gd2

[Gluster-devel] Update on GCS 0.2 release

2018-10-29 Thread Atin Mukherjee
The GCS 0.2 release is a bit delayed and we expect to have it out this week. The primary reason is one of the issues filed under GCS which was highlighted as critical at [1]. The team is actively working on this issue to understand if it has something to do with

Re: [Gluster-devel] Gluster Weekly Report : Static Analyser

2018-10-26 Thread Atin Mukherjee
On Fri, 26 Oct 2018 at 21:17, Sunny Kumar wrote: > Hello folks, > > The current status of the static analyser is below: > > Coverity scan status: > Last week we started from 145 and now it's 135 (26th Oct scan) and 3 > new defects got introduced. We fixed all 3 of them. > Major contributors - Sunny

[Gluster-devel] Fwd: New Defects reported by Coverity Scan for gluster/glusterfs

2018-10-12 Thread Atin Mukherjee
Write behind related changes introduced new defects. -- Forwarded message - From: Date: Fri, 12 Oct 2018 at 20:43 Subject: New Defects reported by Coverity Scan for gluster/glusterfs To: Hi, Please find the latest report on new defect(s) introduced to gluster/glusterfs found

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Missing option documentation (need inputs)

2018-10-10 Thread Atin Mukherjee
On Wed, 10 Oct 2018 at 20:30, Shyam Ranganathan wrote: > The following options were added post 4.1 and are part of 5.0 as the > first release for the same. They were added in as part of bugs, and > hence looking at github issues to track them as enhancements did not > catch the same. > > We need
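For context, a rough sketch of where that documentation would live - an entry in glusterd's volume-option table (the table is in xlators/mgmt/glusterd/src/glusterd-volume-set.c; field names are recalled from memory and the option itself is made up). The .description string is what option-documentation tooling such as 'gluster volume set help' can surface:

    {
        .key = "cluster.example-option",   /* made-up option key */
        .voltype = "cluster/distribute",
        .op_version = GD_OP_VERSION_5_0,
        .description = "One or two sentences here are what ends up in "
                       "the option documentation; filling this in for "
                       "the options listed above is the ask.",
    },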

Re: [Gluster-devel] Nightly build status (week of 01 - 07 Oct, 2018)

2018-10-09 Thread Atin Mukherjee
On Wed, Oct 10, 2018 at 4:20 AM Shyam Ranganathan wrote: > We have a set of 4 cores which seem to originate from 2 bugs as filed > and referenced below. > > Bug 1: https://bugzilla.redhat.com/show_bug.cgi?id=1636570 > Cleanup sequence issues in posix xlator. Mohit/Xavi/Du/Pranith are we >

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-05 Thread Atin Mukherjee
On Fri, 5 Oct 2018 at 20:29, Shyam Ranganathan wrote: > On 10/04/2018 11:33 AM, Shyam Ranganathan wrote: > > On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: > >> RC1 would be around 24th of Sep. with final release tagging around 1st > >> of Oct. > > > > RC1 now stands to be tagged tomorrow, and

Re: [Gluster-devel] POC- Distributed regression testing framework

2018-10-04 Thread Atin Mukherjee
Deepshika, Please keep us posted if you see that particular glusterd test failing again. It’ll be great to see this nightly job green sooner rather than later :-). On Thu, 4 Oct 2018 at 15:07, Deepshikha Khandelwal wrote: > On Thu, Oct 4, 2018 at 6:10 AM Sanju Rakonde wrote: > > > > > > > > On

Re: [Gluster-devel] Release 5: Branched and further dates

2018-10-04 Thread Atin Mukherjee
On Thu, Oct 4, 2018 at 9:03 PM Shyam Ranganathan wrote: > On 09/13/2018 11:10 AM, Shyam Ranganathan wrote: > > RC1 would be around 24th of Sep. with final release tagging around 1st > > of Oct. > > RC1 now stands to be tagged tomorrow, and patches that are being > targeted for a back port

Re: [Gluster-devel] Status update : Brick Mux threads reduction

2018-10-03 Thread Atin Mukherjee
I have rebased [1] and triggered brick-mux regression as we fixed one genuine snapshot test failure in brick mux through https://review.gluster.org/#/c/glusterfs/+/21314/ which got merged today. On Thu, Oct 4, 2018 at 10:39 AM Poornima Gurusiddaiah wrote: > Hi, > > For each brick, we create

Re: [Gluster-devel] Proposal to change Gerrit -> Bugzilla updates

2018-09-11 Thread Atin Mukherjee
On Mon, Sep 10, 2018 at 7:09 PM Shyam Ranganathan wrote: > On 09/10/2018 08:37 AM, Nigel Babu wrote: > > Hello folks, > > > > We now have review.gluster.org as an > > external tracker on Bugzilla. Our current automation when there is a > > bugzilla attached to a patch

[Gluster-devel] glusterd.log file - few observations

2018-09-09 Thread Atin Mukherjee
As highlighted in the last maintainers' meeting, I'm seeing some log entries in the glusterd log file which (a) are informative in one way but can cause excessive logging and potentially run a user into an out-of-space issue, and (b) in some cases might not be errors or could be avoided. Even though
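A small sketch of the kind of demotion being suggested, using the libglusterfs logging API (the message id, strings, and surrounding function are assumptions): steady-state chatter moves to DEBUG so it only appears when the log level is raised, while genuine failures stay at ERROR with a proper errno and message id:

    static void
    log_volume_status(xlator_t *this, const char *volname)
    {
        /* Routine, per-request chatter: DEBUG, invisible at the
         * default INFO log level, so glusterd.log stays small. */
        gf_msg_debug(this->name, 0,
                     "received status request for volume %s", volname);

        /* A genuine failure: keep at ERROR, with errno and msg-id
         * (GD_MSG_NO_MEMORY assumed here) so it is searchable. */
        gf_msg(this->name, GF_LOG_ERROR, ENOMEM, GD_MSG_NO_MEMORY,
               "failed to allocate volinfo for volume %s", volname);
    }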

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4067

2018-08-17 Thread Atin Mukherjee
C7 nightly has a crash too. -- Forwarded message - From: Date: Sat, 18 Aug 2018 at 00:01 Subject: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4067 To: See < https://build.gluster.org/job/regression-test-burn-in/4067/display/redirect?page=changes >

[Gluster-devel] Fwd: [Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #831

2018-08-17 Thread Atin Mukherjee
This is the first nightly job failure since we reopened the master branch. The crash seems to be from the fini() code path. This needs investigation and an RCA. -- Forwarded message - From: Date: Fri, 17 Aug 2018 at 23:54 Subject: [Gluster-Maintainers] Build failed in Jenkins:

Re: [Gluster-devel] Master branch is closed

2018-08-13 Thread Atin Mukherjee
Nigel, Now that the master branch is reopened, can you please revoke the commit access restrictions? On Mon, 6 Aug 2018 at 09:12, Nigel Babu wrote: > Hello folks, > > Master branch is now closed. Only a few people have commit access now and > it's to be exclusively used to merge fixes to make

[Gluster-devel] Out of regression builders

2018-08-11 Thread Atin Mukherjee
As both Shyam & I are running multiple flavours of manually triggered regression jobs (lcov, centos-7, brick-mux) on top of https://review.gluster.org/#/c/glusterfs/+/20637/, we'd need to occupy most of the builders. I have currently run out of builders to trigger some of the runs and have observed

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Fri, August 9th)

2018-08-11 Thread Atin Mukherjee
I saw the same behaviour for https://build.gluster.org/job/regression-on-demand-full-run/47/consoleFull as well. In both cases the common pattern is that a test was retried but the overall job succeeded. Is this a bug that got introduced recently? At the moment, this is blocking us from debugging any

[Gluster-devel] tests/bugs/core/multiplex-limit-issue-151.t timed out

2018-08-10 Thread Atin Mukherjee
https://build.gluster.org/job/line-coverage/455/consoleFull 1 test failed: tests/bugs/core/multiplex-limit-issue-151.t (timed out) The last job https://build.gluster.org/job/line-coverage/454/consoleFull took only 21 secs, so we're nowhere near breaching the timeout threshold.

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Thu, August 09th)

2018-08-10 Thread Atin Mukherjee
Pranith, https://review.gluster.org/c/glusterfs/+/20685 seems to have caused multiple failed runs of https://review.gluster.org/c/glusterfs/+/20637/8 in yesterday's report. Did you get a chance to look at it? On Fri, Aug 10, 2018 at 1:03 PM Pranith Kumar Karampuri wrote: > > > On Fri,

[Gluster-devel] tests/bugs/glusterd/quorum-validation.t ==> glusterfsd core

2018-08-08 Thread Atin Mukherjee
See https://build.gluster.org/job/line-coverage/435/consoleFull; the core file can be extracted from [1]. The core seems to be coming from the changelog xlator. Please note line-cov doesn't run with brick mux enabled. [1]

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status (Wed, August 08th)

2018-08-08 Thread Atin Mukherjee
On Thu, 9 Aug 2018 at 06:34, Shyam Ranganathan wrote: > Today's patch set 7 [1], included fixes provided till last evening IST, > and its runs can be seen here [2] (yay! we can link to comments in > gerrit now). > > New failures: (added to the spreadsheet) >

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down status

2018-08-08 Thread Atin Mukherjee
On Wed, Aug 8, 2018 at 5:08 AM Shyam Ranganathan wrote: > Deserves a new beginning, threads on the other mail have gone deep enough. > > NOTE: (5) below needs your attention, rest is just process and data on > how to find failures. > > 1) We are running the tests using the patch [2]. > > 2) Run

Re: [Gluster-devel] Test: ./tests/bugs/ec/bug-1236065.t

2018-08-07 Thread Atin Mukherjee
+Mohit Requesting Mohit for help. On Wed, 8 Aug 2018 at 06:53, Shyam Ranganathan wrote: > On 08/07/2018 07:37 PM, Shyam Ranganathan wrote: > > 5) Current test failures > > We still have the following tests failing and some without any RCA or > > attention, (If something is incorrect, write

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-05 Thread Atin Mukherjee
On Mon, 6 Aug 2018 at 06:09, Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > On Mon, Aug 6, 2018 at 5:17 AM, Amye Scavarda wrote: > > > > > > On Sun, Aug 5, 2018 at 3:24 PM Shyam Ranganathan > > wrote: > >> > >> On 07/31/2018 07:16 AM, Shyam Ranganathan wrote: > >> > On

[Gluster-devel] validation-server-quorum.t crash

2018-08-04 Thread Atin Mukherjee
The patch [1] addresses the $Subject and needs to get into master to address the frequent failures. Requesting your reviews. [1] https://review.gluster.org/#/c/20584/

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Atin Mukherjee
New addition - tests/basic/volume.t - failed at least twice with an shd core. One such ref - https://build.gluster.org/job/centos7-regression/2058/console On Thu, Aug 2, 2018 at 6:28 PM Sankarshan Mukhopadhyay < sankarshan.mukhopadh...@gmail.com> wrote: > On Thu, Aug 2, 2018 at 5:48 PM, Kotresh

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Atin Mukherjee
On Thu, Aug 2, 2018 at 4:37 PM Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > > > On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez > wrote: > >> On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee >> wrote: >> >>> >>> >&

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Atin Mukherjee
On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee wrote: > I just went through the nightly regression report of brick mux runs and > here's what I can sum

Re: [Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-08-01 Thread Atin Mukherjee
4/console >> >> -Krutika >> >> On Sun, Jul 29, 2018 at 1:53 PM, Atin Mukherjee >> wrote: >> >>> tests/bugs/distribute/bug-1122443.t fails my set up (3 out of 5 times) >>> running with master branch. As per my knowledge I've not seen th

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-07-31 Thread Atin Mukherjee
I just went through the nightly regression report of brick mux runs and here's what I can summarize. Fails only with brick-mux:

[Gluster-devel] tests/bugs/distribute/bug-1122443.t - spurious failure

2018-07-29 Thread Atin Mukherjee
tests/bugs/distribute/bug-1122443.t fails on my setup (3 out of 5 times) running with the master branch. To my knowledge, I've not seen this test failing earlier, so it looks like some recent change has caused it. One such instance is https://build.gluster.org/job/centos7-regression/1955/. Request

[Gluster-devel] tests/00-geo-rep/georep-basic-dr-tarssh.t times out after 200 secs

2018-07-10 Thread Atin Mukherjee
https://build.gluster.org/job/regression-on-demand-multiplex/115/consoleFull is one such reference. I am sure we'd see it in non-brick-mux regression reports too.

Re: [Gluster-devel] [Gluster-infra] bug-1432542-mpx-restart-crash.t failing

2018-07-07 Thread Atin Mukherjee
https://build.gluster.org/job/regression-test-with-multiplex/794/display/redirect has the same test failing. Is the reason for the failure different, given this is on Jenkins? On Sat, 7 Jul 2018 at 19:12, Deepshikha Khandelwal wrote: > Hi folks, > > The issue[1] has been resolved. Now the

Re: [Gluster-devel] Running regressions with GD2

2018-06-01 Thread Atin Mukherjee
Thanks Nigel for initiating this email. This pending (and critical) task is one we need to get back to and take to completion, as gaining confidence that overall gluster features work with GD2 is very important for us. It's just that during 4.1 the team was busy finishing some of the

Re: [Gluster-devel] trash.t failure

2018-04-18 Thread Atin Mukherjee
patch. > Please re-land the patch with any fixes as a fresh review. > Thanks Nigel. The patches waiting on the regression queue need to be rebased. Only doing a ‘recheck centos’ is not going to be helpful. > > On Wed, Apr 18, 2018 at 8:25 AM, Atin Mukherjee <amukh...@re

Re: [Gluster-devel] Regression with brick multiplex on demand

2018-04-17 Thread Atin Mukherjee
Super useful. Thanks Nigel (and Amar for the idea). On Tue, Apr 17, 2018 at 12:04 PM, Nigel Babu wrote: > Hello folks, > > In the past if you had a patch that was fixing a brick multiplex failure, > you couldn't test whether it actually fixed brick multiplex failures >

Re: [Gluster-devel] ./tests/basic/md-cache/bug-1418249.t failing

2018-03-26 Thread Atin Mukherjee
We have more problems: nl-cache.t is also failing. I think we need to make sure that when a patch changes any of the group profiles, a full regression is triggered. On Mon, 26 Mar 2018 at 09:50, Susant Palai wrote: > Sent a patch here -

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-16 Thread Atin Mukherjee
On Fri, Mar 16, 2018 at 11:03 AM, Vijay Bellur <vbel...@redhat.com> wrote: > > > On Wed, Mar 14, 2018 at 9:48 PM, Atin Mukherjee <amukh...@redhat.com> > wrote: > >> >> >> On Thu, Mar 15, 2018 at 9:45 AM, Vijay Bellur <vbel...@redhat.com> wr

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-14 Thread Atin Mukherjee
02:25 PM, Vijay Bellur wrote: >> >> >> >> >> >> On Tue, Mar 13, 2018 at 4:25 AM, Kaleb S. KEITHLEY >> >> <kkeit...@redhat.com> wrote: >> >> >> >> On 03/12/2018 02:32 PM, Shyam Rangan

Re: [Gluster-devel] Announcing Softserve- serve yourself a VM

2018-03-12 Thread Atin Mukherjee
On Wed, Feb 28, 2018 at 6:56 PM, Deepshikha Khandelwal wrote: > Hi, > > We have launched the alpha version of SOFTSERVE[1], which allows Gluster > Github organization members to provision virtual machines for a specified > duration of time. These machines will be deleted

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-12 Thread Atin Mukherjee
On Mon, Mar 12, 2018 at 5:51 PM, Amar Tumballi wrote: > Hi all, > > Below is the proposal which most of us in maintainers list have agreed > upon. Sharing it here so we come to conclusion quickly, and move on :-) > > --- > > Until now, Gluster project’s releases followed
