p behavior.
>>>>> Otherwise, considering we have more than 10 people with the ability to merge patches,
>>>>> many of them may miss clang-format issues.
>>>>>
>>>>
>>>> I agree that it could be hard to enforce some rules,
Hi folks,
We have initiated the migration process today. All the patch owners are
requested to move their existing patches from Gerrit[1] to Github[2].
The changes we brought in with this migration:
- The 'devel' branch[3] is the new default branch on GitHub, to get away
from master/slave language.
Hi folks,
Now that glusterfs-8.2 is out, we want to go ahead with the plan to move
to GitHub.
We are planning to do that in several steps, but the gist of the plan
is the following:
- Starting sometime this week, we will ask people to resubmit
their patches as PRs on GitHub. We are not moving them automatically.
-smoke
- gluster_glusto
- gluster_kubernetes
- gluster_libgfapi-python
- gluster_run-tests-in-vagrant
- glusterfs-regression
I'm looking for the job owners who can confirm whether we can remove these jobs.
On Thu, Jul 23, 2020 at 4:28 PM Yaniv Kaul wrote:
>
>
> On Thu, Jul 23, 2020 at 1:04
FYI, we have the list of jobs running under
https://ci.centos.org/view/Gluster/
Please take a look and start to clean up the ones which are not required.
-- Forwarded message -
From: Vipul Siddharth
Date: Thu, Jun 25, 2020 at 1:50 PM
Subject: [Ci-users] Call for migration to new o
Hi Emmanuel,
You need to have a 'Type: Bug' label on the linked GitHub issue. This should fix
the failing test.
To retrigger the smoke tests, comment 'recheck smoke' on your PR.
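For reference, a minimal sketch of both steps with the GitHub CLI; it assumes
'gh' is available, and the issue/PR numbers below are placeholders, not the
real ones from this thread:

    # Hypothetical numbers; substitute the actual issue and PR.
    gh issue edit 1234 --repo gluster/glusterfs --add-label "Type: Bug"
    gh pr comment 5678 --repo gluster/glusterfs --body "recheck smoke"

The 'recheck smoke' comment retriggers the smoke run once the label is in place.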
On Mon, Jun 29, 2020 at 6:08 PM Emmanuel Dreyfus wrote:
> Hello
>
> After a long absence, I tried to upgrade gluster
Hi All,
We have migrated the build-jobs repo from Gerrit[1] to GitHub[2]. It is a
repository for automatically configuring the Jenkins jobs of Gluster (and other
Gluster-related projects). The automation to update Jenkins and the review
process is in place.
The build-jobs repo on Gerrit is now in 'Read Only' mode.
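If you want to sanity-check a job change locally before sending a PR, here is a
rough sketch; it assumes the repo holds Jenkins Job Builder YAML under a jobs/
directory, which is an assumption about the layout rather than something stated
above:

    # Render the job definitions to XML without touching the live Jenkins.
    pip install --user jenkins-job-builder
    jenkins-jobs test jobs/ -o /tmp/rendered-jobs
    ls /tmp/rendered-jobs    # one XML file per generated job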
Hi everyone,
We have added an option on Softserve[1] to self-provision a centos8
instance. It should help in debugging the centos8-related regression
failures[2].
Also, please keep updating the issue[3] with any improvements or changes
that fix the failing test suite.
Let us know if you see
Hi everyone,
We have migrated most of the current upstream bugs (attached below) from the
GlusterFS community Bugzilla product to GitHub issues.
1. All the issues created as part of the migration will have the Bugzilla URL,
description, comment history, and GitHub labels ('Migrated', 'Type: Bug',
Prio) w
https://ci.centos.org/job/gluster_gd2-nightly-rpms and
https://ci.centos.org/job/gluster_glusterd2 have been deleted from centos
CI.
On Tue, Feb 18, 2020 at 10:25 AM Aravinda VK wrote:
> We can stop this job.
>
> regards
> Aravinda
>
> > On 18-Feb-2020, at 5:53 AM, Sankarshan Mukhopadhyay <
> sa
Hi,
For the last few days, we have been trying to set up and stabilize the
regression test suite on the CentOS 8 builders.
10 tests are failing (a sketch for running one locally follows the list):
./tests/basic/afr/entry-self-heal.t
./tests/basic/afr/split-brain-healing-ctime.t
./tests/basic/afr/split-brain-healing.t
./tests/basic/playground/template
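For anyone trying to reproduce one of these failures locally, a rough sketch;
it assumes a built and installed glusterfs source tree on a CentOS 8 machine
(the tests need root), with run-tests.sh and prove as the usual drivers for
these .t scripts:

    # Run a single regression test from the top of the source tree.
    ./run-tests.sh ./tests/basic/afr/entry-self-heal.t
    # or drive the script directly:
    prove -vf ./tests/basic/afr/entry-self-heal.t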
On Sat, Jan 4, 2020 at 9:55 AM Amar Tumballi wrote:
>
>
> On Fri, Jan 3, 2020 at 10:10 PM Yaniv Kaul wrote:
>
>>
>>
>> On Fri, Jan 3, 2020 at 4:07 PM Amar Tumballi wrote:
>>
>>> Hi Team,
>>>
>>> First thing first - Happy 2020 !! Hope this year will be great for all
>>> of us :-)
>>>
>>> Few req
On Tue, Oct 15, 2019 at 10:57 AM Aravinda Vishwanathapura Krishna Murthy <
avish...@redhat.com> wrote:
> Centos CI was voting Glusterd2 repo's pull requests.
>
> https://github.com/gluster/centosci/blob/master/jobs/glusterd2.yml
>
> On Mon, Oct 14, 2019 at 8:31 PM Amar Tumballi wrote:
>
>>
>>
>>
Hi,
We have planned an upgrade of the Jenkins instance build.gluster.org today to
the newer stable version, so as to pull in the latest plugin updates, which
fix security vulnerabilities. The upgrade will stop all running jobs, and the
instance will be unavailable during the downtime window.
The downtime window will be fr
On Tue, Jul 9, 2019 at 5:34 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:
>
>
> On Thu, Jul 4, 2019 at 9:55 PM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
>>
>>
>> On Thu, Jul 4, 2019 at 9:37 PM Michael Scherer
>> wrote:
>>
>>> Hi,
>>>
>>> I have upgraded for testin
Misc, was EPEL recently installed on the builders?
Can you please resolve the 'Why EPEL on builders?' question? EPEL+python3 on
the builders does not seem like a good option to have.
On Thu, Jun 20, 2019 at 6:37 PM Michael Scherer wrote:
> On Thursday, 20 June 2019 at 08:38 -0400, Kaleb Keithley wrote:
> > On Thu,
Regression on the release-5 branch is also failing because of this. Can we
backport Kotresh's patch https://review.gluster.org/#/c/glusterfs/+/22829/ to
these branches?
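For reference, a generic sketch of the usual Gerrit backport flow; <sha> is a
placeholder for the merged commit, and this is not necessarily how the
maintainers will handle it:

    git fetch origin
    git checkout -b backport-22829 origin/release-5
    # -x records the original commit in the message; the existing Change-Id
    # lets Gerrit link the backport to the original change.
    git cherry-pick -x <sha>
    git push origin HEAD:refs/for/release-5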
On Mon, Jun 24, 2019 at 6:06 PM Anoop C S wrote:
> On Fri, 2019-06-07 at 10:24 +0530, Deepshikha Khandelwal wrote
On Thu, Jun 20, 2019 at 3:20 PM Niels de Vos wrote:
> On Thu, Jun 20, 2019 at 02:56:51PM +0530, Amar Tumballi Suryanarayan wrote:
> > On Thu, Jun 20, 2019 at 2:35 PM Niels de Vos wrote:
> >
> > > On Thu, Jun 20, 2019 at 02:11:21PM +0530, Amar Tumballi Suryanarayan
> wrote:
> > > > On Thu, Jun 20
Hello,
review.gluster.org has been down since morning. We are looking into the issue.
We will update once it is back.
On Thu, Jun 13, 2019 at 10:06 AM Kaleb Keithley wrote:
>
>
> On Wed, Jun 12, 2019 at 8:13 PM Deepshikha Khandelwal
> wrote:
>
>>
>> On Thu, Jun 13, 2019 at 4:41 AM Kaleb Keithley
>> wrote:
>>
>>>
>>>
>>> On Wed, Jun 12, 2019 a
Thanks, Atin, for pointing this out. Yes, I'll upgrade the matching build
tool on the builders once Coverity is upgraded on scan.coverity.com.
On Thu, Jun 13, 2019 at 8:01 AM Atin Mukherjee
wrote:
> Fyi..no scan for 3-4 days starting from June 17th for the upgrade. Post
> that we may have to do some ch
On Thu, Jun 13, 2019 at 4:41 AM Kaleb Keithley wrote:
>
>
> On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley
> wrote:
>
>>
>> On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan <
>> atumb...@redhat.com> wrote:
>>
>>>
>>> We recently noticed that in one of the package update on builder (i
Hi Yaniv,
I'm working on it. Looking at the logs and further health checks, I did
not find any memory-issue tracebacks on builder201.
On Mon, Jun 10, 2019 at 12:12 PM Yaniv Kaul wrote:
> From [1], we can see that non-root-unlink-stale-linkto.t failed on:
> useradd: /etc/passwd.30380: Cannot a
Hi Yaniv,
We are working on this. The builders are picking up python3.6, which is
leading to missing modules and similar undefined errors.
Kotresh has sent a patch https://review.gluster.org/#/c/glusterfs/+/22829/
to fix the issue.
On Thu, Jun 6, 2019 at 11:49 AM Yaniv Kaul wrote:
> From [1].
>
I recently added three builders, builder208, builder209 and builder210, to the
regression pool. Networking on these new builders did not come up because, on
reboot, the configuration was looking for a non-existent ethernet card eth0 and
hence failing. I'll reconnect them and update here once I fix the issue
today.
Sorry fo
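For context, a sketch of one common way to diagnose and work around a
missing-eth0 problem on such machines; the device name ens5 is only a guess,
and this is not necessarily the fix that was applied:

    # Find the interface name the kernel actually assigned.
    ip -o link show
    # Option 1: point the config at the real device (ens5 is hypothetical)
    # and fix DEVICE/NAME inside the file accordingly.
    mv /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-ens5
    systemctl restart network
    # Option 2: add net.ifnames=0 biosdevname=0 to the kernel command line so
    # the card shows up as eth0 again, then rebuild the grub config and reboot.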
Any updates on this?
It's failing a few of the regression runs.
On Sat, May 18, 2019 at 6:27 PM Mohit Agrawal wrote:
> Hi Rafi,
>
> I have not checked yet, on Monday I will check the same.
>
> Thanks,
> Mohit Agrawal
>
> On Sat, May 18, 2019 at 3:56 PM RAFI KC wrote:
>
>> All of this links have
>>> failed on builder204 for similar reasons, I believe?
>>>
>>> I am bit more worried on this issue being resurfacing more often these
>>> days. What can we do to fix this permanently?
>>>
>>>
>>>> [1] https://build.gluster.org/job/centos7-regression/59
Sanju, can you please give us more info about the failures?
I see the failures occurring on just one of the builders (builder206). I'm
taking it offline again for now.
On Tue, May 7, 2019 at 9:42 PM Michael Scherer wrote:
> On Tuesday, 7 May 2019 at 20:04 +0530, Sanju Rakonde wrote:
> > Looks lik
This list also captures the BZs which are in the NEW state. The search
description goes like this (a query sketch follows the list):
- *Status:* NEW
- *Product:* GlusterFS
- *Changed:* (is greater than or equal to) -4w
- *Creation date:* (changed after) -4w
- *Keywords:* (does not contain the string) Triaged
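A rough shell approximation of that saved search against the Bugzilla REST
API; product/status filtering happens server-side, while the Triaged-keyword
exclusion is applied client-side with jq, so this is a sketch rather than an
exact reproduction of the query above:

    SINCE=$(date -u -d '4 weeks ago' +%Y-%m-%dT%H:%M:%SZ)
    curl -s "https://bugzilla.redhat.com/rest/bug?product=GlusterFS&status=NEW&creation_time=${SINCE}&include_fields=id,summary,keywords" \
      | jq '.bugs[] | select(.keywords | index("Triaged") | not) | {id, summary}'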
On Sun, A
The above test has been failing all day today because rpc-statd goes into an
inactive state on the builders.
I've been trying to enable this service, which goes into a dead state after
every run (a command sketch follows the checklist below):
1. rpcbind and rpcbind.socket are running on the builders
2. ipv6 is disabled.
3. nfs-server is also working fine
4. is_nfs_ex
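A minimal sketch of those checks on a builder, assuming the systemd unit names
rpcbind, rpcbind.socket, rpc-statd and nfs-server (adjust if the builders use
different names):

    systemctl status rpcbind rpcbind.socket nfs-server   # checks 1 and 3
    cat /proc/sys/net/ipv6/conf/all/disable_ipv6         # check 2: 1 means disabled
    systemctl enable --now rpc-statd                     # try to keep statd running
    journalctl -u rpc-statd --since today                # why it went inactive/dead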
Hello,
I’ve planned to do an upgrade of build.gluster.org tomorrow morning, so as
to install and pull in the latest security upgrades of the Jenkins plugins.
I’ll stop all the running jobs and re-trigger them once I'm done with the
upgrade.
The downtime window will be from:
UTC: 0330 to 0400
IST:
Hello,
I had to do an unplanned Jenkins restart. Jenkins was not responding to any
requests and was not giving back the regression votes.
I did update the Verified vote values of the regression jobs, which seemed to
have changed to 0 all of a sudden so that the vote was not being given back. I'm
investigating
On Fri, Apr 5, 2019 at 12:16 PM Michael Scherer wrote:
> On Thursday, 4 April 2019 at 18:24 +0200, Michael Scherer wrote:
> > On Thursday, 4 April 2019 at 19:10 +0300, Yaniv Kaul wrote:
> > > I'm not convinced this is solved. Just had what I believe is a
> > > similar
> > > failure:
> > >
> > > *00
Hello folks,
Softserve is deployed back today on the AWS stack to loan CentOS machines for
regression testing. I've tested it a few times today to confirm it works
as expected. In the past, Softserve[1] machines would be a clean CentOS 7
image. Now, we have an AMI image with all the dependencies i
Hello,
I’ve planned to do an upgrade of build.gluster.org tomorrow morning, so as
to install and pull in the latest security upgrades of the Jenkins plugins.
I’ll stop all the running jobs and re-trigger them once I'm done with the
upgrade.
The downtime window will be from:
UTC: 0330 to 0400
IST:
019 at 5:38 PM Deepshikha Khandelwal
> wrote:
> >
> > Hello,
> >
> > Today while debugging the centos7-regression failed builds I saw most of
> the builders did not pass the instance status check on AWS and were
> unreachable.
> >
> > Misc investigate
Hello,
Today, while debugging the failed centos7-regression builds, I saw that most of
the builders did not pass the instance status check on AWS and were
unreachable.
Misc investigated this and found the patch[1] which seems to
break the builders one after another. They all ran the regres
Hi,
Now that all the RAX builders have been moved to AWS, we are planning to
migrate the Softserve application to AWS completely.
As a part of this migration, we're bringing down Softserve for a few
days, so Softserve will not be able to lend any more machines until we
are ready with the migrated code.
Thank
On Thu, Oct 4, 2018 at 6:10 AM Sanju Rakonde wrote:
>
>
>
> On Wed, Oct 3, 2018 at 3:26 PM Deepshikha Khandelwal
> wrote:
>>
>> Hello folks,
>>
>> Distributed-regression job[1] is now a part of Gluster's
>> nightly-master build pipeline. The fol
any other issues, please file a bug[3].
[1]: https://build.gluster.org/job/distributed-regression
[2]: https://build.gluster.org/job/distributed-regression/264/console
[3]:
https://bugzilla.redhat.com/enter_bug.cgi?product=glusterfs&component=project-infrastructure
Thanks,
Deepshikha Kha
Gerrit is now upgraded to the newer version and is back online.
Please file a bug if you face any issues.
On Tue, Aug 7, 2018 at 11:53 AM Nigel Babu wrote:
>
> Reminder, this upgrade is tomorrow.
>
> -- Forwarded message -
> From: Nigel Babu
> Date: Fri, Jul 27, 2018 at 5:28 PM
>
Shyam,
Thank you for pointing this out. I've updated the logs for the bug-990028.t test.
On Wed, Jul 18, 2018 at 8:40 PM Shyam Ranganathan wrote:
>
> On 07/18/2018 10:51 AM, Shyam Ranganathan wrote:
> > On 07/18/2018 05:42 AM, Deepshikha Khandelwal wrote:
> >> Hi all,
>
ts.
[1] https://build.gluster.org/job/distributed-regression/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1602282
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1602262
Thanks,
Deepshikha Khandelwal
Hi folks,
The issue[1] has been resolved. Now the Softserve instances will
have 2GB RAM, i.e. the same as the Jenkins builders' sizing
configuration.
[1] https://github.com/gluster/softserve/issues/40
Thanks,
Deepshikha Khandelwal
On Fri, Jul 6, 2018 at 6:14 PM, Karthik Subrah
onvenience.
[1] https://github.com/gluster/softserve/issues/40
Thanks,
Deepshikha Khandelwal
On Fri, Jul 6, 2018 at 3:44 PM, Karthik Subrahmanya wrote:
> Thanks Poornima for the analysis.
> Can someone work on fixing this please?
>
> ~Karthik
>
> On Fri, Jul 6, 2018 at 3:17
rg/job/distributed-regression
Regards,
Deepshikha Khandelwal
Hi,
We have launched the alpha version of SOFTSERVE[1], which allows Gluster
Github organization members to provision virtual machines for a specified
duration of time. These machines will be deleted automatically afterwards.
Now you don’t need to file a bug to get VM. It’s just a form away with
abandoned reviews[2].
If you find anything odd or have any queries, please feel free to ask.
[1]https://build.gluster.org/job/close-old-reviews/
[2]https://build.gluster.org/job/close-old-reviews/7/console
Regards,
Deepshikha Khandelwal
& Regards,
Deepshikha Khandelwal
you
have any feedback or questions, please feel free to get in touch with us.
[1] https://build.gluster.org/job/cppcheck/
[2] https://build.gluster.org/job/clang-scan/
Thanks & Regards,
Deepshikha Khandelwal