On Thu, Jan 11, 2018 at 10:15 AM, Nigel Babu <nig...@redhat.com> wrote:

> Hello folks,
>
> We may have been a little too quick to blame yesterday's Jenkins
> failures on Meltdown. In any case, we've opened a ticket with our provider
> and they're looking into the failures. I've looked at the last 90 jobs to
> get a comprehensive picture of the failures.
>
> Total Jobs: 90
> Failures: 62
> Failure Percentage: 68.89%
>
> I've analyzed the individual failures and categorized them as well.
>
> slave28.cloud.gluster.org failure: 9
> Geo-replication failures: 12
> Fops-during-migration.t: 4
> Compilation failures: 3
> durability-off.t failures: 7
>
> These alone account for 35 of the failures. The slave28 failures were due
> to the machine running out of disk space: a very large binary had been
> archived from an experimental-branch build failure. I've cleared that core
> out and this is now fixed. The geo-replication failures were due to the
> geo-rep tests depending on root's .bashrc having the PATH variable
> modified. This was not a standard setup and therefore didn't work on many
> machines. It has now been fixed. The remaining 3 were transient failures,
> either limited to a particular review or a temporary bustage on master.
> The majority of the recent failures had more to do with infra than with
> the tests themselves.
>

While Nigel tells me that the infra-related problems for the geo-rep tests
are already fixed, I see a similar failure popping up in
https://build.gluster.org/job/centos6-regression/8356/console .

@Kotresh - Could you check if this is something different?


> I'm therefore cautiously moving with the assumption that the impact of
> KPTI patch is minimal so far.
>
> --
> nigelb
>
> _______________________________________________
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
_______________________________________________
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-infra