> , volume status and all other gluster basic
> commands go through. I also inspected the processes and I don't see any
> sign of a hung process.
>
> So the mystery continues, and we need to see why the test script is not
> moving forward at all.
>
An additional thing that could be interesting: the CLI still processes the
callback and exits cleanly after the dispatch error:

[event-epoll.c:759:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-01-16 16:28:49.223376] I [cli-rpc-ops.c:1448:gf_cli_start_volume_cbk]
0-cli: Received resp to start volume <=== successfully processed the
callback
[2019-01-16 16:28:49.223668] I [input.c:31:cli_batch] 0-: Exiting with: 0

We should clean up all
those issues. They are adding noise to debugging the real issue(s) and
polluting our logs.
Y.
> [2019-01-08 16:36:43.824437] I [input.c:31:cli_batch] 0-: Exiting with: 0
>
>
> Bad run:
>
> [2019-01-08 16:36:43.940361] I [cli.c:834:main] 0-cli: Started running
> gluster with version 6dev
> [2019-01-08 16:36:44.147364] I [MSGID: 101190]
> [event-epoll.c:675:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2019-01-08 16:36:44.147583] E [MSGID: 101191]
> [event-epoll.c:759:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
> handler
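To compare good and bad runs in bulk, a quick grep over the cli logs can tell
us how many runs hit the dispatch error yet still exited cleanly. This is only
a sketch: the log path and sample lines below are fabricated for illustration,
not taken from an actual regression run.

```shell
# Hypothetical cli.log sample (content is an assumption for illustration).
cat > /tmp/cli-sample.log <<'EOF'
[2019-01-08 16:36:44.147364] I [MSGID: 101190] [event-epoll.c:675:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-01-08 16:36:44.147583] E [MSGID: 101191] [event-epoll.c:759:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
[2019-01-08 16:36:43.824437] I [input.c:31:cli_batch] 0-: Exiting with: 0
EOF

# A run is suspicious if it logged the dispatch error but still exited with 0.
errors=$(grep -c 'Failed to dispatch handler' /tmp/cli-sample.log)
exits=$(grep -c 'Exiting with: 0' /tmp/cli-sample.log)
echo "dispatch errors: $errors, clean exits: $exits"
```

Running the same two greps over the archived logs of all failed runs would
show whether the dispatch error correlates with the hangs at all.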
In glusterd.log it seems as if it hasn't received any status request.
To: "Shyam Ranganathan"
Cc: "Gluster Devel"
Sent: Saturday, January 12, 2019 6:46:20 PM
Subject: Re: [Gluster-devel] Regression health for release-5.next and release-6
Previous logs were related to the client, not the bricks; below are the brick logs:
[2019-01-12 12:25:25.893485]:++
G_LOG:./tests/bugs/ec/bug-1236065.t: TEST: 68 rm -f 0.o 10.o 11.o 12.o 13.o
14.o 15.o 16.o 17.o 18.o 19.o 1.o 2.o 3.o 4.o 5.o 6.o 7.o 8.o 9.o ++
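Those G_LOG markers that the test framework injects into the brick logs make
it possible to attribute brick-side errors to a specific TEST step. A small
awk sketch (the sample log content below is invented for illustration; the
marker format mimics the excerpt above):

```shell
# Hypothetical brick-log sample with G_LOG test markers (invented content).
cat > /tmp/brick-sample.log <<'EOF'
[2019-01-12 12:25:25.893485]:++++++++++ G_LOG:./tests/bugs/ec/bug-1236065.t: TEST: 68 rm -f 0.o 1.o ++++++++++
[2019-01-12 12:25:26.000001] E [some-xlator.c:10:fn] 0-vol: example error
[2019-01-12 12:25:27.000000]:++++++++++ G_LOG:./tests/bugs/ec/bug-1236065.t: TEST: 69 next step ++++++++++
EOF

# Print only the brick-log lines logged between TEST 68 and the next marker.
awk '/G_LOG:.*TEST: 68 /{inblk=1; next} /G_LOG:/{inblk=0} inblk' /tmp/brick-sample.log
```

That narrows the noise down to whatever the bricks logged while the failing
TEST step was running.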
The message "I [MSGID: 101016] [glus
For the specific failure in "add-brick-and-validate-replicated-volume-options.t" I have
posted a patch: https://review.gluster.org/22015.
For the test case "ec/bug-1236065.t" I think the issue needs to be checked by the EC
team.
On the brick side, it is showing the logs below:

on wire in the future [Invalid argument]
We can check health on master post the patch as stated by Mohit below.
Release-5 is causing some concerns as we needed to tag the release
yesterday, but we have the following 2 tests failing or dumping core
pretty regularly; they need attention:
- ec/bug-1236065.t
- glusterd/add-brick-and-validate-replicated-volume-options.t
That is a good point, Mohit, but do we know how many of these tests failed
because of 'timeout'? If most of these are due to timeouts, then yes, it
may be a valid point.
-Amar
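The timeout question above could be answered by tallying the failure reasons
across the recent regression runs. A hedged sketch, assuming we have (or can
build) a per-run summary file listing each failed test and its reason; the
file name and line format here are invented, not an actual Jenkins artifact:

```shell
# Hypothetical regression summary: "<test> FAIL <reason>" per line (assumed
# format for illustration only).
cat > /tmp/runs.txt <<'EOF'
./tests/bugs/ec/bug-1236065.t FAIL timeout
./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t FAIL core
./tests/bugs/ec/bug-1236065.t FAIL timeout
EOF

# Split the failures into timeouts vs everything else.
timeouts=$(grep -c 'FAIL timeout' /tmp/runs.txt)
others=$(grep -vc 'timeout' /tmp/runs.txt)
echo "timeout failures: $timeouts, other failures: $others"
```

If the timeout bucket dominates, that would support attributing the failures
to the added delay rather than to the tests themselves.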
On Thu, Jan 10, 2019 at 4:51 PM Mohit Agrawal wrote:
I think we should consider regression builds after the patch
(https://review.gluster.org/#/c/glusterfs/+/21990/) was merged,
as we know this patch introduced some delay.
Thanks,
Mohit Agrawal
On Thu, Jan 10, 2019 at 3:55 PM Atin Mukherjee wrote:
Mohit, Sanju - request you to investigate the failures related to glusterd
and brick-mux and report back to the list.
On Thu, Jan 10, 2019 at 12:25 AM Shyam Ranganathan wrote:
Hi,
As part of branching preparation next week for release-6, please find
test failures and respective test links here [1].
The top tests that are failing/dumping core are as below and need attention:
- ec/bug-1236065.t
- glusterd/add-brick-and-validate-replicated-volume-options.t
- readdir-ahead