Pranith Kumar Karampuri pkara...@redhat.com wrote:
> The following tests keep failing spuriously nowadays. I CCed the
> glusterd folks, the original author (Kritika), and the last-change
> author (Emmanuel).
I did not suspect it was my change. The strange bit is that the same
wrappers in other tests do …
Hi,
we've been working on triaging bugs filed against unsupported versions.
I think we would do our community users a favour if we had a
"deprecated" or "old unsupported" version entry in Bugzilla. There
should be no need for users to select old versions that we no longer
update.
There are still many bugs …
Justin Clift jus...@gluster.org wrote:
> I've just used that page to disconnect slave25, so you're fine to
> investigate there (same login credentials as before). Please reconnect
> it when you're done. :)
I have made a build for testing on slave25:~root/manu20141109.
But the problem with spurious …
On 11/08/2014 05:21 AM, Justin Clift wrote:
> On Wed, 05 Nov 2014 14:58:06 +0530
> Atin Mukherjee amukh...@redhat.com wrote:
> <snip>
> Can there be any cases where a glusterd instance may go down
> unexpectedly without a crash?
[1] http://build.gluster.org/job/rackspace-regression-2GB-triggered
On 11/10/2014 01:04 AM, Emmanuel Dreyfus wrote:
> Justin Clift jus...@gluster.org wrote:
> > I've just used that page to disconnect slave25, so you're fine to
> > investigate there (same login credentials as before). Please reconnect
> > it when you're done. :)
Since I could spot nothing, I …
Pranith Kumar Karampuri pkara...@redhat.com wrote:
> Since I could spot nothing, I reconnected it. I will try submitting
> a change with set -x for that script.
> It was consistently happening with my change just on the regression
> machine, so I added set -x and submitted the change. Let's see …
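For anyone following along: set -x makes bash echo every command to
stderr (with a '+' prefix) before running it, so the exact failing step
shows up in the regression console log. A minimal illustration of the
idea, not the actual Gluster test script:

  #!/bin/bash
  # With tracing on, each command below is printed before it runs,
  # so a spurious failure's last traced line points at the culprit.
  set -x
  mount_point=/tmp/demo-mount      # hypothetical path for illustration
  mkdir -p "$mount_point"
  ls "$mount_point"
  set +x                           # turn tracing back off for quieter sections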
On 11/08/2014 08:19 AM, Peter Auyeung wrote:
> I have a node down while gfs is still open for writing.
> I got tons of heal-failed entries on a replicated volume, showing up
> as GFIDs. I tried the gfid-resolver and got the following:
> # ./gfid-resolver.sh /brick02/gfs/ 88417c43-7d0f-4ec5-8fcd-f696617b5bc1
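For context on what such a resolver does: on a brick, every file gets an
extra hardlink under .glusterfs/<first two hex chars of the gfid>/<next
two>/<full gfid>, while directories are stored there as symlinks. Under
those assumptions, a GFID can be mapped back to a path roughly like the
sketch below (this is not the actual gfid-resolver.sh):

  #!/bin/bash
  # Rough GFID-to-path lookup on a single brick.
  brick=$1    # e.g. /brick02/gfs
  gfid=$2     # e.g. 88417c43-7d0f-4ec5-8fcd-f696617b5bc1

  entry="$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"
  if [ -h "$entry" ]; then
      # Directory GFIDs are symlinks pointing back into the brick.
      readlink "$entry"
  else
      # File GFIDs are hardlinks: list the other names of the same
      # inode, skipping the .glusterfs tree itself.
      find "$brick" -path "$brick/.glusterfs" -prune -o \
           -samefile "$entry" -print
  fi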
Heal-failed can be for any reason that's not defined as split-brain. The
only place I've been able to find clues is in the log files. Look at the
timestamp on the heal-failed output and match it to log entries in the
glustershd logs.
On November 7, 2014 6:49:36 PM PST, Peter Auyeung …
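In practice that correlation can be as simple as grepping the self-heal
daemon log for the minute in question. A sketch, assuming the default
log location; the timestamp below is a made-up example:

  #!/bin/bash
  # Pull glustershd log lines matching a heal-failed timestamp,
  # trimmed to the minute so nearby entries match too.
  ts="2014-11-08 05:21"   # from "gluster volume heal <vol> info heal-failed"
  grep -F "$ts" /var/log/glusterfs/glustershd.log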