----- Original Message -----
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Vijaikumar Mallikarjuna vmall...@redhat.com, Gluster Devel
gluster-devel@gluster.org
Sent: Monday, May 4, 2015 11:21:32 AM
Subject: [Gluster-devel] regarding spurious failure
Hi All,
There are about 70+ dependent bugs in New and Assigned states on the
3.7.0 tracker [1]. I suspect that a good number of them need a state
change to reflect the current status. If you happen to own a bug or have
sent across a patch for any in the list, can you please update the
On Mon, May 04, 2015 at 09:20:45AM +0530, Atin Mukherjee wrote:
[2015-05-04 03:43:50.309769] E [socket.c:823:__socket_server_bind]
4-tcp.patchy-server: binding to failed: Address already in use
It seems to even have trouble displaying it. I will have a look.
--
Emmanuel Dreyfus
----- Original Message -----
From: Sachin Pandit span...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Monday, May 4, 2015 11:31:01 AM
Subject: Re: [Gluster-devel] regarding spurious failure
Hi All,
There has been a spate of regression test failures (due to broken tests
or race conditions showing up) in the recent past [1] and I am inclined
to block 3.7.0 GA along with acceptance of patches until we fix *all*
regression test failures. We seem to have reached a point where this
----- Original Message -----
From: Sachin Pandit span...@redhat.com
To: Emmanuel Dreyfus m...@netbsd.org
Cc: gluster-devel@gluster.org
Sent: Monday, April 27, 2015 10:58:21 AM
Subject: Re: [Gluster-devel] NetBSD regression status upate
----- Original Message -----
From: Emmanuel
Hi all,
[TLDR; jump down to the list of xlators and see if they are in the right
package, please reply with corrections]
Many new features introduce new xlators. It is not always straightforward
to see whether an xlator is intended for server-side usage, client-side
usage, or maybe even both. There
Hi Pranith,
Could you please provide a regression instance where the snapshot tests
failed. I had a look at
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8148/consoleFull
but, the logs for bug-1162498.t are not present for that instance.
Similarly other instances recorded
On Mon, May 04, 2015 at 09:20:45AM +0530, Atin Mukherjee wrote:
I see the following log from the brick process:
[2015-05-04 03:43:50.309769] E [socket.c:823:__socket_server_bind]
4-tcp.patchy-server: binding to failed: Address already in use
This happens before the failing test 52 (volume
On Mon, May 04, 2015 at 02:13:19PM +0530, Kaushal M wrote:
io-threads should be in the client package. It is possible to have
io-threads on the client by enabling the performance.client-io-threads
option. This is not a common option, but someone could use it.
In that case, it should probably be
On 05/04/2015 12:31 PM, Sachin Pandit wrote:
----- Original Message -----
From: Sachin Pandit span...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Monday, May 4, 2015 11:31:01 AM
Subject: Re: [Gluster-devel] regarding spurious
On 05/04/2015 01:44 PM, Avra Sengupta wrote:
Hi Pranith,
Could you please provide a regression instance where the snapshot
tests failed. I had a look at
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8148/consoleFull
but, the logs for bug-1162498.t are not present for that
On Mon, May 04, 2015 at 03:24:38AM -0400, Sachin Pandit wrote:
83 TEST_IN_LOOP ! fd_write $i content
(...)
In this test case we are writing content into the files (fd's).
But for some reason the data content is not being written
(...)
We suspect this could be because of NFS client
I have faced a similar issue of data not being written to file with
geo-rep setup. It is aux-gfid-mount though. The root cause could
be same for both. In geo-rep file creation phase is successful
but the data (rsync) was hung with no data being synced.
I have not RCA'ed it yet. I will try to
On Mon, May 04, 2015 at 11:33:45AM +0530, Vijay Bellur wrote:
Hi All,
There are about 70+ dependent bugs in New and Assigned states on the 3.7.0
tracker [1]. I suspect that a good number of them need a state change to
reflect the current status. If you happen to own a bug or have sent across
Kotresh Hiremath Ravishankar khire...@redhat.com wrote:
I have faced a similar issue of data not being written to file with
geo-rep setup.
Just on NetBSD, or also on Linux?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
Thanks Vijay! I forgot to upgrade the kernel (thinp 6.6 perf bug, gah)
before I created this data set, so it's a bit smaller:
total threads = 16
total files = 7,060,700 (64 kb files, 100 files per dir)
total data = 430.951 GB
88.26% of requested files processed, minimum is 70.00
10101.355737
On Mon, May 04, 2015 at 07:57:23PM +0530, Raghavendra Talur wrote:
On Thursday 30 April 2015 02:28 PM, Kaushal M wrote:
The log file is set by default to
prefix/var/log/etc-glusterfs-glusterd.vol.log, even when running in
`-N` mode. Only when running with `--debug` is the log itself redirected
On May 4, 2015 20:17, Niels de Vos nde...@redhat.com wrote:
On Mon, May 04, 2015 at 07:57:23PM +0530, Raghavendra Talur wrote:
On Thursday 30 April 2015 02:28 PM, Kaushal M wrote:
The log file is set by default to
prefix/var/log/etc-glusterfs-glusterd.vol.log, even when running in
I see:
#define GF_DECIDE_DEFRAG_THROTTLE_COUNT(throttle_count, conf) {        \
        throttle_count = MAX ((get_nprocs() - 4), 4);                   \
Gluster: Software {re}defined storage
is one I really like. I wouldn't want to eliminate Gluster completely as
newcomers would then wonder about the binaries, package names etc. The tagline
speaks to the fact that we've taken the time to consider some of the common
pitfalls of storage and makes things
I agree completely. This is the one that speaks volumes in just three words.
On 05/04/2015 09:08 AM, Josh Boon wrote:
Gluster: Software {re}defined storage
is one I really like. I wouldn't want to eliminate Gluster completely
as newcomers would then wonder about the binaries, package names
On 4 May 2015, at 08:06, Vijay Bellur vbel...@redhat.com wrote:
Hi All,
There has been a spate of regression test failures (due to broken tests or
race conditions showing up) in the recent past [1] and I am inclined to block
3.7.0 GA along with acceptance of patches until we fix *all*
On 05/05/2015 12:58 AM, Justin Clift wrote:
On 4 May 2015, at 08:06, Vijay Bellur vbel...@redhat.com wrote:
Hi All,
There has been a spate of regression test failures (due to broken tests or race
conditions showing up) in the recent past [1] and I am inclined to block 3.7.0
GA along with
On 05/05/2015 06:12 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 12:58 AM, Justin Clift wrote:
On 4 May 2015, at 08:06, Vijay Bellur vbel...@redhat.com wrote:
Hi All,
There has been a spate of regression test failures (due to broken
tests or race conditions showing up) in the recent
It looks like my issue was due to a change in the way name resolution is
now handled in 3.6.3. I'll send in an explanation tomorrow in case
anyone else is having a similar issue.
David
----- Original Message -----
From: David Robinson drobin...@corvidtec.com
To: gluster-us...@gluster.org
On 05/05/2015 08:10 AM, Jeff Darcy wrote:
Jeff's patch failed again with same problem:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4531/console
Wouldn't have expected anything different. This one looks like a
problem in the Jenkins/Gerrit infrastructure.
Sorry for the
hi Vijai/Sachin,
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8268/console
Doesn't seem like an obvious failure. Know anything about it?
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
On 05/05/2015 08:39 AM, Pranith Kumar Karampuri wrote:
Vijai/Sachin,
Did you get a chance to work on this?
http://review.gluster.com/10166 failed just now again in ec because
http://review.gluster.org/10069 was merged yesterday, which can lead to
the same problem. I sent
hi,
Doesn't seem like an obvious failure. It does say there is a
version mismatch; I wonder how? Could you look into it?
Gluster version mismatch between master and slave.
Geo-replication session between master and slave21.cloud.gluster.org::slave
does not exist.
[08:27:15]
Just saw two more failures in the same place for netbsd regressions. I
am ignoring NetBSD status for the test fixes for now. I am not sure how
this needs to be fixed. Please help!
Pranith
On 05/05/2015 07:17 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 06:12 AM, Pranith Kumar Karampuri
Vijai/Sachin,
Did you get a chance to work on this?
http://review.gluster.com/10166 failed just now again in ec because
http://review.gluster.org/10069 was merged yesterday, which can lead to
the same problem. I sent http://review.gluster.org/10539 to address the
issue for now. Please look
hi,
I fixed it along with the patch on which this test failed
@http://review.gluster.org/10391. Letting everyone know in case they
face the same issue.
Pranith
Jeff's patch failed again with same problem:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4531/console
Wouldn't have expected anything different. This one looks like a
problem in the Jenkins/Gerrit infrastructure.
Hi,
As already discussed, if you encounter failures in this or any other
snapshot tests, it would be great to provide the regression run instance
so that we can have a look at the logs if there are any. Also I tried
running the test in a loop as you suggested. After an hour and a half I stopped it, so
Geo-rep runs /usr/local/libexec/glusterfs/gverify.sh to compare the
gluster version between the master and slave volumes. It runs the following
command
gluster --version | head -1 | cut -f2 -d
locally on the master and over ssh on the slave.
If for some reason the version returned is an empty string, it could
On 05/05/2015 10:31 AM, Kotresh Hiremath Ravishankar wrote:
Geo-rep runs /usr/local/libexec/glusterfs/gverify.sh to compare
gluster version between master and slave volume. It runs following
command
gluster --version | head -1 | cut -f2 -d
locally in the master and over ssh in slave.
You can
On 05/05/2015 10:32 AM, Avra Sengupta wrote:
Hi,
As already discussed, if you encounter failures in this or any other
snapshot tests, it would be great to provide the regression run instance
so that we can have a look at the logs if there are any. Also I tried
running the test in a loop as you
On 05/05/2015 10:43 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 10:32 AM, Avra Sengupta wrote:
Hi,
As already discussed, if you encounter failures in this or any other
snapshot tests, it would be great to provide the regression run instance
so that we can have a look at the logs if there are any.
On 05/05/2015 10:48 AM, Avra Sengupta wrote:
On 05/05/2015 10:43 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 10:32 AM, Avra Sengupta wrote:
Hi,
As already discussed, if you encounter failures in this or any other
snapshot tests, it would be great to provide the regression run instance
so that we
Also, one of us should
go through the last however-many failures and determine the relative
frequency of failures caused by each test, so we can prioritize.
I started doing this, and very quickly found a runaway winner -
data-self-heal.t, which also happens to be the very first test we
run.
Sachin Pandit span...@redhat.com wrote:
In this test case we are writing content into the files (fd's).
But for some reason the data content is not being written
into the files, and because of that quota fails to account for the
size.
I did some tests and here are the results:
# exec
There has been a spate of regression test failures (due to broken tests
or race conditions showing up) in the recent past [1] and I am inclined
to block 3.7.0 GA along with acceptance of patches until we fix *all*
regression test failures. We seem to have reached a point where this
seems to
Hey folks,
Do we know when the ubuntu PPA will be up-to-date? I'll be doing a major
upgrade on my infrastructure and don't want to have to do it more than once.
Thanks,
Josh