On 05/09/2015 11:42 AM, Atin Mukherjee wrote:
> Gaurav,
>
> Can you quickly check [1]
>
> [1]
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/8881/consoleFull
http://review.gluster.org/10702 should fix all of these spurious
failures coming from bitrot.
Rafi,
You would need
Gaurav,
Can you quickly check [1]
[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8881/consoleFull
--
~Atin
> Ah! Now I understand the confusion. I never said the maintainer should
> fix all the bugs in tests. I am only saying that they maintain tests,
> just as we maintain code. Whether you personally work on it or not, you
> at least have an idea of what the problem is and what the solution is,
> so some
Hi Pranith,
I think you pasted the wrong patch link.
Could you share the correct patch link?
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Kotresh Hiremath Ravishankar" , "Aravinda
> Vishwanathapura Krishna Murthy"
>
> Cc: "Gluster Deve
> Hmm... I am not sure; most of the fixes I saw in the last week were bugs
> in tests or .rc files. The failures in afr and ec were problems that
> existed even in 3.6. They are showing up more now, probably because 3.7
> is a bit more parallel.
If we had merged features ahead of time and spaced them
Hi all,
I installed Gluster on one node, disabled Gluster NFS, and exported the
volume through the kernel NFS server; both run on the same Linux system.
Another Linux client mounts the NFS path, and when running dd against the
NFS mount point, the I/O sometimes returns an error.
Does anyone know the reason, or whether there is a corresponding kernel
On 05/09/2015 01:25 AM, Pranith Kumar Karampuri wrote:
>
> On 05/08/2015 09:14 AM, Krishnan Parthasarathi wrote:
>>
>> - Original Message -
>>> hi,
>>> I think we fixed quite a few heavy hitters in the past week, and a
>>> reasonable number of regression runs are passing, which is a
On 05/08/2015 11:37 PM, Pranith Kumar Karampuri wrote:
On 05/08/2015 04:45 PM, Ravishankar N wrote:
On 05/08/2015 08:45 AM, Pranith Kumar Karampuri wrote:
Do you guys have any ideas on keeping the regression failures under
control?
I sent a patch to append the commands being run in the
Hi,
The deadline for patch merging for glusterfs-3.7.0 has been set for
Saturday 9 May, 16:00 UTC. To track the progress of 3.7.0, I have been
working to get the associated bugs into their correct status.
The status of most of the bugs that were added to the glusterfs-3.7.0
tracker [1] s
On 05/09/2015 03:26 AM, Pranith Kumar Karampuri wrote:
hi Kotresh/Aravinda,
Do you guys know anything about the following core, which comes
from a changelog xlator init failure? It just failed a regression on
one of my patches: http://review.gluster.org/#/c/10688
Sorry, wrong URL; this is the
hi Kotresh/Aravinda,
Do you guys know anything about the following core, which comes from
a changelog xlator init failure? It just failed a regression on one of my
patches: http://review.gluster.org/#/c/10688
24 [2015-05-08 21:34:47.750460] E [xlator.c:426:xlator_init]
0-patchy-changelog:
On 05/09/2015 02:31 AM, Jeff Darcy wrote:
What is so special about 'test' code?
A broken test blocks everybody's progress in a way that an incomplete
feature does not.
It is still code; if maintainers
are maintaining feature code and are held responsible for it, why not
test code? It is not that maintai
> What is so special about 'test' code?
A broken test blocks everybody's progress in a way that an incomplete
feature does not.
> It is still code; if maintainers
> are maintaining feature code and are held responsible for it, why not
> test code? It is not that the maintainer is the only one who fixes all the
On 05/08/2015 09:14 AM, Krishnan Parthasarathi wrote:
- Original Message -
hi,
I think we fixed quite a few heavy hitters in the past week, and a
reasonable number of regression runs are passing, which is a good sign.
Most of the new heavy hitters in regression failures seem to be
On 05/09/2015 12:33 AM, Jeff Darcy wrote:
Say I submit a patch for a new component, or change the log level of one
of the logs for which there is not a single caller after you moved it
from INFO to DEBUG. The code is not going to be executed at all, yet the
regressions will fail. I am 100% sure it has no
Hi All,
Thanks to all our efforts, we are inching closer towards 3.7.0. Here is
the updated release schedule for 3.7.0:
1. Patch merge deadline and beta2 tagging - 1600 UTC 05/09
2. Final test window starts at 1600 UTC 05/09
3. If no significant problems are observed in 2., I will go ahead w
> Say I submit a patch for a new component, or change the log level of one
> of the logs for which there is not a single caller after you moved it
> from INFO to DEBUG. The code is not going to be executed at all, yet the
> regressions will fail. I am 100% sure it has nothing to do with my
> patch. I neithe
I agree on the experience and have no questions/comments on that.
The deal, though, is that we at least had people (including myself)
ignoring spurious failures and re-triggering jobs to get that +1 V and
move on. That causes issues, as the failures could at least have been
flagged for others to be
On 05/08/2015 01:27 PM, Pranith Kumar Karampuri wrote:
On 05/08/2015 06:45 PM, Shyam wrote:
On 05/08/2015 08:16 AM, Jeff Darcy wrote:
Here are some of the things that I can think of: 0) Maintainers
should also maintain tests that are in their component.
It is not possible for me as glusterd
On 05/08/2015 04:45 PM, Ravishankar N wrote:
On 05/08/2015 08:45 AM, Pranith Kumar Karampuri wrote:
Do you guys have any ideas on keeping the regression failures under
control?
I sent a patch to append the commands being run in the .t files to
gluster logs @ http://review.gluster.org/#/c/
On 05/08/2015 05:27 PM, Jeff Darcy wrote:
I think we should remove the "if it is a known bad test, treat it as
success" code at some point and never add it again in the future.
I disagree. We were in a cycle where a fix for one bad regression test
would be blocked because of others, so it was impossible
On 05/08/2015 11:15 PM, Justin Clift wrote:
On 8 May 2015, at 18:37, Pranith Kumar Karampuri wrote:
On 05/08/2015 10:53 PM, Justin Clift wrote:
Seems like a new one, so it's been added to the Etherpad.
http://build.gluster.org/job/regression-test-burn-in/23/console
This looks a lot simil
On 8 May 2015, at 18:41, Pranith Kumar Karampuri wrote:
>> Break the regression tests into parts that can be run in parallel.
>>
>> So, instead of the regression testing for a particular CR going from the
>> first test to the last in a serial sequence, we break it up into a number
>> of chunks (
On 8 May 2015, at 18:37, Pranith Kumar Karampuri wrote:
> On 05/08/2015 10:53 PM, Justin Clift wrote:
>> Seems like a new one, so it's been added to the Etherpad.
>>
>> http://build.gluster.org/job/regression-test-burn-in/23/console
> This looks a lot like the data-self-heal.t test, where
On 05/08/2015 08:54 PM, Justin Clift wrote:
On 8 May 2015, at 10:02, Mohammed Rafi K C wrote:
Hi All,
As we all know, our regression tests are killing us. On average, one
regression run takes approximately two and a half hours to complete. So
I guess this is the right time to think abou
On 05/08/2015 10:53 PM, Justin Clift wrote:
Seems like a new one, so it's been added to the Etherpad.
http://build.gluster.org/job/regression-test-burn-in/23/console
This looks a lot like the data-self-heal.t test, where healing
fails to happen because both threads end up not getti
On 05/08/2015 06:45 PM, Shyam wrote:
On 05/08/2015 08:16 AM, Jeff Darcy wrote:
Here are some of the things that I can think of: 0) Maintainers
should also maintain tests that are in their component.
It is not possible for me as glusterd co-maintainer to 'maintain'
tests that are added under t
Seems like a new one, so it's been added to the Etherpad.
http://build.gluster.org/job/regression-test-burn-in/23/console
It's on a new slave VM (slave1), which has been disconnected in
Jenkins so it can be investigated. It's using our standard
Jenkins auth.
+ Justin
--
GlusterFS - http://ww
On 8 May 2015, at 17:37, Atin Mukherjee wrote:
On 05/08/2015 08:54 PM, Justin Clift wrote:
>> On 8 May 2015, at 10:02, Mohammed Rafi K C wrote:
>>> Hi All,
>>>
>>> As we all know, our regression tests are killing us. On average, one
>>> regression run takes approximately two and a half hours to co
On 05/08/2015 08:54 PM, Justin Clift wrote:
> On 8 May 2015, at 10:02, Mohammed Rafi K C wrote:
>> Hi All,
>>
>> As we all know, our regression tests are killing us. On average, one
>> regression run takes approximately two and a half hours to complete.
>> So I guess this is the right time
On 05/08/2015 09:14 PM, Aravinda wrote:
Fix is in. Master and release-3.7 branch is fine now.
Reason:
Development of patches 10620 and 9572 started in parallel. Both
regressions completed successfully, but they had conflicting changes.
After one patch was merged, regression was not run again or was missed in ma
On 05/08/2015 08:34 PM, Justin Clift wrote:
On 8 May 2015, at 13:16, Jeff Darcy wrote:
Perhaps the change that's needed
is to make the fixing of likely-spurious test failures a higher
priority than adding new features.
YES! A million times Yes.
We need to move this project to operating wit
Fix is in. Master and release-3.7 branch is fine now.
Reason:
Development of patches 10620 and 9572 started in parallel. Both
regressions completed successfully, but they had conflicting changes.
After one patch was merged, regression was not run again, or the conflict
was missed in the manual rebase.
Patches:
http://review.g
Thanks Paul. That's for an ancient series of GlusterFS (3.4.x) that we're
not really looking to release further updates for.
If that's the version you guys are running in your production
environment, have you looked into moving to a newer release series?
+ Justin
On 8 May 2015, at 10:55, Paul Guo
On 8 May 2015, at 10:02, Mohammed Rafi K C wrote:
> Hi All,
>
> As we all know, our regression tests are killing us. On average, one
> regression run takes approximately two and a half hours to complete. So
> I guess this is the right time to think about enhancing our
> regression.
>
> Pro
On 8 May 2015, at 16:19, Jeff Darcy wrote:
>> Proposal 2:
>>
>> Use IP addresses instead of host names, because resolving a host name
>> takes a good amount of time and sometimes even causes spurious
>> failures.
>
> If resolution is taking a long time, that's probably fixable in the
>
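If resolution time is the suspect, it is easy to measure directly. Here is
a minimal standalone sketch (not from this thread; the host name is just a
command-line argument) that times getaddrinfo(3):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int
main (int argc, char **argv)
{
        const char      *host = (argc > 1) ? argv[1] : "localhost";
        struct addrinfo  hints = { 0, };
        struct addrinfo *res = NULL;
        struct timespec  t0, t1;
        double           ms;
        int              ret;

        hints.ai_family   = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        clock_gettime (CLOCK_MONOTONIC, &t0);
        ret = getaddrinfo (host, NULL, &hints, &res);
        clock_gettime (CLOCK_MONOTONIC, &t1);

        ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
             (t1.tv_nsec - t0.tv_nsec) / 1e6;

        if (ret != 0) {
                fprintf (stderr, "getaddrinfo(%s): %s\n", host,
                         gai_strerror (ret));
                return 1;
        }
        printf ("resolved %s in %.3f ms\n", host, ms);
        freeaddrinfo (res);
        return 0;
}

Running this against the slave VMs' host names and against their raw IP
addresses would show whether the lookup is actually the slow step.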
> Proposal 1:
>
> Create a new option for the daemons to specify that they are running in
> test mode; then we can skip the fsync calls used for data durability.
Alternatively, run tests with the relevant directories (bricks and
/var/lib stuff) in ramdisks. No code change needed, but some tests
might n
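To make Proposal 1 concrete: below is a minimal sketch, not Gluster's
actual code, of a wrapper that turns fsync(2) into a no-op when a
test-mode flag is set. GF_TEST_MODE is an invented environment variable;
a real patch would presumably use a daemon option instead.

#include <stdlib.h>
#include <unistd.h>

static int test_mode = -1;      /* -1 = not determined yet */

static int
durable_fsync (int fd)
{
        if (test_mode == -1)
                test_mode = (getenv ("GF_TEST_MODE") != NULL);

        if (test_mode)
                return 0;       /* skip durability work during tests */

        return fsync (fd);
}

Call sites would then use durable_fsync() wherever they call fsync()
today, trading durability for speed in regression runs only.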
Update:
Aravinda is putting together a fix and we will address it that way
rather than reverting.
Regards,
On 05/08/2015 10:52 AM, Shyam wrote:
Due to compile failure @
changelog-helpers.c:374:9: error: implicit declaration of function
'CHANGELOG_GET_ENCODING' [-Werror=implicit-function-dec
On 8 May 2015, at 04:15, Pranith Kumar Karampuri wrote:
> 2) If the same test fails on different patches more than 'x' times, we
> should do something drastic. Let us decide on 'x' and what the drastic
> measure is.
Sure. That number is 0.
If it fails more than 0 times on different
On 8 May 2015, at 13:16, Jeff Darcy wrote:
> Perhaps the change that's needed
> is to make the fixing of likely-spurious test failures a higher
> priority than adding new features.
YES! A million times Yes.
We need to move this project to operating with _0 regression
failures_ as the normal st
On 8 May 2015, at 15:52, Shyam wrote:
> Shyam
> P.S: Sending this to the devel list for those not looking at the IRC ATM
Excellent, thanks. :)
+ Justin
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients
Due to compile failure @
changelog-helpers.c:374:9: error: implicit declaration of function
'CHANGELOG_GET_ENCODING' [-Werror=implicit-function-declaration]
CHANGELOG_GET_ENCODING (fd, buffer, 1024, encoding, elen)
Introduced by:
http://review.gluster.org/#/c/9572/ (master)
http://revi
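The error itself is ordinary C: with -Werror=implicit-function-declaration,
any name used like a function call without a visible declaration is fatal,
and since CHANGELOG_GET_ENCODING is a macro, the build only succeeds if its
definition is in scope at the point of use; after the conflicting merge it
no longer was. A hypothetical illustration of the shape of the fix (the
header name and macro body below are invented, not the real changelog
sources):

/* changelog-encoding.h -- invented header name, dummy macro body */
#ifndef _CHANGELOG_ENCODING_H
#define _CHANGELOG_ENCODING_H

#define CHANGELOG_GET_ENCODING(fd, buffer, len, encoding, elen)        \
        do { (void) (fd); (void) (buffer); } while (0)

#endif

/* changelog-helpers.c then needs this definition visible before the
 * call at line 374: */
#include "changelog-encoding.h"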
On 05/08/2015 08:16 AM, Jeff Darcy wrote:
Here are some of the things that I can think of: 0) Maintainers
should also maintain tests that are in their component.
It is not possible for me as glusterd co-maintainer to 'maintain'
tests that are added under tests/bugs/glusterd. Most of them don't
> The deluge of regression failures is a direct consequence of last-minute
> merges during the (extended) feature freeze. We did well to contain this.
> Great stuff!
> If we want to avoid this, we should not accept (large) feature merges just
> before feature freeze.
I would add that we shouldn't accep
On Thursday 07 May 2015 10:50 AM, Sachin Pandit wrote:
- Original Message -
From: "Vijay Bellur"
To: "Pranith Kumar Karampuri" , "Gluster Devel"
, "Rafi Kavungal
Chundattu Parambil" , "Aravinda" , "Sachin
Pandit" ,
"Raghavendra Bhat" , "Kotresh Hiremath Ravishankar"
Sent: Wednesday
> I think we should remove the "if it is a known bad test, treat it as
> success" code at some point and never add it again in the future.
I disagree. We were in a cycle where a fix for one bad regression test
would be blocked because of others, so it was impossible to make any
progress at all. The cycle
On 05/08/2015 04:28 AM, Niels de Vos wrote:
On Thu, May 07, 2015 at 02:15:20PM -0400, Jeff Darcy wrote:
Last week, those of us who were together in Bangalore had a meeting to
discuss the GlusterFS 4.0 plan. Once we'd covered what's already in
the plan[1] we had a very productive brainstorming s
On 05/08/2015 08:45 AM, Pranith Kumar Karampuri wrote:
Do you guys have any ideas on keeping the regression failures under
control?
I sent a patch to append the commands being run in the .t files to
gluster logs @ http://review.gluster.org/#/c/10667/
While it certainly doesn't help check re
On 05/08/2015 03:47 PM, Atin Mukherjee wrote:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8782/consoleFull
Failed test case : tests/bugs/replicate/bug-976800.t
I've added it in the etherpad as well.
Thanks Atin! I see that the test doesn't disable flush-behind, which can
l
Fixed a similar crash in dht_getxattr_cbk here:
http://review.gluster.org/#/c/10467/
Susant
- Original Message -
From: "Paul Guo"
To: gluster-devel@gluster.org
Sent: Friday, 8 May, 2015 3:25:01 PM
Subject: [Gluster-devel] gluster crashes in dht_getxattr_cbk() due to null
pointer d
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8782/consoleFull
Failed test case : tests/bugs/replicate/bug-976800.t
I've added it in the etherpad as well.
--
~Atin
Hi,
gdb debugging shows the root cause seems to be quite straightforward. The
gluster version is 3.4.5 and the stack:
#0  0x7eff735fe354 in dht_getxattr_cbk (frame=0x7eff775b6360,
    cookie=<optimized out>, this=<optimized out>, op_ret=<optimized out>,
    op_errno=0, xattr=<optimized out>, xdata=0x0) at dht-common.c:2043
2043
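Pending the real fix (Susant points at http://review.gluster.org/#/c/10467/
for a similar crash), the general defensive pattern looks like the sketch
below. It assumes the in-tree Gluster xlator headers and is illustrative
only, not the actual patch: in a getxattr callback the xattr dict can
legitimately be NULL on error paths, so it must be NULL-checked before use.

#include "xlator.h"

int32_t
example_getxattr_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                      int32_t op_ret, int32_t op_errno, dict_t *xattr,
                      dict_t *xdata)
{
        if (op_ret >= 0 && xattr != NULL) {
                /* only here is it safe to look inside the dict */
        }

        /* unwind with whatever we have; never dereference xattr blindly */
        STACK_UNWIND_STRICT (getxattr, frame, op_ret, op_errno, xattr,
                             xdata);
        return 0;
}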
There are no sleeps between polls in the event poll thread, ref:
event_dispatch_poll(...).
I am not sure if we are referring to the same 'poll'. I haven't gotten a chance
to look into
this. I will try adding logs when I get back to this.
~kp
- Original Message -
> Krishnan Parthasarathi
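For context on why no sleep is needed there: poll(2) called with a -1
timeout blocks in the kernel until a descriptor becomes ready, so a
dispatch loop built on it never busy-waits between events. A minimal
generic sketch, not event_dispatch_poll() itself:

#include <poll.h>
#include <stdio.h>

void
dispatch_loop (struct pollfd *fds, nfds_t nfds)
{
        nfds_t i;
        int    ready;

        for (;;) {
                ready = poll (fds, nfds, -1);   /* sleeps until activity */
                if (ready < 0) {
                        perror ("poll");
                        break;
                }
                for (i = 0; i < nfds; i++) {
                        if (fds[i].revents) {
                                /* handle the event on fds[i].fd here */
                        }
                }
        }
}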
Hi All,
As we all know, our regression tests are killing us. On average, one
regression run takes approximately two and a half hours to complete. So I
guess this is the right time to think about enhancing our regression.
Proposal 1:
Create a new option for the daemons to specify that it is
On 05/07/2015 01:56 PM, Pranith Kumar Karampuri wrote:
Sorry, wrong test. The correct test is: tests/bugs/quota/bug-1035576.t
(http://build.gluster.org/job/rackspace-regression-2GB-triggered/8329/consoleFull)
Tests 20 and 21 failed because /mnt/glusterfs/0/a/f creation failed as
seen from the log
On Thu, May 07, 2015 at 02:15:20PM -0400, Jeff Darcy wrote:
> Last week, those of us who were together in Bangalore had a meeting to
> discuss the GlusterFS 4.0 plan. Once we'd covered what's already in
> the plan[1] we had a very productive brainstorming session on what else
> we might want to co
On Thu, May 07, 2015 at 02:44:08PM -0400, Jeff Darcy wrote:
>
> > I believe the right way to express this is: retire the Gluster NFS
> > (gnfs) server. (Ganesha does NFSv3, and will continue to do NFSv3, as
> > well as 4, 4.1, 4.2, and pNFS.)
>
> Personally I'd like to go further and say that an
On Thu, May 07, 2015 at 12:04:17PM -0700, Joe Julian wrote:
> On 05/07/2015 11:15 AM, Jeff Darcy wrote:
> >Last week, those of us who were together in Bangalore had a meeting to
> >discuss the GlusterFS 4.0 plan. Once we'd covered what's already in
> >the plan[1] we had a very productive brainstor
Atin Mukherjee wrote:
> Please look at glusterd_stop_volume () in
> xlators/mgmt/glusterd/src/glusterd-volume-ops.c
I see someone thought it was a good idea to duplicate
GLUSTERFS_GET_AUX_MOUNT_PIDFILE in cli/src/cli.h and
xlators/mgmt/glusterd/src/glusterd.h...
In this problem, the glusterfs