I've changed the smoke.sh script to archive the logs on failures. The
archives will be saved into /d/logs/smoke and will be available for
download from [1].
I've also moved the location of the regression log archives from
/d/logs to /d/logs/regression. These will now be available for
download at
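For illustration, failure-time log archiving of this sort can be done with a small shell function; this is a hypothetical sketch, not the actual smoke.sh change (the directory names and function names here are assumptions):

```shell
#!/bin/bash
# Hypothetical sketch of archiving logs when a smoke run fails.
# ARCHIVE_DIR matches the path mentioned above; everything else is assumed.
set -u

ARCHIVE_DIR=${ARCHIVE_DIR:-/d/logs/smoke}

# Tar up a log directory into ARCHIVE_DIR under a timestamped name and
# print the resulting archive path.
archive_logs () {
    local log_dir=$1 stamp
    mkdir -p "$ARCHIVE_DIR"
    stamp=$(date +%Y%m%d-%H%M%S)
    tar -czf "$ARCHIVE_DIR/smoke-$stamp.tar.gz" \
        -C "$(dirname "$log_dir")" "$(basename "$log_dir")"
    echo "$ARCHIVE_DIR/smoke-$stamp.tar.gz"
}

# Demo against throwaway directories so the sketch runs anywhere.
demo_logs=$(mktemp -d)
echo "sample log line" > "$demo_logs/smoke.log"
ARCHIVE_DIR=$(mktemp -d)

if ! false; then    # stand-in for a failing smoke-test run
    archive_logs "$demo_logs"
fi
```

The key point is archiving only on the failure path, so successful runs don't accumulate tarballs.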
----- Original Message -----
From: Jeff Darcy jda...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, May 20, 2014 10:08:12 PM
Subject: Re: [Gluster-devel] Split-brain present and future in afr
1. Better protection for
Hi guys,
Do you reckon we should get that Mac Mini in the Westford
lab set up to automatically test Gluster builds each
night or something?
If so, we should probably take/claim ownership of it,
upgrade the memory in it, and (possibly) see if it can be
put in the DMZ.
Thoughts?
+ Justin
--
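For context, nightly automated builds like the one proposed are often just a cron job invoking the build/regression script; a purely hypothetical entry (both paths are assumptions):

```shell
# Hypothetical crontab entry: run the regression build every night at
# 02:00 and append its output to a log file.
0 2 * * * /opt/qa/build.sh >> /var/log/nightly-build.log 2>&1
```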
Constantly filtering requests to use either N or N+1 bricks is going to be
complicated and hard to debug. Every data-structure allocation or loop
based on replica count will have to be examined, and many will have to be
modified. That's a *lot* of places. This also overlaps
Hi Pranith,
You don't have an account on build.gluster.org yet do you?
It's where the current (not my stuff) regression tests are run.
This is the script presently used to build the regression
tests:
$ more /opt/qa/build.sh
#!/bin/bash
set -e
SRC=$(pwd);
rpm -qa | grep glusterfs |
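The quoted script is cut off mid-pipeline. As an illustration only, and emphatically not the real /opt/qa/build.sh, a regression build script of this shape typically removes any installed glusterfs packages and then rebuilds from the checkout; every command and flag below is an assumption:

```shell
#!/bin/bash
# Illustrative sketch only -- NOT the real /opt/qa/build.sh (the quoted
# script above is truncated). Shows the typical shape of such a script.
set -e

# Default to dry-run mode so the sketch is safe to execute as-is;
# set DRY_RUN=0 to actually run the commands.
DRY_RUN=${DRY_RUN:-1}
run () {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

SRC=$(pwd)

# Hypothetical completion of the truncated "rpm -qa | grep glusterfs |"
# pipeline: uninstall any previously installed glusterfs RPMs so the new
# build is tested in isolation.
for pkg in $(rpm -qa 2>/dev/null | grep glusterfs || true); do
    run rpm -e --nodeps "$pkg"
done

# Rebuild and install from the current source tree (flags assumed).
cd "$SRC"
run ./autogen.sh
run ./configure --enable-debug
run make
run make install
```

In dry-run mode the script only echoes each command prefixed with "+", which makes the intended sequence easy to inspect without modifying the machine.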
Turns out this change isn't working as I had thought it would. Vijay
helped me identify the problem and I've done another change.
Hopefully, it works now.
~kaushal
On Fri, May 23, 2014 at 2:35 PM, Kaushal M kshlms...@gmail.com wrote:
I've changed the smoke.sh script to archive the logs on
Up to you guys, it would be
On 23/05/2014, at 10:17 AM, Pranith Kumar Karampuri wrote:
<snip>
2) That would need more bricks, more processes, more ports.
Meh to more ports. We should be moving to a model (maybe in 4.x?)
where we use fewer ports. Preferably just one or two in total, if it's
feasible from a network layer.
One of the things holding up our data classification efforts (which include
tiering but also other stuff as well) has been the extension of the same
conceptual model from the I/O path to the configuration subsystem and
ultimately to the user experience. How does an administrator define a