Hi,
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11757/consoleFull
has the logs. Could you please look into it?
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
Yep, will have a look.
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Joseph Fernandes josfe...@redhat.com, Gluster Devel
gluster-devel@gluster.org
Sent: Wednesday, July 1, 2015 1:44:44 PM
Subject: spurious failures
On Wed, Jul 1, 2015 at 11:32 AM, Krishnan Parthasarathi kpart...@redhat.com
wrote:
Yeah, this followed by a glusterd restart should help.
But frankly, I was hoping that 'rm'-ing the file isn't a neat way to fix this issue.
Why is rm not a neat way? Is it because the container deployment tool
On 07/01/2015 03:03 PM, Deepak Shetty wrote:
On Wed, Jul 1, 2015 at 11:32 AM, Krishnan Parthasarathi kpart...@redhat.com
wrote:
Yeah, this followed by a glusterd restart should help.
But frankly, I was hoping that 'rm'-ing the file isn't a neat way to fix this issue.
Why is rm not a neat way?
On Wed, Jul 1, 2015 at 9:39 AM, Atin Mukherjee amukh...@redhat.com wrote:
On 05/06/2015 12:31 PM, Humble Devassy Chirammal wrote:
Hi All,
Docker images of GlusterFS 3.6 for Fedora (21) and CentOS (7) are now
available at docker hub ( https://registry.hub.docker.com/u/gluster/ ).
Hi,
The new marker xlator uses the syncop framework to update the quota-size in the background; it uses one synctask per write FOP.
If there are 100 parallel writes, all to different inodes but under the same directory '/dir', there will be ~100 transactions waiting in the queue to acquire a lock on its parent
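To make the contention concrete, here is a minimal pthread-based sketch (not the marker xlator's actual synctask code; all names here are illustrative): 100 writer threads, each standing in for one write FOP's background quota update, all serialize behind a single lock that plays the role of the lock on '/dir'.

/*
 * Illustrative only: N parallel "writes" that must each take the same
 * per-parent-directory lock before updating an aggregated quota size,
 * so the updates queue up behind that one lock.
 */
#include <pthread.h>
#include <stdio.h>

#define NWRITERS 100

static pthread_mutex_t parent_dir_lock = PTHREAD_MUTEX_INITIALIZER; /* "lock on /dir" */
static long long quota_size;            /* aggregated size accounted under /dir */

static void *write_fop(void *arg)
{
    long long bytes = (long long)(size_t)arg;

    /* Every writer queues up here, no matter which inode it wrote to. */
    pthread_mutex_lock(&parent_dir_lock);
    quota_size += bytes;                /* the background quota-size update */
    pthread_mutex_unlock(&parent_dir_lock);
    return NULL;
}

int main(void)
{
    pthread_t tids[NWRITERS];

    for (int i = 0; i < NWRITERS; i++)
        pthread_create(&tids[i], NULL, write_fop, (void *)(size_t)4096);
    for (int i = 0; i < NWRITERS; i++)
        pthread_join(tids[i], NULL);

    printf("quota-size of /dir: %lld bytes\n", quota_size);
    return 0;
}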
On Jul 1, 2015 18:42, Raghavendra Talur raghavendra.ta...@gmail.com
wrote:
On Wed, Jul 1, 2015 at 3:18 PM, Joseph Fernandes josfe...@redhat.com
wrote:
Hi All,
Tests 4-5 are failing, i.e. the following:
TEST $CLI volume start $V0
TEST $CLI volume attach-tier $V0 replica 2
On 07/01/2015 08:53 AM, Niels de Vos wrote:
On Tue, Jun 30, 2015 at 11:48:20PM +0530, Ravishankar N wrote:
On 06/22/2015 03:22 PM, Ravishankar N wrote:
On 06/22/2015 01:41 PM, Miklos Szeredi wrote:
On Sun, Jun 21, 2015 at 6:20 PM, Niels de Vos nde...@redhat.com wrote:
Hi,
it seems that
On Wed, Jul 1, 2015 at 11:51 AM, Krishnan Parthasarathi kpart...@redhat.com
wrote:
We do have a way to tackle this situation from the code. Raghavendra
Talur will be sending a patch shortly.
We should fix it by undoing what the daemon refactoring did, which broke the lazy creation of uuid
Yeah, this followed by a glusterd restart should help.
But frankly, I was hoping that 'rm'-ing the file isn't a neat way to fix this issue.
Why is rm not a neat way? Is it because the container deployment tool
needs to
know about gluster internals? But isn't a Dockerfile dealing with details
of the
Hi,
Recently, my company needed to change the hostnames used in our Gluster pool.
Initially, we had two Gluster nodes called storage1 and storage2.
Our volumes used two bricks: storage1:/MYVOLUME and storage2:/MYVOLUME. We
put the storage1 and storage2 IPs in the /etc/hosts file of our
+ Infra, can any one of you just take a look at it?
On 07/02/2015 09:53 AM, Anuradha Talur wrote:
Hi,
I'm unable to send patches to r.g.o and am also not able to log in.
I'm getting the following errors respectively:
1)
Permission denied (publickey).
fatal: Could not read from remote repository.
Me too. Earlier (over the past week or so) this error used to last for about
15-20 minutes, but today seems to be its day.
Venky
On Thu, Jul 2, 2015 at 9:53 AM, Anuradha Talur ata...@redhat.com wrote:
Hi,
I'm unable to send patches to r.g.o and am also not able to log in.
I'm getting the following
Yes, we could take the synctask size as an argument to synctask_create.
The increase in synctask threads is not really a problem; they can't grow
beyond 16 (SYNCENV_PROC_MAX).
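For illustration only, here is a sketch of the general idea (letting the caller choose the stack size when a task is created), written with plain pthreads rather than the syncop framework; task_create_sized is a hypothetical helper, not an existing GlusterFS API.

/*
 * Illustrative only: create a task with a caller-chosen stack size
 * instead of one hard-coded default.
 */
#include <limits.h>
#include <pthread.h>

static void *task_fn(void *arg)
{
    (void)arg;
    /* ... task body ... */
    return NULL;
}

/* Hypothetical helper: like task creation, but with an explicit stack size. */
static int task_create_sized(pthread_t *tid, size_t stacksize)
{
    pthread_attr_t attr;
    int ret;

    pthread_attr_init(&attr);
    if (stacksize < PTHREAD_STACK_MIN)      /* respect the platform minimum */
        stacksize = PTHREAD_STACK_MIN;
    pthread_attr_setstacksize(&attr, stacksize);
    ret = pthread_create(tid, &attr, task_fn, NULL);
    pthread_attr_destroy(&attr);
    return ret;
}

int main(void)
{
    pthread_t tid;

    /* A small 256 KiB stack for a lightweight task, instead of a large default. */
    if (task_create_sized(&tid, 256 * 1024) == 0)
        pthread_join(tid, NULL);
    return 0;
}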
- Original Message -
On 07/02/2015 10:40 AM, Krishnan Parthasarathi wrote:
- Original Message -
Hi,
I'm unable to send patches to r.g.o and am also not able to log in.
I'm getting the following errors respectively:
1)
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
2) Internal server error
One solution I can think of is to have the responsibility for the lock migration
process spread between both the client and the rebalance process. A rough
algorithm is outlined below:
1. We should have a static identifier for the client process (something like the
process-uuid of the mount process; let's call it client-uuid) in
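As a rough illustration of step 1 only, the sketch below uses hypothetical names (struct lock_request, client_uuid) to show a stable client-uuid generated once by the mount process and attached to every lock request, so that migrated locks can still be associated with the client that owns them; it is not a proposed wire format.

/* Illustrative only: a stable per-mount identifier carried with each lock request. */
#include <stdio.h>
#include <uuid/uuid.h>         /* libuuid: uuid_generate(), uuid_unparse() */

/* Hypothetical lock-request structure. */
struct lock_request {
    char client_uuid[37];      /* stable for the lifetime of this mount process */
    char path[4096];
    int  lk_type;              /* e.g. read or write lock */
};

static char client_uuid[37];   /* generated once when the mount process starts */

static void client_init(void)
{
    uuid_t uu;

    uuid_generate(uu);
    uuid_unparse(uu, client_uuid);
}

static void send_lock_request(const char *path, int lk_type)
{
    struct lock_request req = { .lk_type = lk_type };

    snprintf(req.client_uuid, sizeof(req.client_uuid), "%s", client_uuid);
    snprintf(req.path, sizeof(req.path), "%s", path);
    /* ... serialize req and send it to the server / rebalance process ... */
    printf("lock request on %s from client %s\n", req.path, req.client_uuid);
}

int main(void)
{
    client_init();
    send_lock_request("/dir/file", 1);
    return 0;
}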
I get the following error:
error: unpack failed: error No space left on device
fatal: Unpack error, check server log
Pranith
On 07/02/2015 09:58 AM, Atin Mukherjee wrote:
+ Infra, can any one of you just take a look at it?
On 07/02/2015 09:53 AM, Anuradha Talur wrote:
Hi,
I'm unable to send
On Tue, Jun 30, 2015 at 11:48:20PM +0530, Ravishankar N wrote:
On 06/22/2015 03:22 PM, Ravishankar N wrote:
On 06/22/2015 01:41 PM, Miklos Szeredi wrote:
On Sun, Jun 21, 2015 at 6:20 PM, Niels de Vos nde...@redhat.com wrote:
Hi,
it seems that there could be a reasonable benefit for
We do have a way to tackle this situation from the code. Raghavendra
Talur will be sending a patch shortly.
We should fix it by undoing what the daemon refactoring did, which broke the lazy
creation of uuid for a node. Fixing it elsewhere is just masking the real cause.
Meanwhile 'rm' is the stop
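For context, here is a minimal sketch of lazy identifier creation in general (the path and file format are illustrative, not glusterd's actual code): the uuid is generated and persisted only the first time it is needed, which is also why removing the file and restarting the daemon results in a fresh one being generated.

/* Illustrative only: lazily create and persist a node uuid. */
#include <stdio.h>
#include <string.h>
#include <uuid/uuid.h>                 /* libuuid */

#define NODE_UUID_FILE "./node-uuid"   /* illustrative location */

/* Fill 'out' (at least 37 bytes) with the node uuid, generating it on first use. */
static int get_node_uuid(char *out, size_t len)
{
    FILE *fp = fopen(NODE_UUID_FILE, "r");

    if (fp) {                           /* already generated on an earlier run */
        if (fgets(out, (int)len, fp)) {
            out[strcspn(out, "\n")] = '\0';
            fclose(fp);
            return 0;
        }
        fclose(fp);
    }

    uuid_t uu;                          /* first use: generate and persist it */
    uuid_generate(uu);
    uuid_unparse(uu, out);

    fp = fopen(NODE_UUID_FILE, "w");
    if (!fp)
        return -1;
    fprintf(fp, "%s\n", out);
    fclose(fp);
    return 0;
}

int main(void)
{
    char uu[37];

    if (get_node_uuid(uu, sizeof(uu)) == 0)
        printf("node uuid: %s\n", uu);
    return 0;
}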
On 07/01/2015 11:51 AM, Krishnan Parthasarathi wrote:
We do have a way to tackle this situation from the code. Raghavendra
Talur will be sending a patch shortly.
We should fix it by undoing what the daemon refactoring did, which broke the lazy
creation of uuid for a node. Fixing it elsewhere is