[Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Pranith Kumar Karampuri
Hi, http://build.gluster.org/job/rackspace-regression-2GB-triggered/11757/consoleFull has the logs. Could you please look into it? Pranith

Re: [Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Joseph Fernandes
Yep, will have a look. - Original Message - From: Pranith Kumar Karampuri pkara...@redhat.com To: Joseph Fernandes josfe...@redhat.com, Gluster Devel gluster-devel@gluster.org Sent: Wednesday, July 1, 2015 1:44:44 PM Subject: spurious failures

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Deepak Shetty
On Wed, Jul 1, 2015 at 11:32 AM, Krishnan Parthasarathi kpart...@redhat.com wrote: Yeah, this followed by a glusterd restart should help. But frankly, I was hoping that 'rm'-ing the file isn't a neat way to fix this issue. Why is rm not a neat way? Is it because the container deployment tool

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Atin Mukherjee
On 07/01/2015 03:03 PM, Deepak Shetty wrote: On Wed, Jul 1, 2015 at 11:32 AM, Krishnan Parthasarathi kpart...@redhat.com wrote: Yeah, this followed by a glusterd restart should help. But frankly, I was hoping that 'rm'-ing the file isn't a neat way to fix this issue. Why is rm not a neat way?

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Deepak Shetty
On Wed, Jul 1, 2015 at 9:39 AM, Atin Mukherjee amukh...@redhat.com wrote: On 05/06/2015 12:31 PM, Humble Devassy Chirammal wrote: Hi All, Docker images of GlusterFS 3.6 for Fedora ( 21) and CentOS (7) are now available at docker hub ( https://registry.hub.docker.com/u/gluster/ ).

[Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Vijaikumar M
Hi, the new marker xlator uses the syncop framework to update quota-size in the background, with one synctask per write FOP. If there are 100 parallel writes, all on different inodes but under the same directory '/dir', there will be ~100 transactions waiting in queue to acquire a lock on its parent

Re: [Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Raghavendra Talur
On Jul 1, 2015 18:42, Raghavendra Talur raghavendra.ta...@gmail.com wrote: On Wed, Jul 1, 2015 at 3:18 PM, Joseph Fernandes josfe...@redhat.com wrote: Hi All, tests 4-5 are failing, i.e. the following:
TEST $CLI volume start $V0
TEST $CLI volume attach-tier $V0 replica 2

Re: [Gluster-devel] Progress on adding support for SEEK_DATA and SEEK_HOLE

2015-07-01 Thread Xavier Hernandez
On 07/01/2015 08:53 AM, Niels de Vos wrote: On Tue, Jun 30, 2015 at 11:48:20PM +0530, Ravishankar N wrote: On 06/22/2015 03:22 PM, Ravishankar N wrote: On 06/22/2015 01:41 PM, Miklos Szeredi wrote: On Sun, Jun 21, 2015 at 6:20 PM, Niels de Vos nde...@redhat.com wrote: Hi, it seems that

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Raghavendra Talur
On Wed, Jul 1, 2015 at 11:51 AM, Krishnan Parthasarathi kpart...@redhat.com wrote: We do have a way to tackle this situation from the code. Raghavendra Talur will be sending a patch shortly. We should fix it by undoing what daemon-refactoring did, that broke the lazy creation of uuid

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Humble Devassy Chirammal
Yeah, this followed by a glusterd restart should help. But frankly, I was hoping that 'rm'-ing the file isn't a neat way to fix this issue. Why is rm not a neat way? Is it because the container deployment tool needs to know about gluster internals? But isn't a Dockerfile dealing with details of the

[Gluster-devel] Problems when using different hostnames in a bricks and a peer

2015-07-01 Thread Rarylson Freitas
Hi, recently my company needed to change the hostnames used in our Gluster pool. At first, we had two Gluster nodes called storage1 and storage2. Our volumes used two bricks: storage1:/MYVOLUME and storage2:/MYVOLUME. We put the storage1 and storage2 IPs in the /etc/hosts file of our

Re: [Gluster-devel] Unable to send patches to review.gluster.org

2015-07-01 Thread Atin Mukherjee
+ Infra, can any one of you just take a look at it? On 07/02/2015 09:53 AM, Anuradha Talur wrote: Hi, I'm unable to send patches to r.g.o, also not able to login. I'm getting the following errors respectively: 1) Permission denied (publickey). fatal: Could not read from remote repository.

Re: [Gluster-devel] Unable to send patches to review.gluster.org

2015-07-01 Thread Venky Shankar
Me too. Earlier (past week or so) this error used to last for about 15-20 minutes, but today seems to be its day. Venky On Thu, Jul 2, 2015 at 9:53 AM, Anuradha Talur ata...@redhat.com wrote: Hi, I'm unable to send patches to r.g.o, also not able to log in. I'm getting the following

Re: [Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Krishnan Parthasarathi
Yes, we could take the synctask size as an argument to synctask_create. The increase in synctask threads is not really a problem; it can't grow beyond 16 (SYNCENV_PROC_MAX). - Original Message - On 07/02/2015 10:40 AM, Krishnan Parthasarathi wrote: - Original Message -

[Gluster-devel] Unable to send patches to review.gluster.org

2015-07-01 Thread Anuradha Talur
Hi, I'm unable to send patches to r.g.o and also not able to log in. I'm getting the following errors, respectively: 1) Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. 2) Internal server error

Re: [Gluster-devel] Lock migration as a part of rebalance

2015-07-01 Thread Raghavendra G
One solution I can think of is to spread the responsibility for the lock migration process between both the client and the rebalance process. A rough algorithm is outlined below: 1. We should have a static identifier for the client process (something like a process-uuid of the mount process - let's call it client-uuid) in

Re: [Gluster-devel] Unable to send patches to review.gluster.org

2015-07-01 Thread Pranith Kumar Karampuri
I get the following error: error: unpack failed: error No space left on device fatal: Unpack error, check server log Pranith On 07/02/2015 09:58 AM, Atin Mukherjee wrote: + Infra, can any one of you just take a look at it? On 07/02/2015 09:53 AM, Anuradha Talur wrote: Hi, I'm unable to send

[Gluster-devel] Progress on adding support for SEEK_DATA and SEEK_HOLE

2015-07-01 Thread Niels de Vos
On Tue, Jun 30, 2015 at 11:48:20PM +0530, Ravishankar N wrote: On 06/22/2015 03:22 PM, Ravishankar N wrote: On 06/22/2015 01:41 PM, Miklos Szeredi wrote: On Sun, Jun 21, 2015 at 6:20 PM, Niels de Vos nde...@redhat.com wrote: Hi, it seems that there could be a reasonable benefit for

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Krishnan Parthasarathi
We do have a way to tackle this situation from the code. Raghavendra Talur will be sending a patch shortly. We should fix it by undoing what daemon-refactoring did, that broke the lazy creation of uuid for a node. Fixing it elsewhere is just masking the real cause. Meanwhile 'rm' is the stop

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Anand Nekkunti
On 07/01/2015 11:51 AM, Krishnan Parthasarathi wrote: We do have a way to tackle this situation from the code. Raghavendra Talur will be sending a patch shortly. We should fix it by undoing what daemon-refactoring did, that broke the lazy creation of uuid for a node. Fixing it elsewhere is