Hi,
I have created a distributed volume on my gluster node and attached this
volume to a VM on OpenStack. The size is 1 GB. I have written close to 1 GB
of files onto the volume. But when I do an ls inside the brick directory, the
volume is present only on one gluster server brick. But it
What do gluster vol info and gluster vol status give you?
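For reference, the full commands, run on any server in the cluster (the volume name dst is taken from the output quoted further down; the optional "detail" form also shows per-brick disk usage):

    gluster volume info dst
    gluster volume status dst
    gluster volume status dst detail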
On Tue, 2014-06-03 at 07:21 +0000, yalla.gnan.ku...@accenture.com
wrote:
Hi,
I have created a distributed volume on my gluster node and attached
this volume to a VM on OpenStack. The size is 1 GB. I have written files
close
root@secondary:/export/sdd1/brick# gluster volume info
Volume Name: dst
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: primary:/export/sdd1/brick
Brick2: secondary:/export/sdd1/brick
-----Original Message-----
From: Franco Broi
I agree as well. We shouldn't be deleting any data without the
explicit consent of the user.
The approach proposed by MS is better than the earlier approach.
~kaushal
On Tue, Jun 3, 2014 at 1:02 AM, M S Vishwanath Bhat msvb...@gmail.com wrote:
On 2 June 2014 20:22, Vijay Bellur
Ok, what you have is a single large file (it must be a filesystem image?).
Gluster will not stripe files; it writes different whole files to
different bricks.
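One way to see the hash-based placement from the servers is to read the DHT layout xattr that gluster stores on each brick's copy of a directory; a rough sketch, assuming the /export/sdd1/brick paths from the volume info in this thread:

    # run as root on each server; prints the hash range this brick owns
    getfattr -n trusted.glusterfs.dht -e hex /export/sdd1/brick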
On Tue, 2014-06-03 at 07:29 +0000, yalla.gnan.ku...@accenture.com
wrote:
root@secondary:/export/sdd1/brick# gluster volume info
Volume
I have created a distributed volume, created a 1 GB volume on it, attached
it to the VM, and created a filesystem on it. How can I verify that the files
in the VM are
distributed across both bricks on the two servers?
-----Original Message-----
From: Franco Broi
You have only one file on the gluster volume: the 1 GB disk image/volume
that you created. This disk image is attached to the VM as a file
system, not the gluster volume. So whatever you do in the VM's file
system affects just that one disk image. The files, directories etc. you
created are inside the disk image.
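A quick way to confirm that, assuming the brick paths from the volume info earlier in the thread: list each brick directly on the two servers, and the single image file should show up on exactly one of them:

    # on primary and on secondary
    ls -lh /export/sdd1/brick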
Hi,
So, in which scenario do distributed volumes have files on both
bricks?
-----Original Message-----
From: Kaushal M [mailto:kshlms...@gmail.com]
Sent: Tuesday, June 03, 2014 1:19 PM
To: Gnan Kumar, Yalla
Cc: Franco Broi; gluster-users@gluster.org
Subject: Re: [Gluster-users]
On Tue, 2014-06-03 at 08:20 +0000, yalla.gnan.ku...@accenture.com
wrote:
Hi,
So, in which scenario do distributed volumes have files on both
bricks?
If you make more than 1 file.
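A minimal sketch of that, with a hypothetical mount point /mnt/dst: mount the volume natively and create several files, then check the bricks:

    mount -t glusterfs primary:/dst /mnt/dst
    for i in $(seq 1 10); do touch /mnt/dst/file$i; done
    # on each server: roughly half the files should land in each brick
    ls /export/sdd1/brick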
-----Original Message-----
From: Kaushal M [mailto:kshlms...@gmail.com]
Sent: Tuesday,
On 06/03/2014 01:50 PM, yalla.gnan.ku...@accenture.com wrote:
Hi,
So, in which scenario do distributed volumes have files on both
bricks?
Reading the documentation for various volume types [1] can be useful to
obtain answers for questions of this nature.
-Vijay
[1]
Hi All,
I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
It was a straightforward upgrade using Yum.
The OS is CentOS 6.3.
The main purpose of the upgrade was to get ACL Support on NFS exports.
But it doesn't seem to be working.
I mounted the gluster volume using the following
I guess Gluster 3.5 has fixed the NFS-ACL issues, and getfacl/setfacl
work there.
Regards,
Santosh
On 06/03/2014 05:10 PM, Indivar Nair wrote:
Hi All,
I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
It was a straightforward upgrade using Yum.
The OS is CentOS 6.3.
The main
Hi,
As Santosh mentioned, 3.5 supports POSIX ACL configuration through NFS
mounts, i.e. the setfacl and getfacl commands work through an NFS mount.
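A minimal sketch of that under 3.5, with hypothetical server, volume, path and user names (gluster's built-in NFS server speaks NFSv3):

    mount -t nfs -o vers=3 server1:/myvol /mnt/nfs
    setfacl -m u:alice:rw /mnt/nfs/shared/report.txt
    getfacl /mnt/nfs/shared/report.txt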
Also, can I do an in-place upgrade from 3.4 to 3.5 by just replacing the
Gluster RPMs?
This may help
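For a rough idea of what an in-place RPM upgrade can look like, one server at a time; a sketch only, assuming the gluster.org repo package names, and not the official procedure (check the 3.5 upgrade notes for that):

    service glusterd stop
    yum update glusterfs glusterfs-server glusterfs-fuse
    service glusterd start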
Thanks Humble.
On Tue, Jun 3, 2014 at 6:16 PM, Humble Devassy Chirammal
humble.deva...@gmail.com wrote:
Hi,
As Santosh mentioned, 3.5 supports POSIX ACL configuration through NFS
mounts, i.e. the setfacl and getfacl commands work through an NFS mount.
Also, can I do an in-place upgrade from 3.4
On 03/06/2014, at 3:14 AM, Pranith Kumar Karampuri wrote:
From: Andrew Lau and...@andrewklau.com
Sent: Tuesday, June 3, 2014 6:42:44 AM
snip
Ah, that makes sense as it was the only volume which had that ping
timeout setting. I also did see the timeout messages in the logs when
I was checking.
Hi List,
after updating from Ubuntu Precise to Ubuntu Trusty, my GlusterFS 3.4.2
clients show a lot of these warnings in the log:
[2014-06-03 11:11:24.266842] W
[client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-gv5-client-2:
remote operation failed: No data available
[2014-06-03
----- Original Message -----
From: Justin Clift jus...@gluster.org
To: Ben Turner btur...@redhat.com
Cc: James purplei...@gmail.com, gluster-users@gluster.org, Gluster
Devel gluster-de...@gluster.org
Sent: Thursday, May 29, 2014 6:12:40 PM
Subject: Re: [Gluster-users] [Gluster-devel] Need
gluster volume set volume-name cluster.self-heal-daemon off
would stop glustershd from performing automatic healing.
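To turn automatic healing back on later, set the same option to on:

    gluster volume set volume-name cluster.self-heal-daemon on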
Pranith
Hi,
Thanks for the tip. We'll try that and see if it helps.
Regards,
Laurent
On 03/06/2014, at 9:05 PM, Ben Turner wrote:
From: Justin Clift jus...@gluster.org
Sent: Thursday, May 29, 2014 6:12:40 PM
snip
Excellent Ben! Please send feedback to gluster-devel. :)
So far so good on 3.4.4, sorry for the delay here. I had to fix my
downstream test suites to run
Franco,
Thanks for providing the logs. I just copied over the logs to my
machine. Most of the logs I see are related to "No such file or
directory". I wonder what led to this. Do you have any idea?
Pranith
On 06/02/2014 02:48 PM, Franco Broi wrote:
Hi Pranith
Here's a listing of the
On Wed, 2014-06-04 at 07:28 +0530, Pranith Kumar Karampuri wrote:
Franco,
Thanks for providing the logs. I just copied over the logs to my
machine. Most of the logs I see are related to "No such file or
directory". I wonder what led to this. Do you have any idea?
No, but I'm just
This is a really good initiative, Lala.
Anything that helps Operations folks always gets my vote :)
I've added a few items to the etherpad.
Cheers,
PC
----- Original Message -----
From: Lalatendu Mohanty lmoha...@redhat.com
To: gluster-users@gluster.org, gluster-de...@gluster.org
Hi Franco,
CCing devs who work on DHT to comment.
Pranith
On 06/04/2014 07:39 AM, Franco Broi wrote:
On Wed, 2014-06-04 at 07:28 +0530, Pranith Kumar Karampuri wrote:
Franco,
Thanks for providing the logs. I just copied over the logs to my
machine. Most of the logs I see are related
On Tue, Jun 3, 2014 at 11:35 PM, Justin Clift jus...@gluster.org wrote:
On 03/06/2014, at 3:14 AM, Pranith Kumar Karampuri wrote:
From: Andrew Lau and...@andrewklau.com
Sent: Tuesday, June 3, 2014 6:42:44 AM
snip
Ah, that makes sense as it was the only volume which had that ping
timeout
On 06/04/2014 08:07 AM, Susant Palai wrote:
Pranith, can you send the client and brick logs?
I have the logs. But I believe, for this issue of a directory not listing
entries, it would help more if we had the contents of that directory on
all the bricks + their hash values in
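A sketch of how such a listing could be gathered, with a hypothetical brick path and directory name; the hash/layout values live in extended attributes on each brick's copy of the directory:

    # run as root on every brick server
    ls -la /export/sdd1/brick/problemdir
    getfattr -m . -d -e hex /export/sdd1/brick/problemdir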
On 06/04/2014 01:35 AM, Ben Turner wrote:
----- Original Message -----
From: Justin Clift jus...@gluster.org
To: Ben Turner btur...@redhat.com
Cc: James purplei...@gmail.com, gluster-users@gluster.org, Gluster Devel
gluster-de...@gluster.org
Sent: Thursday, May 29, 2014 6:12:40 PM
Subject:
From the logs it seems files are present on data(21,22,23,24), which are on
nas6, while missing on data(17,18,19,20), which are on nas5 (interesting). There
is an existing issue where directories do not show up on the mount point if they
are not present on the first_up_subvol (longest living brick) and