Re: [Gluster-users] glusterfsd process spinning

2014-06-03 Thread Susant Palai
From the logs it seems the files are present on data(21,22,23,24), which are on
nas6, while missing on data(17,18,19,20), which are on nas5 (interesting). There
is an existing issue where directories do not show up on the mount point if they
are not present on the first_up_subvol (the longest-living brick), and the current
issue looks similar. We will look at the client logs for more information.
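
For reference, presence of such a directory on each brick can be checked directly
on the servers with something like this (a sketch; brick paths are taken from the
volume info further down, and the directory path is only an example):

# on nas5
ls -ld /data{17..20}/gvol/franco/dir1226/dir25
# on nas6
ls -ld /data{21..24}/gvol/franco/dir1226/dir25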

Susant.

- Original Message -
From: "Franco Broi" 
To: "Pranith Kumar Karampuri" 
Cc: "Susant Palai" , gluster-users@gluster.org, "Raghavendra 
Gowdappa" , kdhan...@redhat.com, vsomy...@redhat.com, 
nbala...@redhat.com
Sent: Wednesday, 4 June, 2014 10:32:37 AM
Subject: Re: [Gluster-users] glusterfsd process spinning

On Wed, 2014-06-04 at 10:19 +0530, Pranith Kumar Karampuri wrote: 
> On 06/04/2014 08:07 AM, Susant Palai wrote:
> > Pranith can you send the client and bricks logs.
> I have the logs. But I believe that for this issue of a directory not listing 
> entries, it would help more if we had the contents of that directory on 
> all of the bricks, plus their hash values in the xattrs.
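
For reference, that information can be gathered on each server with something like
the following (a sketch; brick paths as in the volume info below, and the directory
path is only an example):

# on nas6 (and likewise for /data{17..20}/gvol on nas5)
for b in /data{21..24}/gvol; do
    echo "== $b =="
    ls -la $b/franco/dir1226/dir25
    # dump the directory's xattrs, including the trusted.glusterfs.dht layout range
    getfattr -d -m . -e hex $b/franco/dir1226/dir25
done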

Strange thing is, all the invisible files are on the one server (nas6),
the other seems ok. I did rm -Rf of /data2/franco/dir* and was left with
this one directory - there were many hundreds which were removed
successfully.

I've attached listings and xattr dumps.

Cheers,

Volume Name: data2
Type: Distribute
Volume ID: d958423f-bd25-49f1-81f8-f12e4edc6823
Status: Started
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: nas5-10g:/data17/gvol
Brick2: nas5-10g:/data18/gvol
Brick3: nas5-10g:/data19/gvol
Brick4: nas5-10g:/data20/gvol
Brick5: nas6-10g:/data21/gvol
Brick6: nas6-10g:/data22/gvol
Brick7: nas6-10g:/data23/gvol
Brick8: nas6-10g:/data24/gvol
Options Reconfigured:
nfs.drc: on
cluster.min-free-disk: 5%
network.frame-timeout: 10800
nfs.export-volumes: on
nfs.disable: on
cluster.readdir-optimize: on

Gluster process                             Port    Online  Pid
--
Brick nas5-10g:/data17/gvol 49152   Y   6553
Brick nas5-10g:/data18/gvol 49153   Y   6564
Brick nas5-10g:/data19/gvol 49154   Y   6575
Brick nas5-10g:/data20/gvol 49155   Y   6586
Brick nas6-10g:/data21/gvol 49160   Y   20608
Brick nas6-10g:/data22/gvol 49161   Y   20613
Brick nas6-10g:/data23/gvol 49162   Y   20614
Brick nas6-10g:/data24/gvol 49163   Y   20621
 
Task Status of Volume data2
--
There are no active volume tasks



> 
> Pranith
> >
> > Thanks,
> > Susant~
> >
> > - Original Message -
> > From: "Pranith Kumar Karampuri" 
> > To: "Franco Broi" 
> > Cc: gluster-users@gluster.org, "Raghavendra Gowdappa" 
> > , spa...@redhat.com, kdhan...@redhat.com, 
> > vsomy...@redhat.com, nbala...@redhat.com
> > Sent: Wednesday, 4 June, 2014 7:53:41 AM
> > Subject: Re: [Gluster-users] glusterfsd process spinning
> >
> > hi Franco,
> >CC Devs who work on DHT to comment.
> >
> > Pranith
> >
> > On 06/04/2014 07:39 AM, Franco Broi wrote:
> >> On Wed, 2014-06-04 at 07:28 +0530, Pranith Kumar Karampuri wrote:
> >>> Franco,
> >>>  Thanks for providing the logs. I just copied over the logs to my
> >>> machine. Most of the logs I see are related to "No such File or
> >>> Directory" I wonder what lead to this. Do you have any idea?
> >> No but I'm just looking at my 3.5 Gluster volume and it has a directory
> >> that looks empty but can't be deleted. When I look at the directories on
> >> the servers there are definitely files in there.
> >>
> >> [franco@charlie1 franco]$ rmdir /data2/franco/dir1226/dir25
> >> rmdir: failed to remove `/data2/franco/dir1226/dir25': Directory not empty
> >> [franco@charlie1 franco]$ ls -la  /data2/franco/dir1226/dir25
> >> total 8
> >> drwxrwxr-x 2 franco support 60 May 21 03:58 .
> >> drwxrwxr-x 3 franco support 24 Jun  4 09:37 ..
> >>
> >> [root@nas6 ~]# ls -la /data*/gvol/franco/dir1226/dir25
> >> /data21/gvol/franco/dir1226/dir25:
> >> total 2081
> >> drwxrwxr-x 13 1348 200 13 May 21 03:58 .
> >> drwxrwxr-x  3 1348 200  3 May 21 03:58 ..
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13017
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13018
> >> drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13020
> >> drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13021
> >> drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13022
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13024
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13027
> >> drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13028
> >> drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13029
> >> drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13031
> >> drwxrwxr-x  2 1348 200  

Re: [Gluster-users] [Gluster-devel] Need testers for GlusterFS 3.4.4

2014-06-03 Thread Pranith Kumar Karampuri


On 06/04/2014 01:35 AM, Ben Turner wrote:

- Original Message -

From: "Justin Clift" 
To: "Ben Turner" 
Cc: "James" , gluster-users@gluster.org, "Gluster Devel" 

Sent: Thursday, May 29, 2014 6:12:40 PM
Subject: Re: [Gluster-users] [Gluster-devel] Need testers for GlusterFS 3.4.4

On 29/05/2014, at 8:04 PM, Ben Turner wrote:

From: "James" 
Sent: Wednesday, May 28, 2014 5:21:21 PM
On Wed, May 28, 2014 at 5:02 PM, Justin Clift  wrote:

Hi all,

Are there any Community members around who can test the GlusterFS 3.4.4
beta (rpms are available)?

I've provided all the tools and how-to to do this yourself. Should
probably take about ~20 min.

Old example:

https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/

Same process should work, except base your testing on the latest
vagrant article:

https://ttboj.wordpress.com/2014/05/13/vagrant-on-fedora-with-libvirt-reprise/

If you haven't set it up already.

I can help out here, I'll have a chance to run through some stuff this
weekend.  Where should I post feedback?


Excellent Ben!  Please send feedback to gluster-devel. :)

So far so good on 3.4.4, sorry for the delay here.  I had to fix my downstream 
test suites to run outside of RHS / downstream gluster.  I did basic sanity 
testing on glusterfs mounts including:

FSSANITY_TEST_LIST: arequal bonnie glusterfs_build compile_kernel dbench dd 
ffsb fileop fsx fs_mark iozone locks ltp multiple_files posix_compliance 
postmark read_large rpc syscallbench tiobench

I am starting on NFS now, I'll have results tonight or tomorrow morning.  I'll 
look at updating the component scripts so they work, and run them as well.

Thanks a lot for this, Ben.

Justin, Ben,
 Do you think we can automate running of these scripts without a 
lot of human intervention? If yes, how can I help?


We could use that just before making any release in the future :-).

Pranith



-b
  

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfsd process spinning

2014-06-03 Thread Franco Broi
On Wed, 2014-06-04 at 10:19 +0530, Pranith Kumar Karampuri wrote: 
> On 06/04/2014 08:07 AM, Susant Palai wrote:
> > Pranith can you send the client and bricks logs.
> I have the logs. But I believe that for this issue of a directory not listing 
> entries, it would help more if we had the contents of that directory on 
> all of the bricks, plus their hash values in the xattrs.

Strange thing is, all the invisible files are on the one server (nas6),
the other seems ok. I did rm -Rf of /data2/franco/dir* and was left with
this one directory - there were many hundreds which were removed
successfully.

I've attached listings and xattr dumps.

Cheers,

Volume Name: data2
Type: Distribute
Volume ID: d958423f-bd25-49f1-81f8-f12e4edc6823
Status: Started
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: nas5-10g:/data17/gvol
Brick2: nas5-10g:/data18/gvol
Brick3: nas5-10g:/data19/gvol
Brick4: nas5-10g:/data20/gvol
Brick5: nas6-10g:/data21/gvol
Brick6: nas6-10g:/data22/gvol
Brick7: nas6-10g:/data23/gvol
Brick8: nas6-10g:/data24/gvol
Options Reconfigured:
nfs.drc: on
cluster.min-free-disk: 5%
network.frame-timeout: 10800
nfs.export-volumes: on
nfs.disable: on
cluster.readdir-optimize: on

Gluster process                             Port    Online  Pid
--
Brick nas5-10g:/data17/gvol 49152   Y   6553
Brick nas5-10g:/data18/gvol 49153   Y   6564
Brick nas5-10g:/data19/gvol 49154   Y   6575
Brick nas5-10g:/data20/gvol 49155   Y   6586
Brick nas6-10g:/data21/gvol 49160   Y   20608
Brick nas6-10g:/data22/gvol 49161   Y   20613
Brick nas6-10g:/data23/gvol 49162   Y   20614
Brick nas6-10g:/data24/gvol 49163   Y   20621
 
Task Status of Volume data2
--
There are no active volume tasks



> 
> Pranith
> >
> > Thanks,
> > Susant~
> >
> > - Original Message -
> > From: "Pranith Kumar Karampuri" 
> > To: "Franco Broi" 
> > Cc: gluster-users@gluster.org, "Raghavendra Gowdappa" 
> > , spa...@redhat.com, kdhan...@redhat.com, 
> > vsomy...@redhat.com, nbala...@redhat.com
> > Sent: Wednesday, 4 June, 2014 7:53:41 AM
> > Subject: Re: [Gluster-users] glusterfsd process spinning
> >
> > hi Franco,
> >CC Devs who work on DHT to comment.
> >
> > Pranith
> >
> > On 06/04/2014 07:39 AM, Franco Broi wrote:
> >> On Wed, 2014-06-04 at 07:28 +0530, Pranith Kumar Karampuri wrote:
> >>> Franco,
> >>>  Thanks for providing the logs. I just copied over the logs to my
> >>> machine. Most of the logs I see are related to "No such File or
> >>> Directory" I wonder what lead to this. Do you have any idea?
> >> No but I'm just looking at my 3.5 Gluster volume and it has a directory
> >> that looks empty but can't be deleted. When I look at the directories on
> >> the servers there are definitely files in there.
> >>
> >> [franco@charlie1 franco]$ rmdir /data2/franco/dir1226/dir25
> >> rmdir: failed to remove `/data2/franco/dir1226/dir25': Directory not empty
> >> [franco@charlie1 franco]$ ls -la  /data2/franco/dir1226/dir25
> >> total 8
> >> drwxrwxr-x 2 franco support 60 May 21 03:58 .
> >> drwxrwxr-x 3 franco support 24 Jun  4 09:37 ..
> >>
> >> [root@nas6 ~]# ls -la /data*/gvol/franco/dir1226/dir25
> >> /data21/gvol/franco/dir1226/dir25:
> >> total 2081
> >> drwxrwxr-x 13 1348 200 13 May 21 03:58 .
> >> drwxrwxr-x  3 1348 200  3 May 21 03:58 ..
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13017
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13018
> >> drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13020
> >> drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13021
> >> drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13022
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13024
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13027
> >> drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13028
> >> drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13029
> >> drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13031
> >> drwxrwxr-x  2 1348 200  3 May 16 12:06 dir13032
> >>
> >> /data22/gvol/franco/dir1226/dir25:
> >> total 2084
> >> drwxrwxr-x 13 1348 200 13 May 21 03:58 .
> >> drwxrwxr-x  3 1348 200  3 May 21 03:58 ..
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13017
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13018
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13020
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13021
> >> drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13022
> >> .
> >>
> >> Maybe Gluster is losing track of the files??
> >>
> >>> Pranith
> >>>
> >>> On 06/02/2014 02:48 PM, Franco Broi wrote:
>  Hi Pranith
> 
>  Here's a listing of the brick logs, looks very odd especially the size
>  of the log for data10.
> 
> 

Re: [Gluster-users] glusterfsd process spinning

2014-06-03 Thread Pranith Kumar Karampuri


On 06/04/2014 08:07 AM, Susant Palai wrote:

Pranith can you send the client and bricks logs.
I have the logs. But I believe that for this issue of a directory not listing 
entries, it would help more if we had the contents of that directory on 
all of the bricks, plus their hash values in the xattrs.


Pranith


Thanks,
Susant~

- Original Message -
From: "Pranith Kumar Karampuri" 
To: "Franco Broi" 
Cc: gluster-users@gluster.org, "Raghavendra Gowdappa" , 
spa...@redhat.com, kdhan...@redhat.com, vsomy...@redhat.com, nbala...@redhat.com
Sent: Wednesday, 4 June, 2014 7:53:41 AM
Subject: Re: [Gluster-users] glusterfsd process spinning

hi Franco,
   CC Devs who work on DHT to comment.

Pranith

On 06/04/2014 07:39 AM, Franco Broi wrote:

On Wed, 2014-06-04 at 07:28 +0530, Pranith Kumar Karampuri wrote:

Franco,
 Thanks for providing the logs. I just copied over the logs to my
machine. Most of the logs I see are related to "No such File or
Directory" I wonder what lead to this. Do you have any idea?

No but I'm just looking at my 3.5 Gluster volume and it has a directory
that looks empty but can't be deleted. When I look at the directories on
the servers there are definitely files in there.

[franco@charlie1 franco]$ rmdir /data2/franco/dir1226/dir25
rmdir: failed to remove `/data2/franco/dir1226/dir25': Directory not empty
[franco@charlie1 franco]$ ls -la  /data2/franco/dir1226/dir25
total 8
drwxrwxr-x 2 franco support 60 May 21 03:58 .
drwxrwxr-x 3 franco support 24 Jun  4 09:37 ..

[root@nas6 ~]# ls -la /data*/gvol/franco/dir1226/dir25
/data21/gvol/franco/dir1226/dir25:
total 2081
drwxrwxr-x 13 1348 200 13 May 21 03:58 .
drwxrwxr-x  3 1348 200  3 May 21 03:58 ..
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13017
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13018
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13020
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13021
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13022
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13024
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13027
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13028
drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13029
drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13031
drwxrwxr-x  2 1348 200  3 May 16 12:06 dir13032

/data22/gvol/franco/dir1226/dir25:
total 2084
drwxrwxr-x 13 1348 200 13 May 21 03:58 .
drwxrwxr-x  3 1348 200  3 May 21 03:58 ..
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13017
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13018
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13020
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13021
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13022
.

Maybe Gluster is losing track of the files??


Pranith

On 06/02/2014 02:48 PM, Franco Broi wrote:

Hi Pranith

Here's a listing of the brick logs, looks very odd especially the size
of the log for data10.

[root@nas3 bricks]# ls -ltrh
total 2.6G
-rw------- 1 root root 381K May 13 12:15 data12-gvol.log-20140511
-rw------- 1 root root 430M May 13 12:15 data11-gvol.log-20140511
-rw------- 1 root root 328K May 13 12:15 data9-gvol.log-20140511
-rw------- 1 root root 2.0M May 13 12:15 data10-gvol.log-20140511
-rw------- 1 root root    0 May 18 03:43 data10-gvol.log-20140525
-rw------- 1 root root    0 May 18 03:43 data11-gvol.log-20140525
-rw------- 1 root root    0 May 18 03:43 data12-gvol.log-20140525
-rw------- 1 root root    0 May 18 03:43 data9-gvol.log-20140525
-rw------- 1 root root    0 May 25 03:19 data10-gvol.log-20140601
-rw------- 1 root root    0 May 25 03:19 data11-gvol.log-20140601
-rw------- 1 root root    0 May 25 03:19 data9-gvol.log-20140601
-rw------- 1 root root  98M May 26 03:04 data12-gvol.log-20140518
-rw------- 1 root root    0 Jun  1 03:37 data10-gvol.log
-rw------- 1 root root    0 Jun  1 03:37 data11-gvol.log
-rw------- 1 root root    0 Jun  1 03:37 data12-gvol.log
-rw------- 1 root root    0 Jun  1 03:37 data9-gvol.log
-rw------- 1 root root 1.8G Jun  2 16:35 data10-gvol.log-20140518
-rw------- 1 root root 279M Jun  2 16:35 data9-gvol.log-20140518
-rw------- 1 root root 328K Jun  2 16:35 data12-gvol.log-20140601
-rw------- 1 root root 8.3M Jun  2 16:35 data11-gvol.log-20140518

Too big to post everything.
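
(For what it's worth, a log that size can be trimmed down before posting with
something like the following -- a sketch, path assumed from the prompt in the
listing above:)

cd /var/log/glusterfs/bricks
# keep only warning/error/critical lines and compress the result
egrep '\] (W|E|C) \[' data10-gvol.log-20140518 | gzip > /tmp/data10-gvol-errors.log.gz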

Cheers,

On Sun, 2014-06-01 at 22:00 -0400, Pranith Kumar Karampuri wrote:

- Original Message -

From: "Pranith Kumar Karampuri" 
To: "Franco Broi" 
Cc: gluster-users@gluster.org
Sent: Monday, June 2, 2014 7:01:34 AM
Subject: Re: [Gluster-users] glusterfsd process spinning



- Original Message -

From: "Franco Broi" 
To: "Pranith Kumar Karampuri" 
Cc: gluster-users@gluster.org
Sent: Sunday, June 1, 2014 10:53:51 AM
Subject: Re: [Gluster-users] glusterfsd process spinning


The volume is almost completely idle now and the CPU for the brick
process has returned to normal. I've included the profile and I think it
shows the latency for the bad brick (data12) is unusually high, probably
indicating the filesystem is at fault after all??

I am not sure if we can believe the outputs now that you sa

Re: [Gluster-users] Brick on just one host constantly going offline

2014-06-03 Thread Andrew Lau
On Tue, Jun 3, 2014 at 11:35 PM, Justin Clift  wrote:
> On 03/06/2014, at 3:14 AM, Pranith Kumar Karampuri wrote:
>>> From: "Andrew Lau" 
>>> Sent: Tuesday, June 3, 2014 6:42:44 AM
> 
>>> Ah, that makes sense as it was the only volume which had that ping
>>> timeout setting. I also did see the timeout messages in the logs when
>>> I was checking. So is this merged in 3.5.1 ?
>>
>> Yes! http://review.gluster.org/7570
>
>
> Would you have time/inclination to test the 3.5.1 beta?
>
>   http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta1/
>
> We're really needing people to test it and report back. :)
>

I'll take it for a spin, will it be an easy process to upgrade to the
GA release from beta? Contemplating whether it's worth living life on
the edge and deploying with beta :)

> Regards and best wishes,
>
> Justin Clift
>
> --
> Open Source and Standards @ Red Hat
>
> twitter.com/realjustinclift
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfsd process spinning

2014-06-03 Thread Pranith Kumar Karampuri

hi Franco,
 CC Devs who work on DHT to comment.

Pranith

On 06/04/2014 07:39 AM, Franco Broi wrote:

On Wed, 2014-06-04 at 07:28 +0530, Pranith Kumar Karampuri wrote:

Franco,
Thanks for providing the logs. I just copied over the logs to my
machine. Most of the logs I see are related to "No such File or
Directory" I wonder what lead to this. Do you have any idea?

No but I'm just looking at my 3.5 Gluster volume and it has a directory
that looks empty but can't be deleted. When I look at the directories on
the servers there are definitely files in there.

[franco@charlie1 franco]$ rmdir /data2/franco/dir1226/dir25
rmdir: failed to remove `/data2/franco/dir1226/dir25': Directory not empty
[franco@charlie1 franco]$ ls -la  /data2/franco/dir1226/dir25
total 8
drwxrwxr-x 2 franco support 60 May 21 03:58 .
drwxrwxr-x 3 franco support 24 Jun  4 09:37 ..

[root@nas6 ~]# ls -la /data*/gvol/franco/dir1226/dir25
/data21/gvol/franco/dir1226/dir25:
total 2081
drwxrwxr-x 13 1348 200 13 May 21 03:58 .
drwxrwxr-x  3 1348 200  3 May 21 03:58 ..
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13017
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13018
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13020
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13021
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13022
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13024
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13027
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13028
drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13029
drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13031
drwxrwxr-x  2 1348 200  3 May 16 12:06 dir13032

/data22/gvol/franco/dir1226/dir25:
total 2084
drwxrwxr-x 13 1348 200 13 May 21 03:58 .
drwxrwxr-x  3 1348 200  3 May 21 03:58 ..
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13017
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13018
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13020
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13021
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13022
.

Maybe Gluster is losing track of the files??


Pranith

On 06/02/2014 02:48 PM, Franco Broi wrote:

Hi Pranith

Here's a listing of the brick logs, looks very odd especially the size
of the log for data10.

[root@nas3 bricks]# ls -ltrh
total 2.6G
-rw------- 1 root root 381K May 13 12:15 data12-gvol.log-20140511
-rw------- 1 root root 430M May 13 12:15 data11-gvol.log-20140511
-rw------- 1 root root 328K May 13 12:15 data9-gvol.log-20140511
-rw------- 1 root root 2.0M May 13 12:15 data10-gvol.log-20140511
-rw------- 1 root root    0 May 18 03:43 data10-gvol.log-20140525
-rw------- 1 root root    0 May 18 03:43 data11-gvol.log-20140525
-rw------- 1 root root    0 May 18 03:43 data12-gvol.log-20140525
-rw------- 1 root root    0 May 18 03:43 data9-gvol.log-20140525
-rw------- 1 root root    0 May 25 03:19 data10-gvol.log-20140601
-rw------- 1 root root    0 May 25 03:19 data11-gvol.log-20140601
-rw------- 1 root root    0 May 25 03:19 data9-gvol.log-20140601
-rw------- 1 root root  98M May 26 03:04 data12-gvol.log-20140518
-rw------- 1 root root    0 Jun  1 03:37 data10-gvol.log
-rw------- 1 root root    0 Jun  1 03:37 data11-gvol.log
-rw------- 1 root root    0 Jun  1 03:37 data12-gvol.log
-rw------- 1 root root    0 Jun  1 03:37 data9-gvol.log
-rw------- 1 root root 1.8G Jun  2 16:35 data10-gvol.log-20140518
-rw------- 1 root root 279M Jun  2 16:35 data9-gvol.log-20140518
-rw------- 1 root root 328K Jun  2 16:35 data12-gvol.log-20140601
-rw------- 1 root root 8.3M Jun  2 16:35 data11-gvol.log-20140518

Too big to post everything.

Cheers,

On Sun, 2014-06-01 at 22:00 -0400, Pranith Kumar Karampuri wrote:

- Original Message -

From: "Pranith Kumar Karampuri" 
To: "Franco Broi" 
Cc: gluster-users@gluster.org
Sent: Monday, June 2, 2014 7:01:34 AM
Subject: Re: [Gluster-users] glusterfsd process spinning



- Original Message -

From: "Franco Broi" 
To: "Pranith Kumar Karampuri" 
Cc: gluster-users@gluster.org
Sent: Sunday, June 1, 2014 10:53:51 AM
Subject: Re: [Gluster-users] glusterfsd process spinning


The volume is almost completely idle now and the CPU for the brick
process has returned to normal. I've included the profile and I think it
shows the latency for the bad brick (data12) is unusually high, probably
indicating the filesystem is at fault after all??

I am not sure if we can believe the outputs now that you say the brick
returned to normal. Next time it is acting up, do the same procedure and
post the result.

On second thought, maybe it's not a bad idea to inspect the log files of the 
bricks on nas3. Could you post them?

Pranith


Pranith

On Sun, 2014-06-01 at 01:01 -0400, Pranith Kumar Karampuri wrote:

Franco,
  Could you do the following to get more information:

"gluster volume profile  start"

Wait for some time; this will start gathering what operations are coming
to all the bricks.
Now execute "gluster volume profile  info" >
/file/you/should/reply/to/this/mail/with

Then execute:
gluster volume profile  stop

Let's see if this throws any light

Re: [Gluster-users] [Gluster-devel] [RFC] GlusterFS Operations Guide

2014-06-03 Thread Paul Cuzner
This is a really good initiative, Lala.

Anything that helps Operations folks always gets my vote :) 

I've added a few items to the etherpad. 

Cheers, 

PC 

- Original Message -

> From: "Lalatendu Mohanty" 
> To: gluster-users@gluster.org, gluster-de...@gluster.org
> Sent: Friday, 30 May, 2014 11:33:35 PM
> Subject: Re: [Gluster-users] [Gluster-devel] [RFC] GlusterFS Operations Guide

> On 05/30/2014 04:50 PM, Lalatendu Mohanty wrote:

> > I think it is time to create an operations/ops guide for GlusterFS.
> > The operations guide should address issues which administrators face while
> > running/maintaining GlusterFS storage nodes. The OpenStack project has an
> > operations guide [2] which tries to address similar issues and is pretty
> > cool.
> 

> > IMO these are typical/example questions which the operations guide should
> > try to address.
> 

> > * Maintenance, Failures, and Debugging
> 

> > > * What are the steps for planned maintenance of a GlusterFS node?
> > 
> 

> > > * Steps for replacing a failed node?
> > 
> 

> > > * Steps to decommission a brick?
> > 
> 

> > * Logging and Monitoring
> 

> > > * Where are the log files?
> > 
> 

> > > * How to find out if self-heal is working properly?
> > 
> 

> > > * Which log files to monitor for detecting failures?
> > 
> 

> > The operations guide needs a good amount of work, hence we all need to come
> > together for this. You can contribute to it by either of the following:
> 

> > * Let others know about the questions you want answered in the
> > operations guide. (I have set up an etherpad for this [1])
> 
> > * Answer the questions/issues raised by others.
> 

> > Comments, suggestions?
> 
> > Should this be part of the gluster code base, i.e. /doc, or somewhere else?
> 

> > [1] http://titanpad.com/op-guide
> 
> > [2] http://docs.openstack.org/ops/oreilly-openstack-ops-guide.pdf
> 

> > Thanks,
> 
> > Lala
> 
> > #lalatenduM on freenode
> 

> > ___
> 
> > Gluster-devel mailing list gluster-de...@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 

> Resending after fixing a few typos.

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfsd process spinning

2014-06-03 Thread Franco Broi
On Wed, 2014-06-04 at 07:28 +0530, Pranith Kumar Karampuri wrote: 
> Franco,
>Thanks for providing the logs. I just copied over the logs to my 
> machine. Most of the logs I see are related to "No such File or 
> Directory" I wonder what lead to this. Do you have any idea?

No but I'm just looking at my 3.5 Gluster volume and it has a directory
that looks empty but can't be deleted. When I look at the directories on
the servers there are definitely files in there.

[franco@charlie1 franco]$ rmdir /data2/franco/dir1226/dir25
rmdir: failed to remove `/data2/franco/dir1226/dir25': Directory not empty
[franco@charlie1 franco]$ ls -la  /data2/franco/dir1226/dir25
total 8
drwxrwxr-x 2 franco support 60 May 21 03:58 .
drwxrwxr-x 3 franco support 24 Jun  4 09:37 ..

[root@nas6 ~]# ls -la /data*/gvol/franco/dir1226/dir25
/data21/gvol/franco/dir1226/dir25:
total 2081
drwxrwxr-x 13 1348 200 13 May 21 03:58 .
drwxrwxr-x  3 1348 200  3 May 21 03:58 ..
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13017
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13018
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13020
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13021
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13022
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13024
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13027
drwxrwxr-x  2 1348 200  3 May 16 12:05 dir13028
drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13029
drwxrwxr-x  2 1348 200  2 May 16 12:06 dir13031
drwxrwxr-x  2 1348 200  3 May 16 12:06 dir13032

/data22/gvol/franco/dir1226/dir25:
total 2084
drwxrwxr-x 13 1348 200 13 May 21 03:58 .
drwxrwxr-x  3 1348 200  3 May 21 03:58 ..
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13017
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13018
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13020
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13021
drwxrwxr-x  2 1348 200  2 May 16 12:05 dir13022
.

Maybe Gluster is losing track of the files??

> 
> Pranith
> 
> On 06/02/2014 02:48 PM, Franco Broi wrote:
> > Hi Pranith
> >
> > Here's a listing of the brick logs, looks very odd especially the size
> > of the log for data10.
> >
> > [root@nas3 bricks]# ls -ltrh
> > total 2.6G
> > -rw------- 1 root root 381K May 13 12:15 data12-gvol.log-20140511
> > -rw------- 1 root root 430M May 13 12:15 data11-gvol.log-20140511
> > -rw------- 1 root root 328K May 13 12:15 data9-gvol.log-20140511
> > -rw------- 1 root root 2.0M May 13 12:15 data10-gvol.log-20140511
> > -rw------- 1 root root    0 May 18 03:43 data10-gvol.log-20140525
> > -rw------- 1 root root    0 May 18 03:43 data11-gvol.log-20140525
> > -rw------- 1 root root    0 May 18 03:43 data12-gvol.log-20140525
> > -rw------- 1 root root    0 May 18 03:43 data9-gvol.log-20140525
> > -rw------- 1 root root    0 May 25 03:19 data10-gvol.log-20140601
> > -rw------- 1 root root    0 May 25 03:19 data11-gvol.log-20140601
> > -rw------- 1 root root    0 May 25 03:19 data9-gvol.log-20140601
> > -rw------- 1 root root  98M May 26 03:04 data12-gvol.log-20140518
> > -rw------- 1 root root    0 Jun  1 03:37 data10-gvol.log
> > -rw------- 1 root root    0 Jun  1 03:37 data11-gvol.log
> > -rw------- 1 root root    0 Jun  1 03:37 data12-gvol.log
> > -rw------- 1 root root    0 Jun  1 03:37 data9-gvol.log
> > -rw------- 1 root root 1.8G Jun  2 16:35 data10-gvol.log-20140518
> > -rw------- 1 root root 279M Jun  2 16:35 data9-gvol.log-20140518
> > -rw------- 1 root root 328K Jun  2 16:35 data12-gvol.log-20140601
> > -rw------- 1 root root 8.3M Jun  2 16:35 data11-gvol.log-20140518
> >
> > Too big to post everything.
> >
> > Cheers,
> >
> > On Sun, 2014-06-01 at 22:00 -0400, Pranith Kumar Karampuri wrote:
> >> - Original Message -
> >>> From: "Pranith Kumar Karampuri" 
> >>> To: "Franco Broi" 
> >>> Cc: gluster-users@gluster.org
> >>> Sent: Monday, June 2, 2014 7:01:34 AM
> >>> Subject: Re: [Gluster-users] glusterfsd process spinning
> >>>
> >>>
> >>>
> >>> - Original Message -
>  From: "Franco Broi" 
>  To: "Pranith Kumar Karampuri" 
>  Cc: gluster-users@gluster.org
>  Sent: Sunday, June 1, 2014 10:53:51 AM
>  Subject: Re: [Gluster-users] glusterfsd process spinning
> 
> 
>  The volume is almost completely idle now and the CPU for the brick
>  process has returned to normal. I've included the profile and I think it
>  shows the latency for the bad brick (data12) is unusually high, probably
>  indicating the filesystem is at fault after all??
> >>> I am not sure if we can believe the outputs now that you say the brick
> >>> returned to normal. Next time it is acting up, do the same procedure and
> >>> post the result.
> >> On second thought, maybe it's not a bad idea to inspect the log files of 
> >> the bricks on nas3. Could you post them?
> >>
> >> Pranith
> >>
> >>> Pranith
>  On Sun, 2014-06-01 at 01:01 -0400, Pranith Kumar Karampuri wrote:
> > Franco,
> >  Could you do the following to get more information:
> >
> > "gluster volume profile  start"
> >
> 

Re: [Gluster-users] glusterfsd process spinning

2014-06-03 Thread Pranith Kumar Karampuri

Franco,
  Thanks for providing the logs. I just copied over the logs to my 
machine. Most of the logs I see are related to "No such File or 
Directory" I wonder what lead to this. Do you have any idea?


Pranith

On 06/02/2014 02:48 PM, Franco Broi wrote:

Hi Pranith

Here's a listing of the brick logs, looks very odd especially the size
of the log for data10.

[root@nas3 bricks]# ls -ltrh
total 2.6G
-rw------- 1 root root 381K May 13 12:15 data12-gvol.log-20140511
-rw------- 1 root root 430M May 13 12:15 data11-gvol.log-20140511
-rw------- 1 root root 328K May 13 12:15 data9-gvol.log-20140511
-rw------- 1 root root 2.0M May 13 12:15 data10-gvol.log-20140511
-rw------- 1 root root    0 May 18 03:43 data10-gvol.log-20140525
-rw------- 1 root root    0 May 18 03:43 data11-gvol.log-20140525
-rw------- 1 root root    0 May 18 03:43 data12-gvol.log-20140525
-rw------- 1 root root    0 May 18 03:43 data9-gvol.log-20140525
-rw------- 1 root root    0 May 25 03:19 data10-gvol.log-20140601
-rw------- 1 root root    0 May 25 03:19 data11-gvol.log-20140601
-rw------- 1 root root    0 May 25 03:19 data9-gvol.log-20140601
-rw------- 1 root root  98M May 26 03:04 data12-gvol.log-20140518
-rw------- 1 root root    0 Jun  1 03:37 data10-gvol.log
-rw------- 1 root root    0 Jun  1 03:37 data11-gvol.log
-rw------- 1 root root    0 Jun  1 03:37 data12-gvol.log
-rw------- 1 root root    0 Jun  1 03:37 data9-gvol.log
-rw------- 1 root root 1.8G Jun  2 16:35 data10-gvol.log-20140518
-rw------- 1 root root 279M Jun  2 16:35 data9-gvol.log-20140518
-rw------- 1 root root 328K Jun  2 16:35 data12-gvol.log-20140601
-rw------- 1 root root 8.3M Jun  2 16:35 data11-gvol.log-20140518

Too big to post everything.

Cheers,

On Sun, 2014-06-01 at 22:00 -0400, Pranith Kumar Karampuri wrote:

- Original Message -

From: "Pranith Kumar Karampuri" 
To: "Franco Broi" 
Cc: gluster-users@gluster.org
Sent: Monday, June 2, 2014 7:01:34 AM
Subject: Re: [Gluster-users] glusterfsd process spinning



- Original Message -

From: "Franco Broi" 
To: "Pranith Kumar Karampuri" 
Cc: gluster-users@gluster.org
Sent: Sunday, June 1, 2014 10:53:51 AM
Subject: Re: [Gluster-users] glusterfsd process spinning


The volume is almost completely idle now and the CPU for the brick
process has returned to normal. I've included the profile and I think it
shows the latency for the bad brick (data12) is unusually high, probably
indicating the filesystem is at fault after all??

I am not sure if we can believe the outputs now that you say the brick
returned to normal. Next time it is acting up, do the same procedure and
post the result.

On second thought, maybe it's not a bad idea to inspect the log files of the 
bricks on nas3. Could you post them?

Pranith


Pranith

On Sun, 2014-06-01 at 01:01 -0400, Pranith Kumar Karampuri wrote:

Franco,
 Could you do the following to get more information:

"gluster volume profile  start"

Wait for some time; this will start gathering what operations are coming
to all the bricks.
Now execute "gluster volume profile  info" >
/file/you/should/reply/to/this/mail/with

Then execute:
gluster volume profile  stop

Let's see if this throws any light on the problem at hand.
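
As a concrete sketch (with a made-up volume name, myvol):

gluster volume profile myvol start
sleep 600                                  # let some I/O accumulate
gluster volume profile myvol info > /tmp/myvol-profile.txt
gluster volume profile myvol stop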

Pranith
- Original Message -

From: "Franco Broi" 
To: gluster-users@gluster.org
Sent: Sunday, June 1, 2014 9:02:48 AM
Subject: [Gluster-users] glusterfsd process spinning

Hi

I've been suffering from continual problems with my gluster filesystem
slowing down due to what I thought was congestion on a single brick
being caused by a problem with the underlying filesystem running slow
but I've just noticed that the glusterfsd process for that particular
brick is running at 100%+, even when the filesystem is almost idle.

I've done a couple of straces of the brick and another on the same
server, does the high number of futex errors give any clues as to what
might be wrong?

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 45.58    0.027554           0    191665     20772 futex
 28.26    0.017084           0    137133           readv
 26.04    0.015743           0     66259           epoll_wait
  0.13    0.000077           3        23           writev
  0.00    0.000000           0         1           epoll_ctl
------ ----------- ----------- --------- --------- ----------------
100.00    0.060458                395081     20772 total

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 99.25    0.334020         133      2516           epoll_wait
  0.40    0.001347           0      4090        26 futex
  0.35    0.001192           0      5064           readv
  0.00    0.000000           0        20           writev
------ ----------- ----------- --------- --------- ----------------
100.00    0.336559                 11690        26 total
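
(For reference, a summary like the above can be captured with something along
these lines -- the brick pid is the one shown by 'gluster volume status':)

strace -c -f -p <pid-of-glusterfsd>
# let it run for a minute or so under load, then Ctrl-C to detach;
# strace then prints the per-syscall call/error counts shown above.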



Cheers,

___

Re: [Gluster-users] [Gluster-devel] Need testers for GlusterFS 3.4.4

2014-06-03 Thread Justin Clift
On 03/06/2014, at 9:05 PM, Ben Turner wrote:
>> From: "Justin Clift" 
>> Sent: Thursday, May 29, 2014 6:12:40 PM

>> Excellent Ben!  Please send feedback to gluster-devel. :)
> 
> So far so good on 3.4.4, sorry for the delay here.  I had to fix my 
> downstream test suites to run outside of RHS / downstream gluster.  I did 
> basic sanity testing on glusterfs mounts including:
> 
> FSSANITY_TEST_LIST: arequal bonnie glusterfs_build compile_kernel dbench dd 
> ffsb fileop fsx fs_mark iozone locks ltp multiple_files posix_compliance 
> postmark read_large rpc syscallbench tiobench
> 
> I am starting on NFS now, I'll have results tonight or tomorrow morning.  
> I'll look at updating the component scripts so they work, and run them as well.


Thanks Ben. :)

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Unavailability during self-heal for large volumes

2014-06-03 Thread Laurent Chouinard
> gluster volume set  cluster.self-heal-daemon off
> would disable glustershd from performing automatic healing.
> 
> Pranith
 
Hi,

Thanks for the tip. We'll try that and see if it helps.
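
For reference, the relevant commands (volume name left as a placeholder):

gluster volume set <volname> cluster.self-heal-daemon off   # pause automatic self-heal
gluster volume set <volname> cluster.self-heal-daemon on    # re-enable it later
gluster volume heal <volname> full                          # optionally trigger a full heal afterwards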

Regards,

Laurent
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Need testers for GlusterFS 3.4.4

2014-06-03 Thread Ben Turner
- Original Message -
> From: "Justin Clift" 
> To: "Ben Turner" 
> Cc: "James" , gluster-users@gluster.org, "Gluster 
> Devel" 
> Sent: Thursday, May 29, 2014 6:12:40 PM
> Subject: Re: [Gluster-users] [Gluster-devel] Need testers for GlusterFS 3.4.4
> 
> On 29/05/2014, at 8:04 PM, Ben Turner wrote:
> >> From: "James" 
> >> Sent: Wednesday, May 28, 2014 5:21:21 PM
> >> On Wed, May 28, 2014 at 5:02 PM, Justin Clift  wrote:
> >>> Hi all,
> >>> 
> >>> Are there any Community members around who can test the GlusterFS 3.4.4
> >>> beta (rpms are available)?
> >> 
> >> I've provided all the tools and how-to to do this yourself. Should
> >> probably take about ~20 min.
> >> 
> >> Old example:
> >> 
> >> https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
> >> 
> >> Same process should work, except base your testing on the latest
> >> vagrant article:
> >> 
> >> https://ttboj.wordpress.com/2014/05/13/vagrant-on-fedora-with-libvirt-reprise/
> >> 
> >> If you haven't set it up already.
> > 
> > I can help out here, I'll have a chance to run through some stuff this
> > weekend.  Where should I post feedback?
> 
> 
> Excellent Ben!  Please send feedback to gluster-devel. :)

So far so good on 3.4.4, sorry for the delay here.  I had to fix my downstream 
test suites to run outside of RHS / downstream gluster.  I did basic sanity 
testing on glusterfs mounts including:

FSSANITY_TEST_LIST: arequal bonnie glusterfs_build compile_kernel dbench dd 
ffsb fileop fsx fs_mark iozone locks ltp multiple_files posix_compliance 
postmark read_large rpc syscallbench tiobench

I am starting on NFS now, I'll have results tonight or tomorrow morning.  I'll 
look at updating the component scripts so they work, and run them as well.

-b
 
> + Justin
> 
> --
> Open Source and Standards @ Red Hat
> 
> twitter.com/realjustinclift
> 
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] REMOVEXATTR warnings in client log

2014-06-03 Thread André Bauer
Hi List,

after updating from Ubuntu Precise to Ubuntu Trusty, my GlusterFS 3.4.2
clients have a lot of these warnings in the log:

[2014-06-03 11:11:24.266842] W
[client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-gv5-client-2:
remote operation failed: No data available
[2014-06-03 11:11:24.266891] W
[client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-gv5-client-3:
remote operation failed: No data available
[2014-06-03 11:11:24.267277] W [fuse-bridge.c:1172:fuse_err_cbk]
0-glusterfs-fuse: 49105: REMOVEXATTR() /2014-06/myfile.zip => -1 (No
data available)

The log fills with these warnings when files are created.

Is this a big problem?

Everything seems to work fine so far...



-- 
Mit freundlichen Grüßen

André Bauer

MAGIX Software GmbH
André Bauer
Administrator
Postfach 200914
01194 Dresden

Tel. Support Deutschland: 0900/1771115 (1,24 Euro/Min.)
Tel. Support Österreich:  0900/454571 (1,56 Euro/Min.)
Tel. Support Schweiz: 0900/454571 (1,50 CHF/Min.)

Email: mailto:aba...@magix.net
Web:   http://www.magix.com

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Erhard Rein,
Michael Keith, Tilman Herberger
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful.

MAGIX does not warrant that any attachments are free from viruses or
other defects and accepts no liability for any losses resulting from
infected email transmissions. Please note that any views expressed in
this email may be those of the originator and do not necessarily
represent the agenda of the company.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Brick on just one host constantly going offline

2014-06-03 Thread Justin Clift
On 03/06/2014, at 3:14 AM, Pranith Kumar Karampuri wrote:
>> From: "Andrew Lau" 
>> Sent: Tuesday, June 3, 2014 6:42:44 AM

>> Ah, that makes sense as it was the only volume which had that ping
>> timeout setting. I also did see the timeout messages in the logs when
>> I was checking. So is this merged in 3.5.1 ?
> 
> Yes! http://review.gluster.org/7570


Would you have time/inclination to test the 3.5.1 beta?

  http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta1/

We're really needing people to test it and report back. :)

Regards and best wishes,

Justin Clift

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS ACL Support in Gluster 3.4

2014-06-03 Thread Indivar Nair
Thanks Humble.


On Tue, Jun 3, 2014 at 6:16 PM, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:

> Hi,
>
> As Santosh mentioned, 3.5 supports POSIX ACL configuration through NFS
> mounts, i.e. the setfacl and getfacl commands work through an NFS mount.
>
> >
> Also, can I do an in-place upgrade from 3.4 to 3.5 by just replacing the
> Gluster RPMs?
> >
>
> This may help
> http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
>
> --Humble
>
>
>
> On Tue, Jun 3, 2014 at 5:36 PM, Indivar Nair 
> wrote:
>
>> Hi Santosh,
>>
>> Are you referring to this bug -
>> https://bugzilla.redhat.com/show_bug.cgi?id=1035218 -?
>> In my case, both setfacl and getfacl aren't working.
>> Will this fix work in my case too?
>>
>> Also, can I do an in-place upgrade from 3.4 to 3.5 by just replacing the
>> Gluster RPMs?
>>
>> Regards,
>>
>>
>> Indivar Nair
>>
>>
>>
>>
>> On Tue, Jun 3, 2014 at 5:26 PM, Santosh Pradhan 
>> wrote:
>>
>>>  I guess Gluster 3.5 has fixed the NFS-ACL issues and getfacl/setfacl
>>> works there.
>>>
>>> Regards,
>>> Santosh
>>>
>>>
>>> On 06/03/2014 05:10 PM, Indivar Nair wrote:
>>>
>>>  Hi All,
>>>
>>>  I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
>>>  It was a straight forward upgrade using Yum.
>>>  The OS is CentOS 6.3.
>>>
>>>  The main purpose of the upgrade was to get ACL Support on NFS exports.
>>>  But it doesn't seem to be working.
>>>
>>> I mounted the gluster volume using the following options -
>>>
>>>  mount -t nfs -o vers=3,mountproto=tcp,acl :/volume /mnt
>>>
>>>  The getfacl or setfacl commands do not work on any dir/files on this
>>> mount.
>>>
>>>The plan is to re-export the NFS Mounts using Samba+CTDB.
>>>  NFS mounts seem to give better performance than Gluster Mounts.
>>>
>>>  Am I missing something?
>>>
>>>  Regards,
>>>
>>>
>>>  Indivar Nair
>>>
>>>
>>>
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS ACL Support in Gluster 3.4

2014-06-03 Thread Humble Devassy Chirammal
Hi,

As Santosh mentioned, 3.5 supports POSIX ACL configuration through NFS
mounts, i.e. the setfacl and getfacl commands work through an NFS mount.

>
Also, can I do an in-place upgrade from 3.4 to 3.5 by just replacing the
Gluster RPMs?
>

This may help
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5

--Humble



On Tue, Jun 3, 2014 at 5:36 PM, Indivar Nair 
wrote:

> Hi Santosh,
>
> Are you referring to this bug -
> https://bugzilla.redhat.com/show_bug.cgi?id=1035218 -?
> In my case, both setfacl and getfacl aren't working.
> Will this fix work in my case too?
>
> Also, can I do an in-place upgrade from 3.4 to 3.5 by just replacing the
> Gluster RPMs?
>
> Regards,
>
>
> Indivar Nair
>
>
>
>
> On Tue, Jun 3, 2014 at 5:26 PM, Santosh Pradhan 
> wrote:
>
>>  I guess Gluster 3.5 has fixed the NFS-ACL issues and getfacl/setfacl
>> works there.
>>
>> Regards,
>> Santosh
>>
>>
>> On 06/03/2014 05:10 PM, Indivar Nair wrote:
>>
>>  Hi All,
>>
>>  I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
>>  It was a straight forward upgrade using Yum.
>>  The OS is CentOS 6.3.
>>
>>  The main purpose of the upgrade was to get ACL Support on NFS exports.
>>  But it doesn't seem to be working.
>>
>> I mounted the gluster volume using the following options -
>>
>>  mount -t nfs -o vers=3,mountproto=tcp,acl :/volume /mnt
>>
>>  The getfacl or setfacl commands do not work on any dir/files on this
>> mount.
>>
>>The plan is to re-export the NFS Mounts using Samba+CTDB.
>>  NFS mounts seem to give better performance than Gluster Mounts.
>>
>>  Am I missing something?
>>
>>  Regards,
>>
>>
>>  Indivar Nair
>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS ACL Support in Gluster 3.4

2014-06-03 Thread Santosh Pradhan

Hi Indivar,

On 06/03/2014 05:36 PM, Indivar Nair wrote:

Hi Santosh,

Are you referring to this bug - 
https://bugzilla.redhat.com/show_bug.cgi?id=1035218 -?

In my case, both setfacl and getfacl aren't working.
Will this fix work in my case too?


This is not the only fix, but yes, it is one of the main fixes. Along with that, 
more issues were fixed in 3.5. I can say NFS ACL works 
in 3.5 :)




Also, can I do an in-place upgrade from 3.4 to 3.5 by just replacing 
the Gluster RPMs?


Yes, just stop the running gluster processes and upgrade the RPMs; that 
should work. I am not entirely sure about the upgrade/installation part, 
which I would leave to others to comment on, but I guess it should just work.
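
Roughly, on each node that would be something like the following (a sketch only;
see the 3.5 upgrade guide for the authoritative steps):

service glusterd stop
pkill glusterfs                  # also stops the glusterfsd brick processes
yum update 'glusterfs*'          # with the 3.5 repository enabled
service glusterd start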


Best R,
Santosh



Regards,


Indivar Nair




On Tue, Jun 3, 2014 at 5:26 PM, Santosh Pradhan > wrote:


I guess Gluster 3.5 has fixed the NFS-ACL issues and
getfacl/setfacl works there.

Regards,
Santosh


On 06/03/2014 05:10 PM, Indivar Nair wrote:

Hi All,

I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
It was a straight forward upgrade using Yum.
The OS is CentOS 6.3.

The main purpose of the upgrade was to get ACL Support on NFS
exports.
But it doesn't seem to be working.

I mounted the gluster volume using the following options -

mount -t nfs -o vers=3,mountproto=tcp,acl
:/volume /mnt

The getfacl or setfacl commands do not work on any dir/files on
this mount.

The plan is to re-export the NFS Mounts using Samba+CTDB.
NFS mounts seem to give better performance than Gluster Mounts.

Am I missing something?

Regards,


Indivar Nair




___
Gluster-users mailing list
Gluster-users@gluster.org  
http://supercolony.gluster.org/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS ACL Support in Gluster 3.4

2014-06-03 Thread Indivar Nair
Hi Santosh,

Are you referring to this bug -
https://bugzilla.redhat.com/show_bug.cgi?id=1035218 -?
In my case, both setfacl and getfacl aren't working.
Will this fix work in my case too?

Also, can I do an in-place upgrade from 3.4 to 3.5 by just replacing the
Gluster RPMs?

Regards,


Indivar Nair




On Tue, Jun 3, 2014 at 5:26 PM, Santosh Pradhan  wrote:

>  I guess Gluster 3.5 has fixed the NFS-ACL issues and getfacl/setfacl
> works there.
>
> Regards,
> Santosh
>
>
> On 06/03/2014 05:10 PM, Indivar Nair wrote:
>
>  Hi All,
>
>  I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
>  It was a straight forward upgrade using Yum.
>  The OS is CentOS 6.3.
>
>  The main purpose of the upgrade was to get ACL Support on NFS exports.
>  But it doesn't seem to be working.
>
> I mounted the gluster volume using the following options -
>
>  mount -t nfs -o vers=3,mountproto=tcp,acl :/volume /mnt
>
>  The getfacl or setfacl commands do not work on any dir/files on this
> mount.
>
>The plan is to re-export the NFS Mounts using Samba+CTDB.
>  NFS mounts seem to give better performance than Gluster Mounts.
>
>  Am I missing something?
>
>  Regards,
>
>
>  Indivar Nair
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS ACL Support in Gluster 3.4

2014-06-03 Thread Santosh Pradhan
I guess Gluster 3.5 has fixed the NFS-ACL issues and getfacl/setfacl 
works there.


Regards,
Santosh

On 06/03/2014 05:10 PM, Indivar Nair wrote:

Hi All,

I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
It was a straight forward upgrade using Yum.
The OS is CentOS 6.3.

The main purpose of the upgrade was to get ACL Support on NFS exports.
But it doesn't seem to be working.

I mounted the gluster volume using the following options -

mount -t nfs -o vers=3,mountproto=tcp,acl :/volume /mnt

The getfacl or setfacl commands do not work on any dir/files on this 
mount.


The plan is to re-export the NFS Mounts using Samba+CTDB.
NFS mounts seem to give better performance than Gluster Mounts.

Am I missing something?

Regards,


Indivar Nair




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] NFS ACL Support in Gluster 3.4

2014-06-03 Thread Indivar Nair
Hi All,

I recently upgraded a Gluster 3.3.1 installation to Gluster 3.4.
It was a straight forward upgrade using Yum.
The OS is CentOS 6.3.

The main purpose of the upgrade was to get ACL Support on NFS exports.
But it doesn't seem to be working.

I mounted the gluster volume using the following options -

mount -t nfs -o vers=3,mountproto=tcp,acl :/volume /mnt

The getfacl or setfacl commands do not work on any dir/files on this
mount.
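
For reference, the kind of minimal test that fails here looks like this (the user
and file name are just examples):

touch /mnt/acltest
setfacl -m u:nobody:rw /mnt/acltest    # should add an ACL entry for user nobody
getfacl /mnt/acltest                   # should list the user:nobody:rw- entry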

The plan is to re-export the NFS Mounts using Samba+CTDB.
NFS mounts seem to give better performance than Gluster Mounts.

Am I missing something?

Regards,


Indivar Nair
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Distributed volumes

2014-06-03 Thread Vijay Bellur

On 06/03/2014 01:50 PM, yalla.gnan.ku...@accenture.com wrote:

Hi,

So, in which scenario do distributed volumes have files on both 
bricks?




Reading the documentation for various volume types [1] can be useful to 
obtain answers for questions of this nature.


-Vijay

[1] 
https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Distributed volumes

2014-06-03 Thread Franco Broi
On Tue, 2014-06-03 at 08:20 +, yalla.gnan.ku...@accenture.com
wrote: 
> Hi,
> 
> So, in which scenario do distributed volumes have files on both 
> bricks?

If you make more than 1 file.
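
For example (assuming the dst volume is fuse-mounted at /mnt/dst on a client):

for i in $(seq 1 8); do echo test > /mnt/dst/file$i; done
# then on each server:
ls /export/sdd1/brick    # the eight files should end up split between the two bricks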

> 
> 
> -Original Message-
> From: Kaushal M [mailto:kshlms...@gmail.com] 
> Sent: Tuesday, June 03, 2014 1:19 PM
> To: Gnan Kumar, Yalla
> Cc: Franco Broi; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Distributed volumes
> 
> You have only 1 file on the gluster volume, the 1GB disk image/volume that 
> you created. This disk image is attached to the VM as a file system, not the 
> gluster volume. So whatever you do in the VM's file system, affects just the 
> 1 disk image. The files, directories etc. you created, are inside the disk 
> image. So you still have just one file on the gluster volume, not many as you 
> are assuming.
> 
> 
> 
> On Tue, Jun 3, 2014 at 1:09 PM,   wrote:
> > I have created a distributed volume, created a 1 GB volume on it, 
> > attached it to the VM, and created a filesystem on it.  How do I verify that 
> > the files in the VM are distributed across both bricks on the two servers?
> >
> >
> >
> > -Original Message-
> > From: Franco Broi [mailto:franco.b...@iongeo.com]
> > Sent: Tuesday, June 03, 2014 1:04 PM
> > To: Gnan Kumar, Yalla
> > Cc: gluster-users@gluster.org
> > Subject: Re: [Gluster-users] Distributed volumes
> >
> >
> > Ok, what you have is a single large file (must be filesystem image??).
> > Gluster will not stripe files, it writes different whole files to different 
> > bricks.
> >
> > On Tue, 2014-06-03 at 07:29 +, yalla.gnan.ku...@accenture.com
> > wrote:
> >> root@secondary:/export/sdd1/brick# gluster volume  info
> >>
> >> Volume Name: dst
> >> Type: Distribute
> >> Status: Started
> >> Number of Bricks: 2
> >> Transport-type: tcp
> >> Bricks:
> >> Brick1: primary:/export/sdd1/brick
> >> Brick2: secondary:/export/sdd1/brick
> >>
> >>
> >>
> >> -Original Message-
> >> From: Franco Broi [mailto:franco.b...@iongeo.com]
> >> Sent: Tuesday, June 03, 2014 12:56 PM
> >> To: Gnan Kumar, Yalla
> >> Cc: gluster-users@gluster.org
> >> Subject: Re: [Gluster-users] Distributed volumes
> >>
> >>
> >> What do gluster vol info and gluster vol status give you?
> >>
> >> On Tue, 2014-06-03 at 07:21 +, yalla.gnan.ku...@accenture.com
> >> wrote:
> >> > Hi,
> >> >
> >> > I have created a distributed volume on my gluster node. I have 
> >> > attached this volume to a VM on OpenStack. The size is 1 GB. I have 
> >> > written files close to 1 GB onto the
> >> > volume. But when I do an ls inside the brick directory, the volume is 
> >> > present only on one gluster server brick, but it is empty on the other 
> >> > server brick. Files are meant to be
> >> > spread across both bricks according to the distributed volume definition.
> >> >
> >> > On the VM:
> >> > --
> >> >
> >> > # ls -al
> >> > total 1013417
> >> > drwxr-xr-x 3 root root  4096 Jun  1 22:03 .
> >> > drwxrwxr-x 3 root root  1024 Jun  1 21:24 ..
> >> > -rw------- 1 root root 31478251520 Jun  1 21:52 file
> >> > -rw------- 1 root root 157391257600 Jun  1 21:54 file1
> >> > -rw------- 1 root root 629565030400 Jun  1 21:55 file2
> >> > -rw------- 1 root root 708260659200 Jun  1 21:59 file3
> >> > -rw------- 1 root root 6295650304 Jun  1 22:01 file4
> >> > -rw------- 1 root root 39333801984 Jun  1 22:01 file5
> >> > -rw------- 1 root root 7864320 Jun  1 22:04 file6
> >> > drwx------ 2 root root 16384 Jun  1 21:24 lost+found
> >> > --
> >> > # du -sch *
> >> > 20.0M   file
> >> > 100.0M  file1
> >> > 400.0M  file2
> >> > 454.0M  file3
> >> > 4.0M    file4
> >> > 11.6M   file5
> >> > 0   file6
> >> > 16.0K   lost+found
> >> > 989.7M  total
> >> > 
> >> >
> >> >
> >> > On the gluster server nodes:
> >> > ---
> >> > root@primary:/export/sdd1/brick# ll
> >> > total 12
> >> > drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./
> >> > drwxr-xr-x 4 root root 4096 May 27 08:42 ../
> >> > root@primary:/export/sdd1/brick#
> >> > --
> >> >
> >> > root@secondary:/export/sdd1/brick# ll
> >> > total 1046536
> >> > drwxr-xr-x 2 root root   4096 Jun  2 08:51 ./
> >> > drwxr-xr-x 4 root root   4096 May 27 08:43 ../
> >> > -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35
> >> > volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> >> > root@secondary:/export/sdd1/brick#
> >> > -
> >> >
> >> >
> >> > Thanks
> >> > Kumar
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > -Original Message-
> >> > From: Franco Broi [mailto:franco.b...@iongeo.com]
> >> > Sent: Monday, June 02, 2014 6:35 PM
> >> > To: Gnan Kumar, Yalla
> >> > Cc: gluster-users@gluster.org
> >> > Subject: Re: [Gluster-users] Distributed volumes
> >> >
> >> > Just do an ls on the bricks, the paths are

Re: [Gluster-users] Distributed volumes

2014-06-03 Thread yalla.gnan.kumar
Hi,

So, in which scenario do distributed volumes have files on both 
bricks?


-Original Message-
From: Kaushal M [mailto:kshlms...@gmail.com] 
Sent: Tuesday, June 03, 2014 1:19 PM
To: Gnan Kumar, Yalla
Cc: Franco Broi; gluster-users@gluster.org
Subject: Re: [Gluster-users] Distributed volumes

You have only 1 file on the gluster volume, the 1GB disk image/volume that you 
created. This disk image is attached to the VM as a file system, not the 
gluster volume. So whatever you do in the VM's file system, affects just the 1 
disk image. The files, directories etc. you created, are inside the disk image. 
So you still have just one file on the gluster volume, not many as you are 
assuming.



On Tue, Jun 3, 2014 at 1:09 PM,   wrote:
> I have created a distributed volume, created a 1 GB volume on it, 
> attached it to the VM, and created a filesystem on it.  How do I verify that the 
> files in the VM are distributed across both bricks on the two servers?
>
>
>
> -Original Message-
> From: Franco Broi [mailto:franco.b...@iongeo.com]
> Sent: Tuesday, June 03, 2014 1:04 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Distributed volumes
>
>
> Ok, what you have is a single large file (must be filesystem image??).
> Gluster will not stripe files, it writes different whole files to different 
> bricks.
>
> On Tue, 2014-06-03 at 07:29 +, yalla.gnan.ku...@accenture.com
> wrote:
>> root@secondary:/export/sdd1/brick# gluster volume  info
>>
>> Volume Name: dst
>> Type: Distribute
>> Status: Started
>> Number of Bricks: 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: primary:/export/sdd1/brick
>> Brick2: secondary:/export/sdd1/brick
>>
>>
>>
>> -Original Message-
>> From: Franco Broi [mailto:franco.b...@iongeo.com]
>> Sent: Tuesday, June 03, 2014 12:56 PM
>> To: Gnan Kumar, Yalla
>> Cc: gluster-users@gluster.org
>> Subject: Re: [Gluster-users] Distributed volumes
>>
>>
>> What do gluster vol info and gluster vol status give you?
>>
>> On Tue, 2014-06-03 at 07:21 +, yalla.gnan.ku...@accenture.com
>> wrote:
>> > Hi,
>> >
>> > I have created a distributed volume on my   gluster node. I have attached 
>> > this volume to a VM on openstack. The size is 1 GB. I have written files 
>> > close to 1 GB onto the
>> > Volume.   But when I do a ls inside the brick directory , the volume is 
>> > present only on one gluster server brick. But it is empty on another 
>> > server brick.  Files are meant to be
>> > spread across both the bricks according to distributed volume definition.
>> >
>> > On the VM:
>> > --
>> >
>> > # ls -al
>> > total 1013417
>> > drwxr-xr-x3 root root  4096 Jun  1 22:03 .
>> > drwxrwxr-x3 root root  1024 Jun  1 21:24 ..
>> > -rw---1 root root 31478251520 Jun  1 21:52 file
>> > -rw---1 root root 157391257600 Jun  1 21:54 file1
>> > -rw---1 root root 629565030400 Jun  1 21:55 file2
>> > -rw---1 root root 708260659200 Jun  1 21:59 file3
>> > -rw---1 root root 6295650304 Jun  1 22:01 file4
>> > -rw---1 root root 39333801984 Jun  1 22:01 file5
>> > -rw---1 root root 7864320 Jun  1 22:04 file6
>> > drwx--2 root root 16384 Jun  1 21:24 lost+found
>> > --
>> > # du -sch *
>> > 20.0M   file
>> > 100.0M  file1
>> > 400.0M  file2
>> > 454.0M  file3
>> > 4.0Mfile4
>> > 11.6M   file5
>> > 0   file6
>> > 16.0K   lost+found
>> > 989.7M  total
>> > 
>> >
>> >
>> > On the gluster server nodes:
>> > ---
>> > root@primary:/export/sdd1/brick# ll total 12 drwxr-xr-x 2 root root 
>> > 4096 Jun  2 04:08 ./ drwxr-xr-x 4 root root
>> > 4096 May 27 08:42 ../ root@primary:/export/sdd1/brick#
>> > --
>> >
>> > root@secondary:/export/sdd1/brick# ll total 1046536
>> > drwxr-xr-x 2 root root   4096 Jun  2 08:51 ./
>> > drwxr-xr-x 4 root root   4096 May 27 08:43 ../
>> > -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35
>> > volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
>> > root@secondary:/export/sdd1/brick#
>> > -
>> >
>> >
>> > Thanks
>> > Kumar
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > -Original Message-
>> > From: Franco Broi [mailto:franco.b...@iongeo.com]
>> > Sent: Monday, June 02, 2014 6:35 PM
>> > To: Gnan Kumar, Yalla
>> > Cc: gluster-users@gluster.org
>> > Subject: Re: [Gluster-users] Distributed volumes
>> >
>> > Just do an ls on the bricks, the paths are the same as the mounted 
>> > filesystem.
>> >
>> > On Mon, 2014-06-02 at 12:26 +, yalla.gnan.ku...@accenture.com
>> > wrote:
>> > > Hi All,
>> > >
>> > >
>> > >
>> > > I have created a distributed volume of 1 GB ,  using two bricks 
>> > > from two different servers.
>> > >
>> > > I have written 7 files whose sizes are a total of  1 GB.
>> > >
> >> > > How can I check that files are distributed on both the bricks ?

Re: [Gluster-users] Distributed volumes

2014-06-03 Thread Kaushal M
You have only 1 file on the gluster volume: the 1GB disk image/volume
that you created. It is this disk image, not the gluster volume, that is
attached to the VM and carries its file system. So whatever you do in the
VM's file system affects just that one disk image; the files, directories
etc. you created are inside the image. You still have just one file on
the gluster volume, not many as you are assuming.
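
A minimal server-side way to see this, assuming the image is a raw
(unpartitioned) cinder volume as its name suggests and that the VM is not
writing to it at the time; the loop mount below is purely illustrative:

# The gluster volume holds exactly one file -- the cinder volume image:
ls -lh /export/sdd1/brick/
# The files created inside the VM live inside that image. Read-only peek:
mkdir -p /mnt/img
mount -o ro,loop /export/sdd1/brick/volume-0ec560be-997f-46da-9ec8-e9d6627f2de1 /mnt/img
ls /mnt/img        # file, file1, ... lost+found
umount /mnt/img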



On Tue, Jun 3, 2014 at 1:09 PM,   wrote:
> I have created distributed volume,  created a 1 GB volume on it, and attached 
> it to the VM and created a filesystem on it.  How to verify that the files in 
> the vm are
> distributed across both the bricks on two servers ?
>
>
>
> -Original Message-
> From: Franco Broi [mailto:franco.b...@iongeo.com]
> Sent: Tuesday, June 03, 2014 1:04 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Distributed volumes
>
>
> Ok, what you have is a single large file (must be filesystem image??).
> Gluster will not stripe files, it writes different whole files to different 
> bricks.
>
> On Tue, 2014-06-03 at 07:29 +, yalla.gnan.ku...@accenture.com
> wrote:
>> root@secondary:/export/sdd1/brick# gluster volume  info
>>
>> Volume Name: dst
>> Type: Distribute
>> Status: Started
>> Number of Bricks: 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: primary:/export/sdd1/brick
>> Brick2: secondary:/export/sdd1/brick
>>
>>
>>
>> -Original Message-
>> From: Franco Broi [mailto:franco.b...@iongeo.com]
>> Sent: Tuesday, June 03, 2014 12:56 PM
>> To: Gnan Kumar, Yalla
>> Cc: gluster-users@gluster.org
>> Subject: Re: [Gluster-users] Distributed volumes
>>
>>
>> What do gluster vol info and gluster vol status give you?
>>
>> On Tue, 2014-06-03 at 07:21 +, yalla.gnan.ku...@accenture.com
>> wrote:
>> > Hi,
>> >
>> > I have created a distributed volume on my   gluster node. I have attached 
>> > this volume to a VM on openstack. The size is 1 GB. I have written files 
>> > close to 1 GB onto the
>> > Volume.   But when I do a ls inside the brick directory , the volume is 
>> > present only on one gluster server brick. But it is empty on another 
>> > server brick.  Files are meant to be
>> > spread across both the bricks according to distributed volume definition.
>> >
>> > On the VM:
>> > --
>> >
>> > # ls -al
>> > total 1013417
>> > drwxr-xr-x3 root root  4096 Jun  1 22:03 .
>> > drwxrwxr-x3 root root  1024 Jun  1 21:24 ..
>> > -rw---1 root root 31478251520 Jun  1 21:52 file
>> > -rw---1 root root 157391257600 Jun  1 21:54 file1
>> > -rw---1 root root 629565030400 Jun  1 21:55 file2
>> > -rw---1 root root 708260659200 Jun  1 21:59 file3
>> > -rw---1 root root 6295650304 Jun  1 22:01 file4
>> > -rw---1 root root 39333801984 Jun  1 22:01 file5
>> > -rw---1 root root 7864320 Jun  1 22:04 file6
>> > drwx--2 root root 16384 Jun  1 21:24 lost+found
>> > --
>> > # du -sch *
>> > 20.0M   file
>> > 100.0M  file1
>> > 400.0M  file2
>> > 454.0M  file3
>> > 4.0Mfile4
>> > 11.6M   file5
>> > 0   file6
>> > 16.0K   lost+found
>> > 989.7M  total
>> > 
>> >
>> >
>> > On the gluster server nodes:
>> > ---
>> > root@primary:/export/sdd1/brick# ll
>> > total 12
>> > drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./ drwxr-xr-x 4 root root
>> > 4096 May 27 08:42 ../ root@primary:/export/sdd1/brick#
>> > --
>> >
>> > root@secondary:/export/sdd1/brick# ll total 1046536
>> > drwxr-xr-x 2 root root   4096 Jun  2 08:51 ./
>> > drwxr-xr-x 4 root root   4096 May 27 08:43 ../
>> > -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35
>> > volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
>> > root@secondary:/export/sdd1/brick#
>> > -
>> >
>> >
>> > Thanks
>> > Kumar
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > -Original Message-
>> > From: Franco Broi [mailto:franco.b...@iongeo.com]
>> > Sent: Monday, June 02, 2014 6:35 PM
>> > To: Gnan Kumar, Yalla
>> > Cc: gluster-users@gluster.org
>> > Subject: Re: [Gluster-users] Distributed volumes
>> >
>> > Just do an ls on the bricks, the paths are the same as the mounted 
>> > filesystem.
>> >
>> > On Mon, 2014-06-02 at 12:26 +, yalla.gnan.ku...@accenture.com
>> > wrote:
>> > > Hi All,
>> > >
>> > >
>> > >
>> > > I have created a distributed volume of 1 GB ,  using two bricks from
>> > > two different servers.
>> > >
>> > > I have written 7 files whose sizes are a total of  1 GB.
>> > >
>> > > How can I check that files are distributed on both the bricks ?
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > Thanks
>> > >
>> > > Kumar
>> > >
>> > >
>> > >
>> > >
>> > > 
>> > > __
>> > >
>> > >
>> > > This message is for the designated recipient only and may contain
>> > > privil

Re: [Gluster-users] Distributed volumes

2014-06-03 Thread yalla.gnan.kumar
I have created a distributed volume, created a 1 GB volume on it, attached it 
to the VM, and created a filesystem on it. How can I verify that the files in 
the VM are distributed across both bricks on the two servers?



-Original Message-
From: Franco Broi [mailto:franco.b...@iongeo.com] 
Sent: Tuesday, June 03, 2014 1:04 PM
To: Gnan Kumar, Yalla
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Distributed volumes


Ok, what you have is a single large file (must be filesystem image??).
Gluster will not stripe files, it writes different whole files to different 
bricks. 

On Tue, 2014-06-03 at 07:29 +, yalla.gnan.ku...@accenture.com
wrote: 
> root@secondary:/export/sdd1/brick# gluster volume  info
> 
> Volume Name: dst
> Type: Distribute
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: primary:/export/sdd1/brick
> Brick2: secondary:/export/sdd1/brick
> 
> 
> 
> -Original Message-
> From: Franco Broi [mailto:franco.b...@iongeo.com]
> Sent: Tuesday, June 03, 2014 12:56 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Distributed volumes
> 
> 
> What do gluster vol info and gluster vol status give you?
> 
> On Tue, 2014-06-03 at 07:21 +, yalla.gnan.ku...@accenture.com
> wrote: 
> > Hi,
> > 
> > I have created a distributed volume on my   gluster node. I have attached 
> > this volume to a VM on openstack. The size is 1 GB. I have written files 
> > close to 1 GB onto the
> > Volume.   But when I do a ls inside the brick directory , the volume is 
> > present only on one gluster server brick. But it is empty on another server 
> > brick.  Files are meant to be
> > spread across both the bricks according to distributed volume definition.
> > 
> > On the VM:
> > --
> > 
> > # ls -al
> > total 1013417
> > drwxr-xr-x3 root root  4096 Jun  1 22:03 .
> > drwxrwxr-x3 root root  1024 Jun  1 21:24 ..
> > -rw---1 root root 31478251520 Jun  1 21:52 file
> > -rw---1 root root 157391257600 Jun  1 21:54 file1
> > -rw---1 root root 629565030400 Jun  1 21:55 file2
> > -rw---1 root root 708260659200 Jun  1 21:59 file3
> > -rw---1 root root 6295650304 Jun  1 22:01 file4
> > -rw---1 root root 39333801984 Jun  1 22:01 file5
> > -rw---1 root root 7864320 Jun  1 22:04 file6
> > drwx--2 root root 16384 Jun  1 21:24 lost+found
> > --
> > # du -sch *
> > 20.0M   file
> > 100.0M  file1
> > 400.0M  file2
> > 454.0M  file3
> > 4.0Mfile4
> > 11.6M   file5
> > 0   file6
> > 16.0K   lost+found
> > 989.7M  total
> > 
> > 
> > 
> > On the gluster server nodes:
> > ---
> > root@primary:/export/sdd1/brick# ll
> > total 12
> > drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./ drwxr-xr-x 4 root root
> > 4096 May 27 08:42 ../ root@primary:/export/sdd1/brick#
> > --
> > 
> > root@secondary:/export/sdd1/brick# ll total 1046536
> > drwxr-xr-x 2 root root   4096 Jun  2 08:51 ./
> > drwxr-xr-x 4 root root   4096 May 27 08:43 ../
> > -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35
> > volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> > root@secondary:/export/sdd1/brick#
> > -
> > 
> > 
> > Thanks
> > Kumar
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > -Original Message-
> > From: Franco Broi [mailto:franco.b...@iongeo.com]
> > Sent: Monday, June 02, 2014 6:35 PM
> > To: Gnan Kumar, Yalla
> > Cc: gluster-users@gluster.org
> > Subject: Re: [Gluster-users] Distributed volumes
> > 
> > Just do an ls on the bricks, the paths are the same as the mounted 
> > filesystem.
> > 
> > On Mon, 2014-06-02 at 12:26 +, yalla.gnan.ku...@accenture.com
> > wrote:
> > > Hi All,
> > >
> > >
> > >
> > > I have created a distributed volume of 1 GB ,  using two bricks from 
> > > two different servers.
> > >
> > > I have written 7 files whose sizes are a total of  1 GB.
> > >
> > > How can I check that files are distributed on both the bricks ?
> > >
> > >
> > >
> > >
> > >
> > > Thanks
> > >
> > > Kumar
> > >
> > >
> > >
> > >
> > > 
> > > __
> > >
> > >
> > > This message is for the designated recipient only and may contain 
> > > privileged, proprietary, or otherwise confidential information. If 
> > > you have received it in error, please notify the sender immediately 
> > > and delete the original. Any other use of the e-mail by you is prohibited.
> > > Where allowed by local law, electronic communications with Accenture 
> > > and its affiliates, including e-mail and instant messaging 
> > > (including content), may be scanned by our systems for the purposes 
> > > of information security and assessment of internal compliance with 
> > > Accenture policy.
> > > _

Re: [Gluster-users] Distributed volumes

2014-06-03 Thread Franco Broi

Ok, what you have is a single large file (must be filesystem image??).
Gluster will not stripe files, it writes different whole files to
different bricks. 
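
A rough way to see that placement in action, assuming access to both servers
(the /mnt/dst client mount is illustrative): each brick root carries a
trusted.glusterfs.dht xattr describing the hash range it owns, and a file
whose name hashes into that range lands wholly on that brick.

# On each server -- the xattr value encodes this brick's hash range:
getfattr -d -m . -e hex /export/sdd1/brick | grep trusted.glusterfs.dht
# A file written through the gluster mount goes entirely to whichever brick
# owns the hash of its name; it is never split across bricks:
echo hello > /mnt/dst/somefile
ls -l /export/sdd1/brick/somefile    # appears on exactly one of the two servers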

On Tue, 2014-06-03 at 07:29 +, yalla.gnan.ku...@accenture.com
wrote: 
> root@secondary:/export/sdd1/brick# gluster volume  info
> 
> Volume Name: dst
> Type: Distribute
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: primary:/export/sdd1/brick
> Brick2: secondary:/export/sdd1/brick
> 
> 
> 
> -Original Message-
> From: Franco Broi [mailto:franco.b...@iongeo.com] 
> Sent: Tuesday, June 03, 2014 12:56 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Distributed volumes
> 
> 
> What do gluster vol info and gluster vol status give you?
> 
> On Tue, 2014-06-03 at 07:21 +, yalla.gnan.ku...@accenture.com
> wrote: 
> > Hi,
> > 
> > I have created a distributed volume on my   gluster node. I have attached 
> > this volume to a VM on openstack. The size is 1 GB. I have written files 
> > close to 1 GB onto the
> > Volume.   But when I do a ls inside the brick directory , the volume is 
> > present only on one gluster server brick. But it is empty on another server 
> > brick.  Files are meant to be
> > spread across both the bricks according to distributed volume definition.
> > 
> > On the VM:
> > --
> > 
> > # ls -al
> > total 1013417
> > drwxr-xr-x3 root root  4096 Jun  1 22:03 .
> > drwxrwxr-x3 root root  1024 Jun  1 21:24 ..
> > -rw---1 root root 31478251520 Jun  1 21:52 file
> > -rw---1 root root 157391257600 Jun  1 21:54 file1
> > -rw---1 root root 629565030400 Jun  1 21:55 file2
> > -rw---1 root root 708260659200 Jun  1 21:59 file3
> > -rw---1 root root 6295650304 Jun  1 22:01 file4
> > -rw---1 root root 39333801984 Jun  1 22:01 file5
> > -rw---1 root root 7864320 Jun  1 22:04 file6
> > drwx--2 root root 16384 Jun  1 21:24 lost+found
> > --
> > # du -sch *
> > 20.0M   file
> > 100.0M  file1
> > 400.0M  file2
> > 454.0M  file3
> > 4.0Mfile4
> > 11.6M   file5
> > 0   file6
> > 16.0K   lost+found
> > 989.7M  total
> > 
> > 
> > 
> > On the gluster server nodes:
> > ---
> > root@primary:/export/sdd1/brick# ll
> > total 12
> > drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./ drwxr-xr-x 4 root root 
> > 4096 May 27 08:42 ../ root@primary:/export/sdd1/brick#
> > --
> > 
> > root@secondary:/export/sdd1/brick# ll
> > total 1046536
> > drwxr-xr-x 2 root root   4096 Jun  2 08:51 ./
> > drwxr-xr-x 4 root root   4096 May 27 08:43 ../
> > -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35 
> > volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> > root@secondary:/export/sdd1/brick#
> > -
> > 
> > 
> > Thanks
> > Kumar
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > -Original Message-
> > From: Franco Broi [mailto:franco.b...@iongeo.com]
> > Sent: Monday, June 02, 2014 6:35 PM
> > To: Gnan Kumar, Yalla
> > Cc: gluster-users@gluster.org
> > Subject: Re: [Gluster-users] Distributed volumes
> > 
> > Just do an ls on the bricks, the paths are the same as the mounted 
> > filesystem.
> > 
> > On Mon, 2014-06-02 at 12:26 +, yalla.gnan.ku...@accenture.com
> > wrote:
> > > Hi All,
> > >
> > >
> > >
> > > I have created a distributed volume of 1 GB ,  using two bricks from 
> > > two different servers.
> > >
> > > I have written 7 files whose sizes are a total of  1 GB.
> > >
> > > How can I check that files are distributed on both the bricks ?
> > >
> > >
> > >
> > >
> > >
> > > Thanks
> > >
> > > Kumar
> > >
> > >
> > >
> > >
> > > 
> > > __
> > >
> > >
> > > This message is for the designated recipient only and may contain 
> > > privileged, proprietary, or otherwise confidential information. If 
> > > you have received it in error, please notify the sender immediately 
> > > and delete the original. Any other use of the e-mail by you is prohibited.
> > > Where allowed by local law, electronic communications with Accenture 
> > > and its affiliates, including e-mail and instant messaging 
> > > (including content), may be scanned by our systems for the purposes 
> > > of information security and assessment of internal compliance with 
> > > Accenture policy.
> > > 
> > > __
> > > 
> > >
> > > www.accenture.com
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org
> > > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> > 
> > 
> > 
> > 
> > 
> > 
> > This message is for the designated recipient only and may contain 
> > privileged, propr

Re: [Gluster-users] [Gluster-devel] autodelete in snapshots

2014-06-03 Thread Kaushal M
I agree as well. We shouldn't be deleting any data without the
explicit consent of the user.

The approach proposed by MS is better than the earlier approach.

~kaushal
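
For what it's worth, a small shell-style sketch of the create-time behaviour
MS proposes below; SNAP_COUNT, SNAP_LIMIT, WATER_MARK and AUTO_DELETE are
placeholders for the proposed settings, not existing gluster options:

# Hypothetical decision logic only -- not real gluster code.
if [ "$SNAP_COUNT" -ge "$SNAP_LIMIT" ]; then
    if [ "$AUTO_DELETE" = "on" ]; then
        echo "deleting oldest snapshot to stay within snap-limit"
        # ... delete the oldest snap, then proceed with the new create
    else
        echo "Error: snap-limit ($SNAP_LIMIT) reached; delete snapshots first" >&2
        exit 1
    fi
elif [ "$AUTO_DELETE" != "on" ] && [ "$SNAP_COUNT" -ge "$WATER_MARK" ]; then
    echo "Warning: $SNAP_COUNT of $SNAP_LIMIT snapshots in use" >&2
fi
# ... proceed with the snapshot create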

On Tue, Jun 3, 2014 at 1:02 AM, M S Vishwanath Bhat  wrote:
>
>
>
> On 2 June 2014 20:22, Vijay Bellur  wrote:
>>
>> On 04/23/2014 05:50 AM, Vijay Bellur wrote:
>>>
>>> On 04/20/2014 11:42 PM, Lalatendu Mohanty wrote:

 On 04/16/2014 11:39 AM, Avra Sengupta wrote:
>
> The whole purpose of introducing the soft-limit is, that at any point
> of time the number of
> snaps should not exceed the hard limit. If we trigger auto-delete on
> hitting hard-limit, then
> the purpose itself is lost, because at that point we would be taking a
> snap, making the limit
> hard-limit + 1, and then triggering auto-delete, which violates the
> sanctity of the hard-limit.
> Also what happens when we are at hard-limit + 1, and another snap is
> issued, while auto-delete
> is yet to process the first delete. At that point we end up at
> hard-limit + 1. Also what happens
> if for a particular snap the auto-delete fails.
>
> We should see the hard-limit, as something set by the admin keeping in
> mind the resource consumption
> and at no-point should we cross this limit, come what may. If we hit
> this limit, the create command
> should fail asking the user to delete snaps using the "snapshot
> delete" command.
>
> The two options Raghavendra mentioned are applicable for the
> soft-limit only, in which cases on
> hitting the soft-limit
>
> 1. Trigger auto-delete
>
> or
>
> 2. Log a warning-message, for the user saying the number of snaps is
> exceeding the snap-limit and
> display the number of available snaps
>
> Now which of these should happen also depends on the user, because the
> auto-delete option
> is configurable.
>
> So if the auto-delete option is set as true, auto-delete should be
> triggered and the above message
> should also be logged.
>
> But if the option is set as false, only the message should be logged.
>
> This is the behaviour as designed. Adding Rahul, and Seema in the
> mail, to reflect upon the
> behaviour as well.
>
> Regards,
> Avra


 This sounds correct. However we need to make sure that the usage or
 documentation around this should be good enough , so that users
 understand the each of the limits correctly.

>>>
>>> It might be better to avoid the usage of the term "soft-limit".
>>> soft-limit as used in quota and other places generally has an alerting
>>> connotation. Something like "auto-deletion-limit" might be better.
>>>
>>
>> I still see references to "soft-limit" and auto deletion seems to get
>> triggered upon reaching soft-limit.
>>
>> Why is the ability to auto delete not configurable? It does seem pretty
>> nasty to go about deleting snapshots without obtaining explicit consent from
>> the user.
>
>
> I agree with Vijay here. It's not good to delete a snap (even though it is
> oldest) without the explicit consent from user.
>
> FYI, it took me more than 2 weeks to figure out that my snaps were getting
> auto-deleted after reaching the "soft-limit". As far as I knew I had not done
> anything, and my snap restores were failing.
>
> I propose to remove the terms "soft" and "hard" limit. I believe there
> should be a limit (just "limit") after which all snapshot creates should
> fail with proper error messages. And there can be a water-mark after which
> user should get warning messages. So below is my proposal.
>
> auto-delete + snap-limit: If the snap-limit is set to n, the next ((n+1)th) snap
> create will succeed only if auto-delete is set to on/true/1, in which case the
> oldest snap will get deleted automatically. If auto-delete is set to off/false/0,
> the (n+1)th snap create will fail with a proper error message from the gluster
> CLI. But again, by default auto-delete should be off.
>
> snap-water-mark: This should come into the picture only if auto-delete is turned
> off; it has no meaning if auto-delete is turned on. Its purpose is to warn the
> user that the limit is almost reached and that it is time for the admin to decide
> which snaps should be deleted (or which should be kept).
>
> *my two cents*
>
> -MS
>
>>
>>
>> Cheers,
>>
>> Vijay
>>
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Distributed volumes

2014-06-03 Thread yalla.gnan.kumar
root@secondary:/export/sdd1/brick# gluster volume  info

Volume Name: dst
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: primary:/export/sdd1/brick
Brick2: secondary:/export/sdd1/brick



-Original Message-
From: Franco Broi [mailto:franco.b...@iongeo.com] 
Sent: Tuesday, June 03, 2014 12:56 PM
To: Gnan Kumar, Yalla
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Distributed volumes


What do gluster vol info and gluster vol status give you?

On Tue, 2014-06-03 at 07:21 +, yalla.gnan.ku...@accenture.com
wrote: 
> Hi,
> 
> I have created a distributed volume on my   gluster node. I have attached 
> this volume to a VM on openstack. The size is 1 GB. I have written files 
> close to 1 GB onto the
> Volume.   But when I do a ls inside the brick directory , the volume is 
> present only on one gluster server brick. But it is empty on another server 
> brick.  Files are meant to be
> spread across both the bricks according to distributed volume definition.
> 
> On the VM:
> --
> 
> # ls -al
> total 1013417
> drwxr-xr-x3 root root  4096 Jun  1 22:03 .
> drwxrwxr-x3 root root  1024 Jun  1 21:24 ..
> -rw---1 root root 31478251520 Jun  1 21:52 file
> -rw---1 root root 157391257600 Jun  1 21:54 file1
> -rw---1 root root 629565030400 Jun  1 21:55 file2
> -rw---1 root root 708260659200 Jun  1 21:59 file3
> -rw---1 root root 6295650304 Jun  1 22:01 file4
> -rw---1 root root 39333801984 Jun  1 22:01 file5
> -rw---1 root root 7864320 Jun  1 22:04 file6
> drwx--2 root root 16384 Jun  1 21:24 lost+found
> --
> # du -sch *
> 20.0M   file
> 100.0M  file1
> 400.0M  file2
> 454.0M  file3
> 4.0Mfile4
> 11.6M   file5
> 0   file6
> 16.0K   lost+found
> 989.7M  total
> 
> 
> 
> On the gluster server nodes:
> ---
> root@primary:/export/sdd1/brick# ll
> total 12
> drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./ drwxr-xr-x 4 root root 
> 4096 May 27 08:42 ../ root@primary:/export/sdd1/brick#
> --
> 
> root@secondary:/export/sdd1/brick# ll
> total 1046536
> drwxr-xr-x 2 root root   4096 Jun  2 08:51 ./
> drwxr-xr-x 4 root root   4096 May 27 08:43 ../
> -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35 
> volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> root@secondary:/export/sdd1/brick#
> -
> 
> 
> Thanks
> Kumar
> 
> 
> 
> 
> 
> 
> 
> 
> -Original Message-
> From: Franco Broi [mailto:franco.b...@iongeo.com]
> Sent: Monday, June 02, 2014 6:35 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Distributed volumes
> 
> Just do an ls on the bricks, the paths are the same as the mounted filesystem.
> 
> On Mon, 2014-06-02 at 12:26 +, yalla.gnan.ku...@accenture.com
> wrote:
> > Hi All,
> >
> >
> >
> > I have created a distributed volume of 1 GB ,  using two bricks from 
> > two different servers.
> >
> > I have written 7 files whose sizes are a total of  1 GB.
> >
> > How can I check that files are distributed on both the bricks ?
> >
> >
> >
> >
> >
> > Thanks
> >
> > Kumar
> >
> >
> >
> >
> > 
> > __
> >
> >
> > This message is for the designated recipient only and may contain 
> > privileged, proprietary, or otherwise confidential information. If 
> > you have received it in error, please notify the sender immediately 
> > and delete the original. Any other use of the e-mail by you is prohibited.
> > Where allowed by local law, electronic communications with Accenture 
> > and its affiliates, including e-mail and instant messaging 
> > (including content), may be scanned by our systems for the purposes 
> > of information security and assessment of internal compliance with 
> > Accenture policy.
> > 
> > __
> > 
> >
> > www.accenture.com
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> 
> 
> 
> This message is for the designated recipient only and may contain privileged, 
> proprietary, or otherwise confidential information. If you have received it 
> in error, please notify the sender immediately and delete the original. Any 
> other use of the e-mail by you is prohibited. Where allowed by local law, 
> electronic communications with Accenture and its affiliates, including e-mail 
> and instant messaging (including content), may be scanned by our systems for 
> the purposes of information security and assessment of internal compliance 
> with Accenture policy.
> _

Re: [Gluster-users] Distributed volumes

2014-06-03 Thread Franco Broi

What do gluster vol info and gluster vol status give you?

On Tue, 2014-06-03 at 07:21 +, yalla.gnan.ku...@accenture.com
wrote: 
> Hi,
> 
> I have created a distributed volume on my   gluster node. I have attached 
> this volume to a VM on openstack. The size is 1 GB. I have written files 
> close to 1 GB onto the
> Volume.   But when I do a ls inside the brick directory , the volume is 
> present only on one gluster server brick. But it is empty on another server 
> brick.  Files are meant to be
> spread across both the bricks according to distributed volume definition.
> 
> On the VM:
> --
> 
> # ls -al
> total 1013417
> drwxr-xr-x3 root root  4096 Jun  1 22:03 .
> drwxrwxr-x3 root root  1024 Jun  1 21:24 ..
> -rw---1 root root 31478251520 Jun  1 21:52 file
> -rw---1 root root 157391257600 Jun  1 21:54 file1
> -rw---1 root root 629565030400 Jun  1 21:55 file2
> -rw---1 root root 708260659200 Jun  1 21:59 file3
> -rw---1 root root 6295650304 Jun  1 22:01 file4
> -rw---1 root root 39333801984 Jun  1 22:01 file5
> -rw---1 root root 7864320 Jun  1 22:04 file6
> drwx--2 root root 16384 Jun  1 21:24 lost+found
> --
> # du -sch *
> 20.0M   file
> 100.0M  file1
> 400.0M  file2
> 454.0M  file3
> 4.0Mfile4
> 11.6M   file5
> 0   file6
> 16.0K   lost+found
> 989.7M  total
> 
> 
> 
> On the gluster server nodes:
> ---
> root@primary:/export/sdd1/brick# ll
> total 12
> drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./
> drwxr-xr-x 4 root root 4096 May 27 08:42 ../
> root@primary:/export/sdd1/brick#
> --
> 
> root@secondary:/export/sdd1/brick# ll
> total 1046536
> drwxr-xr-x 2 root root   4096 Jun  2 08:51 ./
> drwxr-xr-x 4 root root   4096 May 27 08:43 ../
> -rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35 
> volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
> root@secondary:/export/sdd1/brick#
> -
> 
> 
> Thanks
> Kumar
> 
> 
> 
> 
> 
> 
> 
> 
> -Original Message-
> From: Franco Broi [mailto:franco.b...@iongeo.com]
> Sent: Monday, June 02, 2014 6:35 PM
> To: Gnan Kumar, Yalla
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Distributed volumes
> 
> Just do an ls on the bricks, the paths are the same as the mounted filesystem.
> 
> On Mon, 2014-06-02 at 12:26 +, yalla.gnan.ku...@accenture.com
> wrote:
> > Hi All,
> >
> >
> >
> > I have created a distributed volume of 1 GB ,  using two bricks from
> > two different servers.
> >
> > I have written 7 files whose sizes are a total of  1 GB.
> >
> > How can I check that files are distributed on both the bricks ?
> >
> >
> >
> >
> >
> > Thanks
> >
> > Kumar
> >
> >
> >
> >
> > __
> >
> >
> > This message is for the designated recipient only and may contain
> > privileged, proprietary, or otherwise confidential information. If you
> > have received it in error, please notify the sender immediately and
> > delete the original. Any other use of the e-mail by you is prohibited.
> > Where allowed by local law, electronic communications with Accenture
> > and its affiliates, including e-mail and instant messaging (including
> > content), may be scanned by our systems for the purposes of
> > information security and assessment of internal compliance with
> > Accenture policy.
> > __
> > 
> >
> > www.accenture.com
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> 
> 
> 
> This message is for the designated recipient only and may contain privileged, 
> proprietary, or otherwise confidential information. If you have received it 
> in error, please notify the sender immediately and delete the original. Any 
> other use of the e-mail by you is prohibited. Where allowed by local law, 
> electronic communications with Accenture and its affiliates, including e-mail 
> and instant messaging (including content), may be scanned by our systems for 
> the purposes of information security and assessment of internal compliance 
> with Accenture policy.
> __
> 
> www.accenture.com


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Distributed volumes

2014-06-03 Thread yalla.gnan.kumar
Hi,

I have created a distributed volume on my gluster node and attached this 
volume to a VM on OpenStack. The size is 1 GB, and I have written files close 
to 1 GB onto the volume. But when I do an ls inside the brick directory, the 
volume is present only on one gluster server brick and is empty on the other 
server brick. Files are meant to be spread across both bricks according to the 
distributed volume definition.

On the VM:
--

# ls -al
total 1013417
drwxr-xr-x3 root root  4096 Jun  1 22:03 .
drwxrwxr-x3 root root  1024 Jun  1 21:24 ..
-rw---1 root root 31478251520 Jun  1 21:52 file
-rw---1 root root 157391257600 Jun  1 21:54 file1
-rw---1 root root 629565030400 Jun  1 21:55 file2
-rw---1 root root 708260659200 Jun  1 21:59 file3
-rw---1 root root 6295650304 Jun  1 22:01 file4
-rw---1 root root 39333801984 Jun  1 22:01 file5
-rw---1 root root 7864320 Jun  1 22:04 file6
drwx--2 root root 16384 Jun  1 21:24 lost+found
--
# du -sch *
20.0M   file
100.0M  file1
400.0M  file2
454.0M  file3
4.0Mfile4
11.6M   file5
0   file6
16.0K   lost+found
989.7M  total



On the gluster server nodes:
---
root@primary:/export/sdd1/brick# ll
total 12
drwxr-xr-x 2 root root 4096 Jun  2 04:08 ./
drwxr-xr-x 4 root root 4096 May 27 08:42 ../
root@primary:/export/sdd1/brick#
--

root@secondary:/export/sdd1/brick# ll
total 1046536
drwxr-xr-x 2 root root   4096 Jun  2 08:51 ./
drwxr-xr-x 4 root root   4096 May 27 08:43 ../
-rw-rw-rw- 1  108  115 1073741824 Jun  2 09:35 
volume-0ec560be-997f-46da-9ec8-e9d6627f2de1
root@secondary:/export/sdd1/brick#
-


Thanks
Kumar








-Original Message-
From: Franco Broi [mailto:franco.b...@iongeo.com]
Sent: Monday, June 02, 2014 6:35 PM
To: Gnan Kumar, Yalla
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Distributed volumes

Just do an ls on the bricks, the paths are the same as the mounted filesystem.

On Mon, 2014-06-02 at 12:26 +, yalla.gnan.ku...@accenture.com
wrote:
> Hi All,
>
>
>
> I have created a distributed volume of 1 GB ,  using two bricks from
> two different servers.
>
> I have written 7 files whose sizes are a total of  1 GB.
>
> How can I check that files are distributed on both the bricks ?
>
>
>
>
>
> Thanks
>
> Kumar
>
>
>
>
> __
>
>
> This message is for the designated recipient only and may contain
> privileged, proprietary, or otherwise confidential information. If you
> have received it in error, please notify the sender immediately and
> delete the original. Any other use of the e-mail by you is prohibited.
> Where allowed by local law, electronic communications with Accenture
> and its affiliates, including e-mail and instant messaging (including
> content), may be scanned by our systems for the purposes of
> information security and assessment of internal compliance with
> Accenture policy.
> __
> 
>
> www.accenture.com
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users






This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise confidential information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the e-mail by you is prohibited. Where allowed by local law, electronic 
communications with Accenture and its affiliates, including e-mail and instant 
messaging (including content), may be scanned by our systems for the purposes 
of information security and assessment of internal compliance with Accenture 
policy.
__

www.accenture.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users