Re: [Gluster-users] Cannot rename files with root squashing and r-x folder group permissions

2014-07-17 Thread Saurabh Jain
Hello Dave, 

  This has been a known issue since 3.4.0. One of the reasons behind it is this:
"It fails only when the rename operation would cause the file to move from
one brick to another due to DHT. The file is not actually moved physically;
instead, a link file is created on the target brick."
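For anyone inspecting the bricks, a rough sketch of how such a DHT link file can be spotted (the brick path below is only an example): link files are zero-byte entries with the sticky bit set and a linkto xattr naming the brick that actually holds the data.

ls -l /export/brick1/somedir/renamed-file
# ---------T ... 0 ...   <- size 0 and sticky bit: a DHT link file, not the real data
getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/somedir/renamed-file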

Thanks,
Saurabh

- Original Message -
> From: "Raghavendra Bhat" 
> To: gluster-users@gluster.org
> Sent: Friday, July 18, 2014 10:32:24 AM
> Subject: Re: [Gluster-users] Cannot rename files with root squashing and r-x 
> folder group permissions
> 
> On Thursday 17 July 2014 03:46 PM, David Raffelt wrote:
> 
> 
> 
> Hi Raghavendra,
> I don't quite understand the issue. Yes, a rebalance was performed in
> December last year when I added a brick. However, we have only just come
> across this (reproducible) problem upon upgrading to 3.5.
> 
> Is there anything I can do to try and correct the issue? Perhaps turn off
> root squashing while running "gluster volume rebalance VOLNAME fix-layout
> start"?
> 
> Cheers,
> Dave
> 
> 
> Hi Dave,
> 
> For now, you can turn off root-squashing. I am still trying to find the root cause of the
> issue. I will update you ASAP with my findings.
> 
> Regards,
> Raghavendra Bhat
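For reference, a minimal sketch of that workaround (the volume name "data" is taken from the volume info further down; please verify the option and command names against your build before running them):

gluster volume set data server.root-squash off      # temporarily disable root squashing
gluster volume rebalance data fix-layout start      # repair the directory layouts
gluster volume rebalance data status                # wait until the fix-layout completes
gluster volume set data server.root-squash on       # re-enable root squashing afterwards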
> 
> 
> On 17 July 2014 17:31, Raghavendra Bhat <rab...@redhat.com> wrote:
> 
> 
> 
> On Wednesday 16 July 2014 10:18 AM, David Raffelt wrote:
> 
> 
> 
> Hi Raghavendra,
> No
> Thanks
> Dave
> 
> 
> 
> As per the cmd_log_history file (a hidden file in the log directory that
> stores the CLI commands executed on that peer), a rebalance seems to be
> running (or was run).
> 
> [2013-12-17 03:08:59.081232] : volume rebalance data start : SUCCESS
> [2013-12-17 03:09:14.631826] : volume rebalance data status : SUCCESS
> [2013-12-17 03:09:22.761097] : volume rebalance data status : SUCCESS
> [2013-12-17 03:09:27.748014] : volume rebalance data status : SUCCESS
> [2013-12-17 03:09:28.839242] : volume rebalance data status : SUCCESS
> [2013-12-17 03:10:39.982747] : volume rebalance data status : SUCCESS
> [2013-12-17 03:14:30.919676] : volume rebalance data status : SUCCESS
> [2013-12-17 03:14:33.772300] : volume rebalance data status : SUCCESS
> [2013-12-17 03:29:14.467954] : volume rebalance data status : SUCCESS
> [2013-12-17 03:29:43.303852] : volume rebalance data status : SUCCESS
> [2013-12-17 03:30:04.309054] : volume rebalance data status : SUCCESS
> [2013-12-17 04:35:45.631119] : volume rebalance data status : SUCCESS
> 
> 
> I think this is what has happened. As part of the rebalance, the layout might have
> changed for some directories, and distribute tries to repair it by doing a
> self-heal when a lookup is performed on the directory. Distribute performs this
> self-heal as root. But when the requests from that client reach the brick
> process, requests from root are remapped by default to nfsnobody (uid
> 65534), and that uid does not have permission to make some modifications (in
> this case the self-heal) to a directory which the brick thinks is owned by root.
> So the self-heal does not happen properly, and because of that some operations
> performed afterwards (in this case renaming a file within that directory)
> fail.
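A small sketch of how this theory can be checked on one of the servers (the brick path /export/beauty is from the volume info below; the directory name is only an example):

gluster volume info data | grep -i root-squash    # is root squashing enabled on the volume?
ls -ld /export/beauty/affected-dir                # ownership as the brick sees it (root-owned?)
getfattr -n trusted.glusterfs.dht -e hex /export/beauty/affected-dir   # the layout xattr the self-heal tries to rewrite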
> 
> Dave,
> Please let me know if I have missed anything. This is my observation based on
> the log files.
> 
> CCing Raghavendra G who might be able to clarify whether this is what
> happened.
> 
> Regards,
> Raghavendra Bhat
> 
> 
> 
> 
> 
> On 16 July 2014 14:47, Raghavendra Bhat <rab...@redhat.com> wrote:
> 
> 
> 
> On Tuesday 15 July 2014 01:57 PM, David Raffelt wrote:
> 
> 
> 
> Hi Raghavendra,
> Thanks for looking into this. Attached are the log files from the 3 peers.
> The glusterfs server is running on "Beauty". All 3 peers mount the volume with the
> native gluster client on /home. Each peer has a direct connection to the others,
> addressable via the /etc/hosts file.
> 
> Note that I do not see any new output in the log when this error occurs. Also
> note that I tried to replicate this issue on Ubuntu 14.04 with a single
> brick and could not replicate it.
> 
> Below is some more output that might help.
> Thanks!
> Dave
> 
> 
> 
> dave@beauty:~$ glusterfs --version
> glusterfs 3.5git built on Jun 30 2014 15:58:19
> Repository revision: git://git.gluster.com/glusterfs.git
> Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> It is licensed to you under your choice of the GNU Lesser
> General Public License, version 3 or any later version (LGPLv3
> or later), or the GNU General Public License, version 2 (GPLv2),
> in all cases as published by the Free Software Foundation.
> 
> 
> dave@beauty:~$ uname -r
> 3.15.4-1-ARCH
> 
> 
> dave@beauty:~$ sudo gluster volume info
> Volume Name: data
> Type: Distribute
> Volume ID: 1d5948c7-9b7a-40ca-8aa7-85c74bcef3bc
> Status: Started
> Number of Bricks: 3
> Transport-type: tcp
> Bricks:
> Brick1: beauty:/export/beauty
> Brick2: beast:/export/beast
> Brick3: benji:/export/benji
> Options Reconfigured:

Re: [Gluster-users] How to properly disable NFS ?

2012-06-11 Thread Saurabh Jain
Hello Philippe,


   Requesting you to please provide the following information:
   a. GlusterFS version
   b. Volume type
   c. Does this recur every time you try to disable NFS?

   Also, when nfs.disable is set to "on", the NFS process itself goes down and
nfs.log should report "shutting down".
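As a quick way to verify (a sketch using the volume name "distributed" from the message below; "gluster volume status" is available from 3.3 onwards):

gluster volume set distributed nfs.disable on
gluster volume status distributed nfs                 # the NFS server entry should no longer be online
grep -i "shutting down" /var/log/glusterfs/nfs.log    # the shutdown message mentioned above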

Thanks,
Saurabh

- Original Message -
From: "Philippe Muller" 
To: gluster-users@gluster.org
Sent: Monday, June 11, 2012 2:38:37 PM
Subject: [Gluster-users] How to properly disable NFS ?


Hi, 

I tried to disable the NFS service for a volume: 
gluster volume set distributed nfs.disable on 

It stops the NFS service: 
[2012-06-11 11:05:30.142564] I [glusterd-utils.c:1003:glusterd_service_stop] 
0-: Stopping gluster nfs running in pid: 15490 

But since I disabled NFS, I get this error message in all glusterd logs, every 
3 seconds: 
[2012-06-11 11:07:26.026984] I [socket.c:1798:socket_event_handler] 
0-transport: disconnecting now 
[2012-06-11 11:07:29.027195] I [socket.c:1798:socket_event_handler] 
0-transport: disconnecting now 
[2012-06-11 11:07:32.027447] I [socket.c:1798:socket_event_handler] 
0-transport: disconnecting now 

Did I miss something to properly disable NFS? 

Thanks! 

Philippe 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] can't delete files and directories from windows NFS client

2012-02-15 Thread Saurabh Jain
Yeah, sure. Give me some time to look into this issue and I will get back to 
you.

Regards,
Saurabh

From: Kazuyuki Morita [k.mor...@ntt.com]
Sent: Thursday, February 16, 2012 12:22 PM
To: Saurabh Jain
Cc: gluster-users@gluster.org
Subject: RE: [Gluster-users] can't delete files and directories from windows 
NFS client

Hello Saurabh,

  Thank you for your response. We will check Bugzilla periodically.

We would like to ask you a favor.
  We are also troubled by another glusterfs bug.
  Could you file a case in Bugzilla about that bug if you can reproduce it?
  See below for the bug details.

---e-mail: possible memory leak when 'kill -HUP glusterfs'---

Subject: possible memory leak when 'kill -HUP glusterfs'
Date: Mon, 23 Jan 2012 18:31:36 +0900
From: Kenta Takahashi 
To: gluster-de...@nongnu.org

Hi,

We've found possible memory leak when kill -HUP the glusterfs process.

ENVIRONMENT:
OS: RHEL6 x86_64
GlusterFS: 3.2.5

HOW TO REPRODUCE:
1. install glusterfs
2. start glusterd
3. create and start a volume
4. repeat 'killall -HUP glusterfs'

Then you can see that the glusterfs process keeps increasing its memory usage until 
the entire RAM/swap space is exhausted, and then dies (is killed?) silently (no logs are generated).
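A small sketch of step 4 that also tracks memory growth between signals (the iteration count and interval are arbitrary; it assumes a glusterfs client/mount process is running):

for i in $(seq 1 500); do
    killall -HUP glusterfs
    ps -C glusterfs -o pid=,rss=    # resident set size should keep growing across iterations if the leak is present
    sleep 1
done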

We send 'kill -HUP' to the glusterfs processes when logrotate runs, as done here:
https://github.com/gluster/glusterfs/blob/master/extras/glusterfs-logrotate

Any ideas on how to work around/fix this?

--
Kenta Takahashi
knt.takaha...@ntt.com

-

Thanks & Regards,

--
Kazuyuki Morita
k.mor...@ntt.com



From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Saurabh Jain
Sent: Wednesday, February 15, 2012 8:41 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] can't delete files and directories from windows 
NFS client

Hello Kazuyuki,

   We have reproduced this issue with Windows 7, and I have filed a bug to 
further triage the problem:
   https://bugzilla.redhat.com/show_bug.cgi?id=790781

Thanks & Regards,
Saurabh
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] can't delete files and directories from windows NFS client

2012-02-15 Thread Saurabh Jain
Hello Kazuyuki,

   We have reproduced this issue with Windows 7, and I have filed a bug to 
further triage the problem:
   https://bugzilla.redhat.com/show_bug.cgi?id=790781

Thanks & Regards,
Saurabh
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Quota problems with Gluster3.3b2

2012-01-23 Thread Saurabh Jain
Hello Daniel, 


 I tried again with 3.3beta2 and gluster-object (UFO) to upload a file while 
quota was enabled, and I was able to do it successfully.


Requesting you again to do the following:
1. On the mount point, try creating some files/directories manually with quota 
enabled and then disabled (a small sketch follows below).
2. Share the information from /var/log/messages and the glusterfs logs from the 
glusterfs mount.
3. It would be advisable to try with the latest glusterfs code and to use UFO 
on top of that.
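A rough sketch of item 1 (the mount point /mnt/r2 is hypothetical; the volume name r2 is taken from your earlier report):

gluster volume quota r2 enable
touch /mnt/r2/test/quota-on-file && mkdir /mnt/r2/test/quota-on-dir   # do these succeed with quota enabled?
gluster volume quota r2 disable
touch /mnt/r2/test/quota-off-file                                     # compare the behaviour with quota disabled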

Thanks,
Saurabh

From: Daniel Pereira [d.pere...@skillupjapan.co.jp]
Sent: Friday, January 20, 2012 12:02 PM
To: Saurabh Jain
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Quota problems with Gluster3.3b2

  Hello Saurabh,

  Sorry for the long delay getting back to you, and thank you for
replying to me!

  To reproduce this, I'm running a simple st command like the one below; I'm doing
nothing in parallel:
st -A http://IP:80/auth/v1.0 -U r2:user -K pass upload test manual.txt

  If I do
/usr/local/sbin/gluster volume quota r2 disable
  the command succeeds. But if I do:
/usr/local/sbin/gluster volume quota r2 enable
  the command hangs with the permission error that I described earlier.

  My volume info:
# gluster volume info r2

Volume Name: r2
Type: Distributed-Replicate
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 192.168.4.103:/gluster/disk1
Brick2: 192.168.4.103:/gluster/disk2
Brick3: 192.168.4.103:/gluster/disk3
Brick4: 192.168.4.103:/gluster/disk4
Brick5: 192.168.4.103:/gluster/disk5
Brick6: 192.168.4.103:/gluster/disk6
Brick7: 192.168.4.103:/gluster/disk7
Brick8: 192.168.4.103:/gluster/disk8
Brick9: 192.168.4.103:/gluster/disk9
Brick10: 192.168.4.103:/gluster/disk10
Brick11: 192.168.4.103:/gluster/disk11
Brick12: 192.168.4.103:/gluster/disk12
Options Reconfigured:
performance.cache-size: 6GB
cluster.stripe-block-size: 1MB
features.quota: on

  Thanks in advance,
Daniel

On 1/16/12 9:29 PM, Saurabh Jain wrote:
> Hello Daniel,
>
> I am trying to reproduce the problem; meanwhile, I request you to update 
> me with the "volume info" output and the sequence of steps you are trying, as for 
> me it didn't fail when quota was enabled. Also, please mention whether you are trying to run 
> the operations in parallel.
>
>
> Thanks,
> Saurabh
>
>Hi everyone,
>
>I'm playing with Gluster3.3b2, and everything is working fine when
> uploading stuff through swift. However, when I enable quotas on Gluster,
> I randomly get permission errors. Sometimes I can upload files, most
> times I can't.
>
>I'm mounting the partitions with the acl flag. I've tried wiping out
> everything and starting from scratch, with the same result. As soon as I disable
> quotas, everything works great. I don't even need to add any limit-usage
> for the errors to crop up.
>
>Any idea?
>
> Daniel
>
>
>
>Relevant info:
>
> =
>To enable quotas I use the following commands:
>
> # /usr/local/sbin/gluster volume quota r2 enable
> Enabling quota has been successful
>
> # /usr/local/sbin/gluster volume quota r2 list
> Limit not set on any directory
>
> # /usr/local/sbin/gluster volume quota r2 limit-usage /test 10GB
> limit set on /test
>
> # /usr/local/sbin/gluster volume quota r2 list
>   path                  limit_set           size
> --------------------------------------------------
> /test                   10GB                88.0KB
>
> # /usr/local/sbin/gluster volume quota r2 disable
> Disabling quota will delete all the quota configuration. Do you want to
> continue? (y/n) y
> Disabling quota has been successful
>
> =
>Directory listing:
> ls -la *
> test:
> total 184
> drwxrwxrwx 2 user user 24576 Jan 13 12:07 .
> drwxrwxrwx 5 user user 24576 Jan 13 12:03 ..
> -rw------- 1 user user 82735 Jan 13 12:07 manual.txt
>
> tmp:
> total 96
> drwxrwxrwx 2 user user 24576 Jan 13 12:07 .
> drwxrwxrwx 5 user user 24576 Jan 13 12:03 ..
>
> ==
> Gluster logs:
> Unsuccessful write:
>
> [2012-01-13 12:06:27.97140] I [afr-common.c:1225:afr_launch_self_heal]
> 0-r2-replicate-4: background  entry self-heal triggered. path: /tmp
> [2012-01-13 12:06:27.97704] I
> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
> 0-r2-replicate-4: background  entry self-heal completed on /tmp
> [2012-01-13 12:06:27.102813] I [afr-common.c:1225:afr_launch_self_heal]
> 0-r2-replicate-4: background  entry self-heal triggered. path: /test
> [2012-01-13 12:06:27.103199] I
> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
> 0-r2-replicate-4: background  entry self-heal completed on /test
> [2012-01-13 12:06:27.10

[Gluster-users] Quota problems with Gluster3.3b2

2012-01-16 Thread Saurabh Jain

Hello Daniel,

   I am trying to reproduce the problem; meanwhile, I request you to update me 
with the "volume info" output and the sequence of steps you are trying, as for me it 
didn't fail when quota was enabled. Also, please mention whether you are trying to run the 
operations in parallel.


Thanks,
Saurabh

  Hi everyone,

  I'm playing with Gluster3.3b2, and everything is working fine when
uploading stuff through swift. However, when I enable quotas on Gluster,
I randomly get permission errors. Sometimes I can upload files, most
times I can't.

  I'm mounting the partitions with the acl flag. I've tried wiping out
everything and starting from scratch, with the same result. As soon as I disable
quotas, everything works great. I don't even need to add any limit-usage
for the errors to crop up.

  Any idea?

Daniel



  Relevant info:

=
  To enable quotas I use the following commands:

# /usr/local/sbin/gluster volume quota r2 enable
Enabling quota has been successful

# /usr/local/sbin/gluster volume quota r2 list
Limit not set on any directory

# /usr/local/sbin/gluster volume quota r2 limit-usage /test 10GB
limit set on /test

# /usr/local/sbin/gluster volume quota r2 list
  path                  limit_set           size
--------------------------------------------------
/test                   10GB                88.0KB

# /usr/local/sbin/gluster volume quota r2 disable
Disabling quota will delete all the quota configuration. Do you want to
continue? (y/n) y
Disabling quota has been successful

=
  Directory listing:
ls -la *
test:
total 184
drwxrwxrwx 2 user user 24576 Jan 13 12:07 .
drwxrwxrwx 5 user user 24576 Jan 13 12:03 ..
-rw------- 1 user user 82735 Jan 13 12:07 manual.txt

tmp:
total 96
drwxrwxrwx 2 user user 24576 Jan 13 12:07 .
drwxrwxrwx 5 user user 24576 Jan 13 12:03 ..

==
Gluster logs:
Unsuccessful write:

[2012-01-13 12:06:27.97140] I [afr-common.c:1225:afr_launch_self_heal]
0-r2-replicate-4: background  entry self-heal triggered. path: /tmp
[2012-01-13 12:06:27.97704] I
[afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
0-r2-replicate-4: background  entry self-heal completed on /tmp
[2012-01-13 12:06:27.102813] I [afr-common.c:1225:afr_launch_self_heal]
0-r2-replicate-4: background  entry self-heal triggered. path: /test
[2012-01-13 12:06:27.103199] I
[afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
0-r2-replicate-4: background  entry self-heal completed on /test
[2012-01-13 12:06:27.106876] E
[stat-prefetch.c:695:sp_remove_caches_from_all_fds_opened]
(-->/usr/local/lib/glusterfs/3.3beta2/xlator/mount/fuse.so(fuse_setxattr_resume+0x148)
[0x2acd7b862118]
(-->/usr/local/lib/glusterfs/3.3beta2/xlator/debug/io-stats.so(io_stats_setxattr+0x15f)
[0x2e8cf71f]
(-->/usr/local/lib/glusterfs/3.3beta2/xlator/performance/stat-prefetch.so(sp_setxattr+0x6c)
[0x2e6bc3fc]))) 0-r2-stat-prefetch: invalid argument: inode
[2012-01-13 12:06:27.164168] I
[client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-8: remote
operation failed: Permission denied
[2012-01-13 12:06:27.164211] I
[client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-9: remote
operation failed: Permission denied
[2012-01-13 12:06:27.164227] W [dht-rename.c:480:dht_rename_cbk]
0-r2-dht: /tmp/tmpyhBbAD: rename on r2-replicate-4 failed (Permission
denied)
[2012-01-13 12:06:27.164855] W [fuse-bridge.c:1351:fuse_rename_cbk]
0-glusterfs-fuse: 706: /tmp/tmpyhBbAD -> /test/manual.txt => -1
(Permission denied)
[2012-01-13 12:06:27.166115] I
[client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-8: remote
operation failed: Permission denied
[2012-01-13 12:06:27.166142] I
[client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-9: remote
operation failed: Permission denied
[2012-01-13 12:06:27.166156] W [dht-rename.c:480:dht_rename_cbk]
0-r2-dht: /tmp/tmpyhBbAD: rename on r2-replicate-4 failed (Permission
denied)
[2012-01-13 12:06:27.166763] W [fuse-bridge.c:1351:fuse_rename_cbk]
0-glusterfs-fuse: 707: /tmp/tmpyhBbAD -> /test/manual.txt => -1
(Permission denied)

Successful write:
[2012-01-13 12:07:02.49562] I [afr-common.c:1225:afr_launch_self_heal]
0-r2-replicate-4: background  entry self-heal triggered. path: /test
[2012-01-13 12:07:02.50013] I
[afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
0-r2-replicate-4: background  entry self-heal completed on /test
[2012-01-13 12:07:02.52255] I [afr-common.c:1225:afr_launch_self_heal]
0-r2-replicate-4: background  entry self-heal triggered. path: /tmp
[2012-01-13 12:07:02.52832] I
[afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
0-r2-replicate-4: background  entry self-heal completed on /tmp




___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS secondary groups not working.

2011-09-14 Thread Saurabh Jain
Hello Di Pe,


I tried on 3.2.3; it does not happen over there:
[saurabhj@Centos1 nfs-test]$ ls -lah
total 76K
drwxr-xr-x 7 root root   8.0K Sep  8 05:50 .
drwxr-xr-x 9 root root   4.0K Sep  8 05:55 ..
-rw-r--r-- 1 root root  0 Sep  8 01:23 a
-rw-r--r-- 1 root root  0 Sep  8 01:23 b
-rw-r--r-- 1 root root  0 Sep  8 01:23 c
drwxr-xr-x 3 root root   4.0K Sep  6 05:52 
fstest_832a4924d2073113d2422ecfc194abce
drwxr-xr-x 2 root root   4.0K Sep  8 01:23 share
drwxrwsr-x 2 root 503    4.0K Sep  8 01:28 share1
drwxrwsr-x 2 root admins 4.0K Sep  8 05:55 share2
drwxrwxr-x 2 root staff  4.0K Sep  9 01:01 share3
[saurabhj@Centos1 nfs-test]$
[saurabhj@Centos1 nfs-test]$
[saurabhj@Centos1 nfs-test]$ cd share2
[saurabhj@Centos1 share2]$ ls -lah
total 44K
drwxrwsr-x 2 root admins 8.0K Sep  8 05:56 .
drwxr-xr-x 7 root root   8.0K Sep  8 05:50 ..
-rwxrwsr-x 1 saurabhj admins 0 Sep  8 02:04 test
-rw-r--r-- 1 saurabhj admins 0 Sep  8 05:55 test1
-rw-r--r-- 1 saurabhj admins 0 Sep  9 01:01 testn
[saurabhj@Centos1 share2]$ touch test3
[saurabhj@Centos1 share2]$ ls -lia
total 48
 1566447115294410427 drwxrwsr-x 2 root admins 8192 Sep  9 01:03 .
   1 drwxr-xr-x 7 root root   8192 Sep  8 05:50 ..
11949810116864770188 -rwxrwsr-x 1 saurabhj admins 0 Sep  8 02:04 test
 3472874673268323252 -rw-r--r-- 1 saurabhj admins 0 Sep  8 05:55 test1
16864971545077150130 -rw-r--r-- 1 saurabhj admins 0 Sep  9 01:03 test3
18402121251818432703 -rw-r--r-- 1 saurabhj admins 0 Sep  9 01:01 testn
[saurabhj@Centos1 share2]$
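In case it helps narrow things down, a sketch of the same kind of test run from the NFS mount (the mount path is an example; the user and group names are the ones from the listing above):

ls -ld /mnt/nfs-test/share2        # drwxrwsr-x root admins: writable only via the "admins" group
id saurabhj                        # confirm "admins" appears only among the supplementary groups
su - saurabhj -c 'touch /mnt/nfs-test/share2/newfile && ls -l /mnt/nfs-test/share2/newfile'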

If I am missing something, please let me know, or provide the steps to 
reproduce this issue.

Thanks,
Saurabh
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Issue with Gluster Quota

2011-07-11 Thread Saurabh Jain
Hello Brian,


  I synced my gluster repository back to July 5th and tried quota on a certain 
directory of a distribute volume, and the quota was implemented properly there. Here are the 
logs:

   [root@centos-qa-client-3 glusterfs]# /root/july6git/inst/sbin/gluster volume 
quota dist list
path                  limit_set      size
--------------------------------------------
/dir                  10485760       10485760


[root@centos-qa-client-3 glusterfs]# /root/july6git/inst/sbin/gluster volume 
info

Volume Name: dist
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp,rdma
Bricks:
Brick1: 10.1.12.134:/mnt/dist
Brick2: 10.1.12.135:/mnt/dist
Options Reconfigured:
features.limit-usage: /dir:10MB
features.quota: on
[root@centos-qa-client-3 glusterfs]# 

Requesting you to please inform us about the revision to which your 
workspace is synced.

Thanks,
Saurabh

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of gluster-users-requ...@gluster.org [gluster-users-requ...@gluster.org]
Sent: Friday, July 08, 2011 12:30 AM
To: gluster-users@gluster.org
Subject: Gluster-users Digest, Vol 39, Issue 13

Send Gluster-users mailing list submissions to
gluster-users@gluster.org

To subscribe or unsubscribe via the World Wide Web, visit
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
gluster-users-requ...@gluster.org

You can reach the person managing the list at
gluster-users-ow...@gluster.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-users digest..."


Today's Topics:

   1. Re: Issue with Gluster Quota (Brian Smith)
   2. Re: Issues with geo-rep (Carl Chenet)


--

Message: 1
Date: Thu, 07 Jul 2011 13:10:06 -0400
From: Brian Smith 
Subject: Re: [Gluster-users] Issue with Gluster Quota
To: gluster-users@gluster.org
Message-ID: <4e15e86e.6030...@usf.edu>
Content-Type: text/plain; charset=ISO-8859-1

Sorry about that.  I re-populated with an 82MB dump from dd:

[root@gluster1 ~]# gluster volume quota home list
path                  limit_set      size
--------------------------------------------
/brs                  10485760       81965056

[root@gluster1 ~]# getfattr -m . -d -e hex /glusterfs/home/brs
getfattr: Removing leading '/' from absolute path names
# file: glusterfs/home/brs
security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x1bbcb9a08bf64406b440f3bb3ad334ed
trusted.glusterfs.dht=0x00017fff
trusted.glusterfs.quota.----0001.contri=0x6000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size=0x6000

[root@gluster2 ~]# getfattr -m . -d -e hex /glusterfs/home/brs
getfattr: Removing leading '/' from absolute path names
# file: glusterfs/home/brs
security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x1bbcb9a08bf64406b440f3bb3ad334ed
trusted.glusterfs.dht=0x00017ffe
trusted.glusterfs.quota.----0001.contri=0x04e25000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size=0x04e25000

Brian Smith
Senior Systems Administrator
IT Research Computing, University of South Florida
4202 E. Fowler Ave. ENB308
Office Phone: +1 813 974-1467
Organization URL: http://rc.usf.edu

On 07/07/2011 04:50 AM, Mohammed Junaid wrote:
>>
>> [root@gluster1 ~]# getfattr -m . -d -e hex /glusterfs/home/brs
>> getfattr: Removing leading '/' from absolute path names
>> # file: glusterfs/home/brs
>> security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
>> trusted.gfid=0x1bbcb9a08bf64406b440f3bb3ad334ed
>> trusted.glusterfs.dht=0x00017fff
>>
>> trusted.glusterfs.quota.----0001.contri=0x6000
>> trusted.glusterfs.quota.dirty=0x3000
>> trusted.glusterfs.quota.size=0x6000
>>
>> and
>>
>> [root@gluster2 ~]# getfattr -m . -d -e hex /glusterfs/home/brs
>> getfattr: Removing leading '/' from absolute path names
>> # file: glusterfs/home/brs
>> security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
>> trusted.gfid=0x1bbcb9a08bf64406b440f3bb3ad334ed
>> trusted.glusterfs.dht=0x00017ffe
>>
>> trusted.glusterfs.quota.----0001.contri=0x2000
>> trusted.glusterfs.quota.dirty=0x3000
>> trusted.glusterfs.quota.size=0x2000
>
>
> trusted.glusterfs.quota.size=0x6000
> trusted.glusterfs.quota.size=0x2000
>
> So, quota adds these values to calculate the size of the directory. They are
> in hex, so when I add them the value comes up to 32 KB. So I s
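For reference, the 32 KB figure comes from adding the two quota.size values quoted above; a quick shell check:

echo $(( 0x6000 + 0x2000 ))            # 24576 + 8192 = 32768 bytes
echo $(( (0x6000 + 0x2000) / 1024 ))   # = 32 (KB)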

Re: [Gluster-users] Quota confusion

2011-06-20 Thread Saurabh Jain
Hi Jiri,


I have checked this on a dist/dist-rep volume and this is not a problem for me.

Requesting you to run "find . | xargs stat" on the mount point, and once this 
is finished, collect the quota information from all the servers. Also, please 
share your setup information.
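A sketch of that check (the mount path is an example; the volume name virtual_782 is from your report):

cd /mnt/virtual_782                       # the glusterfs mount point
find . | xargs stat > /dev/null           # force a lookup/stat on every entry so the accounting is refreshed
gluster volume quota virtual_782 list     # then run this on each server and compare the sizes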

Thanks,
Saurabh

Hi Jiri,

If a limit of 600GB is set on /, then the quota translator will allow you to
store 600GB of data under /, and any writes after that will not be allowed.
The same rule applies to replicate as well; here it will consider only one of
the copies of the data for calculating the sizes.


Junaid
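As a quick illustration of that accounting (using the 600GB limit and the replica count of 2 from this thread):

# quota counts one copy: a 600GB limit means 600GB of user data under /
echo $(( 600 * 2 ))   # = 1200 GB of raw brick space used by replicate, still 600GB as far as quota is concerned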


On Thu, Jun 16, 2011 at 3:04 AM, Jiri Lunacek wrote:

> Hi all,
>
> I am having trouble understanding how quota works, or how to successfully
> set it up and use it. A best-practices list for the quota feature would be
> great.
>
> My problem is this. We have a volume with replicate 2, 2 bricks, quota
> enabled, usage limit on / set to 600GB, gluster packages 3.2.1.
>
>  gluster volume quota virtual_782 list
> on storage server 1 shows:
>
>       path          limit_set             size
> ---------------------------------------------------
> /                   644245094400          277136318464
>
> on storage server 2 shows:
>
>       path          limit_set             size
> ---------------------------------------------------
> /                   644245094400          277160824832
>
> on the machine which has the volume mounted (which is also a node in the same
> cluster, holding no bricks), it shows:
>
>       path          limit_set             size
> ---------------------------------------------------
> /                   600GB
>
>
> writes to the share fail with: -1 (Disk quota exceeded)
>
> I am confused about what the quota really means. From our setup it seems that the
> quota translator sums the used size from all servers and compares it to the set
> quota.
> I'd expect (and the docs seem to suggest) that it should allow
> (with replica 2) 600GB on each server.
>
> If this is expected behavior I think this is a subject for documentation.
>
> TIA for any clear information in this matter.
> Jiri
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>


--

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] mysql regression tests fail with gluster-3.1.3

2011-04-28 Thread Saurabh Jain


Hello Jerry,

Sorry for getting back to you late, as we were busy with releases. 

Requesting you to please share the queries you used in your mysql test 
script and the schema of the table(s). This information will help us to further 
debug the problem and find the cause.

Thanks
Saurabh
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users