Re: [Gluster-users] KVM guest I/O errors with xfs backed gluster volumes

2013-10-30 Thread Jacob Yundt
> Jacob - In the first mail you sent on this subject, you mention that you 
> don't see any issues when gluster volume is backed by ext4. Does this still 
> hold true ?
>

Correct, everything works "as expected" when using gluster bricks
backed by ext4 filesystems (on top of LVM).

Let me know if you'd like straces or gluster logs from similar trials with ext4.
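If straces would help, something along these lines should capture the relevant calls from a running qemu process (a sketch; the pgrep pattern is only an example, adjust it to the actual qemu command line):

# follow forks, timestamp each call, and log the I/O-related syscalls of one qemu process
strace -f -tt -e trace=open,ioctl,pread64,pwrite64 -o /tmp/qemu-io.strace -p "$(pgrep -f qemu-kvm | head -1)"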

-Jacob
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] KVM guest I/O errors with xfs backed gluster volumes

2013-10-30 Thread Bharata B Rao
On Tue, Oct 29, 2013 at 1:21 PM, Anand Avati  wrote:

> Looks like what is happening is that qemu performs ioctls() on the backend
> to query logical_block_size (for direct IO alignment). That works on XFS,
> but fails on FUSE (hence qemu ends up performing IO with default 512
> alignment rather than 4k).
>
> Looks like this might be something we can enhance in the gluster driver in qemu.
> Note that glusterfs does not have an ioctl() FOP, but we could probably
> wire up a virtual xattr call for this purpose.
>
> Copying Bharata to check if he has other solutions in mind.
>
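To make the "virtual xattr" idea above concrete: such a key would presumably be queried from the client side along these lines. The xattr name below is hypothetical (no such key is known to exist); it only sketches the proposed interface:

# hypothetical xattr name, shown only to illustrate the proposed interface
getfattr -n trusted.glusterfs.logical-block-size /mnt/gvol/vm-disk.img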

I see alignment issues and a subsequent QEMU failure (pread() failing with
EINVAL) when I use a file from an XFS mount point (with sectsz=4k) as a virtio
disk with the cache=none QEMU option. However, this failure isn't seen when I
have sectsz=512. And all this is w/o gluster. So there seem to be some
alignment issues even w/o gluster; I will debug more and get back.
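For anyone trying to reproduce this, a rough sketch of the relevant checks (device, brick path, and image path are placeholders):

# logical sector size of the underlying block device (512 vs 4096)
blockdev --getss /dev/sdb
# sector size the XFS filesystem was created with (look for sectsz=)
xfs_info /bricks/xfs | grep sectsz
# attach the image with cache=none, which makes qemu open it with O_DIRECT
qemu-kvm -m 1024 -drive file=/bricks/xfs/vm.img,if=virtio,cache=none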

Jacob - In the first mail you sent on this subject, you mention that you
don't see any issues when gluster volume is backed by ext4. Does this still
hold true ?

Regards,
Bharata.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Strange behaviour with add-brick followed by remove-brick

2013-10-30 Thread Brian Cipriano
Yep, this bug report is exactly what I’ve experienced. Same steps to reproduce. 
Yes, I made sure to allow the “remove-brick status” to report as complete 
before committing.

> It's just allowing the migration of files TO the decommissioned subvolume.

This is exactly the behavior I saw.

Let me know if there’s anything else I can provide!

- brian


On Oct 30, 2013, at 11:46 AM, Lukáš Bezdička  
wrote:

> remove-brick on distribute does not work for me:
> https://bugzilla.redhat.com/show_bug.cgi?id=1024369
> 
> 
> On Wed, Oct 30, 2013 at 4:40 PM, Brian Cipriano  wrote:
> I had the exact same experience recently with a 3.4 distributed cluster I set 
> up. I spent some time on the IRC but couldn’t track it down. Seems 
> remove-brick is broken in 3.3 and 3.4. I guess folks don’t remove bricks very 
> often :) 
> - brian
> 
> 
> 
> 
> 
> On Oct 30, 2013, at 11:21 AM, Lalatendu Mohanty  wrote:
> 
>> On 10/30/2013 08:40 PM, Lalatendu Mohanty wrote:
>>> On 10/30/2013 03:43 PM, B.K.Raghuram wrote:
 I have gluster 3.4.1 on 4 boxes with hostnames n9, n10, n11, n12. I
 did the following sequence of steps and ended up with losing data so
 what did I do wrong?!
 
 - Create a distributed volume with bricks on n9 and n10
 - Started the volume
 - NFS mounted the volume and created 100 files on it. Found that n9
 had 45, n10 had 55
 - Added a brick n11 to this volume
 - Removed a brick n10 from the volume with gluster remove-brick <volname> <brick> start
 - n9 now has 45 files, n10 has 55 files and n11 has 45 files(all the
 same as on n9)
 - Checked status, it shows that no rebalanced files but that n10 had
 scanned 100 files and completed. 0 scanned for all the others
 - I then did a rebalance start force on the vol and found that n9 had
 0 files, n10 had 55 files and n11 had 45 files - weird - looked like
 n9 had been removed but double checked again and found that n10 had
 indeed been removed.
 - did a remove-brick commit. Now same file distribution after that.
 volume info now shows the volume to have n9 and n11 as bricks.
 - did a rebalance start again on the volume. The rebalance-status now
 shows n11 had 45 rebalanced files, all the brick nodes had 45 files
 scanned and all show complete. The file layout after this is n9 has 45
 files and n10 has 55 files. n11 has 0 files!
 - An ls on the nfs mount now shows only 45 files so the other 55 not
 visible because they are on n10 which is not part of the volume!
 
 What have I done wrong in this sequence?
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>> 
>>> I think running rebalance (force) in between "remove-brick start" and 
>>> "remove-brick commit" is the issue. Can you please paste your commands as 
>>> per the timeline of events? That would make it clearer. 
>>> 
>>> Below are the steps, I do to replace a brick and it works for me. 
>>> 
>>> gluster volume add-brick VOLNAME NEW-BRICK
>>> gluster volume remove-brick VOLNAME BRICK start
>>> gluster volume remove-brick VOLNAME BRICK status
>>> gluster volume remove-brick VOLNAME BRICK commit
>> I would also suggest using distribute-replicate volumes, so that you always 
>> have a replica copy; that reduces the probability of losing data.
>> 
>> -Lala 
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Samba vfs_glusterfs Quota Support?

2013-10-30 Thread Ira Cooper
I suspect you are missing the patch needed to make this work.

http://git.samba.org/?p=samba.git;a=commit;h=872a7d61ca769c47890244a1005c1bd445a3bab6
It went in around the 3.6.13 timeframe, if I'm reading the git history 
correctly.

The bug manifests when the base of the share has a different amount of "Quota 
Allowance" than elsewhere in the tree.

\\foo\ - 5GB quota
\\foo\bar - 2.5GB quota

When you run "dir" in \\foo you get the results from the 5GB quota, and the 
same in \\foo\bar, which is incorrect and highly confusing to users.

https://bugzilla.samba.org/show_bug.cgi?id=9646

Despite my discussion of "multi-volume" it should be the same bug.
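For anyone unsure whether their build already has that commit, a quick check (the samba source path is a placeholder):

# the fix landed around 3.6.13; check the running smbd first
smbd -V
# for a source checkout, test whether the commit linked above is an ancestor of the build
cd /path/to/samba && git merge-base --is-ancestor \
    872a7d61ca769c47890244a1005c1bd445a3bab6 HEAD && echo "patch present" || echo "patch missing"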

Thanks,

-Ira / i...@samba.org

- Original Message -
From: "David Gibbons" 
To: gluster-users@gluster.org
Sent: Wednesday, October 30, 2013 11:04:49 AM
Subject: Re: [Gluster-users] Samba vfs_glusterfs Quota Support?

Thanks all for the pointers. 



What version of Samba are you running? 

Samba version is 3.6.9: 
[root@gfs-a-1 /]# smbd -V 
Version 3.6.9 

Gluster version is 3.4.1 git: 
[root@gfs-a-1 /]# glusterfs --version 
glusterfs 3.4.1 built on Oct 21 2013 09:22:36 


It should be 
# gluster volume set gfsv0 features.quota-deem-statfs on 
[root@gfs-a-1 /]# gluster volume set gfsv0 features.quota-deem-statfs on 
volume set: failed: option : features.quota-deem-statfs does not exist 
Did you mean features.quota-timeout? 

I wonder if the quota-deem-statfs is part of a more recent version? 

Cheers, 
Dave 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Upgrade 3.0 to 3.2.x

2013-10-30 Thread Lalatendu Mohanty

On 10/30/2013 09:02 PM, Lysa Milch wrote:

Hello All,

I'm running a gluster 3.0 installation with a distributed volume.  I 
need to upgrade this to gluster 3.2.x with a new 
distributed-replicated volume. I've seen the upgrade doc 
(http://gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide) 
but that covers moving it to a similar volume type.


Is there a way (and is there documentation on how) to upgrade and change 
the volume type?

Thanks!



Regarding changing the volume type, you can do it post-upgrade by using the 
add-brick command as per your requirement. I have pasted an example here where 
I changed a volume from distribute to distributed-replicate.


# gluster v info

Volume Name: patchy
Type: Distribute
Volume ID: 3b02338f-949b-459c-9c13-fd4c63d91d31
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: dhcp159-54:/bricks/patchy-b11
Brick2: dhcp159-54:/bricks/patchy-b12
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on

[root@dhcp159-54 ~]# gluster v add-brick patchy replica 2 
dhcp159-54:/bricks/patchy-b13 dhcp159-54:/bricks/patchy-b14

volume add-brick: success

[root@dhcp159-54 ~]# gluster v info

Volume Name: patchy
Type: Distributed-Replicate
Volume ID: 3b02338f-949b-459c-9c13-fd4c63d91d31
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: dhcp159-54:/bricks/patchy-b11
Brick2: dhcp159-54:/bricks/patchy-b13
Brick3: dhcp159-54:/bricks/patchy-b12
Brick4: dhcp159-54:/bricks/patchy-b14
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on
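One caveat, as an assumption on top of the example above: after add-brick turns the volume into a replicate layout, data that already existed on the old bricks is not necessarily copied to the new replica bricks until a self-heal runs. A sketch, using the volume name from the example (the client mount path is a placeholder):

# on 3.3 and later, ask the self-heal daemon to sync existing files to the new bricks
gluster volume heal patchy full
gluster volume heal patchy info
# 3.2.x predates the heal command; there a full traversal of a client mount triggers self-heal
find /mnt/patchy -noleaf -print0 | xargs --null stat > /dev/null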

This email and any attachments may contain confidential and 
proprietary information of Blackboard that is for the sole use of the 
intended recipient. If you are not the intended recipient, disclosure, 
copying, re-distribution or other use of any of this information is 
strictly prohibited. Please immediately notify the sender and delete 
this transmission if you received this email in error.



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Strange behaviour with add-brick followed by remove-brick

2013-10-30 Thread Lukáš Bezdička
remove-brick on distribute does not work for me:
https://bugzilla.redhat.com/show_bug.cgi?id=1024369


On Wed, Oct 30, 2013 at 4:40 PM, Brian Cipriano wrote:

> I had the exact same experience recently with a 3.4 distributed cluster I
> set up. I spent some time on the IRC but couldn’t track it down. Seems
> remove-brick is broken in 3.3 and 3.4. I guess folks don’t remove bricks
> very often :)
>
> - brian
>
>
>
>
>
> On Oct 30, 2013, at 11:21 AM, Lalatendu Mohanty 
> wrote:
>
>  On 10/30/2013 08:40 PM, Lalatendu Mohanty wrote:
>
> On 10/30/2013 03:43 PM, B.K.Raghuram wrote:
>
> I have gluster 3.4.1 on 4 boxes with hostnames n9, n10, n11, n12. I
> did the following sequence of steps and ended up with losing data so
> what did I do wrong?!
>
> - Create a distributed volume with bricks on n9 and n10
> - Started the volume
> - NFS mounted the volume and created 100 files on it. Found that n9
> had 45, n10 had 55
> - Added a brick n11 to this volume
> - Removed a brick n10 from the volume with gluster remove-brick <volname> <brick> start
> - n9 now has 45 files, n10 has 55 files and n11 has 45 files(all the
> same as on n9)
> - Checked status, it shows that no rebalanced files but that n10 had
> scanned 100 files and completed. 0 scanned for all the others
> - I then did a rebalance start force on the vol and found that n9 had
> 0 files, n10 had 55 files and n11 had 45 files - weird - looked like
> n9 had been removed but double checked again and found that n10 had
> indeed been removed.
> - did a remove-brick commit. Now same file distribution after that.
> volume info now shows the volume to have n9 and n11 as bricks.
> - did a rebalance start again on the volume. The rebalance-status now
> shows n11 had 45 rebalanced files, all the brick nodes had 45 files
> scanned and all show complete. The file layout after this is n9 has 45
> files and n10 has 55 files. n11 has 0 files!
> - An ls on the nfs mount now shows only 45 files so the other 55 not
> visible because they are on n10 which is not part of the volume!
>
> What have I done wrong in this sequence?
> ___
> Gluster-users mailing 
> listGluster-users@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
> I think running rebalance (force) in between "remove-brick start" and
> "remove-brick commit" is the issue. Can you please paste your commands as
> per the timeline of events? That would make it clearer.
>
> Below are the steps, I do to replace a brick and it works for me.
>
>
>    1. gluster volume add-brick VOLNAME NEW-BRICK
>    2. gluster volume remove-brick VOLNAME BRICK start
>    3. gluster volume remove-brick VOLNAME BRICK status
>    4. gluster volume remove-brick VOLNAME BRICK commit
>
>  I would also suggest using distribute-replicate volumes, so that you always
> have a replica copy; that reduces the probability of losing data.
>
> -Lala
>
>  ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Strange behaviour with add-brick followed by remove-brick

2013-10-30 Thread Lalatendu Mohanty

On 10/30/2013 09:10 PM, Brian Cipriano wrote:
I had the exact same experience recently with a 3.4 distributed 
cluster I set up. I spent some time on the IRC but couldn’t track it 
down. Seems remove-brick is broken in 3.3 and 3.4. I guess folks don’t 
remove bricks very often :)


- brian



Brian,

I have tried remove-brick a couple of times and it worked for me. From 
your experience it seems remove-brick has a bug. I would suggest that you 
file a bug or give us steps to reproduce, so that I can reproduce it 
in my environment and file a bug for it.






On Oct 30, 2013, at 11:21 AM, Lalatendu Mohanty wrote:



On 10/30/2013 08:40 PM, Lalatendu Mohanty wrote:

On 10/30/2013 03:43 PM, B.K.Raghuram wrote:

I have gluster 3.4.1 on 4 boxes with hostnames n9, n10, n11, n12. I
did the following sequence of steps and ended up with losing data so
what did I do wrong?!

- Create a distributed volume with bricks on n9 and n10
- Started the volume
- NFS mounted the volume and created 100 files on it. Found that n9
had 45, n10 had 55
- Added a brick n11 to this volume
- Removed a brick n10 from the volume with gluster remove-brick <volname> <brick> start
- n9 now has 45 files, n10 has 55 files and n11 has 45 files(all the
same as on n9)
- Checked status, it shows that no rebalanced files but that n10 had
scanned 100 files and completed. 0 scanned for all the others
- I then did a rebalance start force on the vol and found that n9 had
0 files, n10 had 55 files and n11 had 45 files - weird - looked like
n9 had been removed but double checked again and found that n10 had
indeed been removed.
- did a remove-brick commit. Now same file distribution after that.
volume info now shows the volume to have n9 and n11 as bricks.
- did a rebalance start again on the volume. The rebalance-status now
shows n11 had 45 rebalanced files, all the brick nodes had 45 files
scanned and all show complete. The file layout after this is n9 has 45
files and n10 has 55 files. n11 has 0 files!
- An ls on the nfs mount now shows only 45 files so the other 55 not
visible because they are on n10 which is not part of the volume!

What have I done wrong in this sequence?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

I think running rebalance (force) in between "remove-brick start" and 
"remove-brick commit" is the issue. Can you please paste your 
commands as per the timeline of events? That would make it clearer.


Below are the steps, I do to replace a brick and it works for me.

 1. gluster volume add-brick VOLNAME NEW-BRICK
 2. gluster volume remove-brick VOLNAME BRICK start
 3. gluster volume remove-brick VOLNAME BRICK status
 4. gluster volume remove-brick VOLNAME BRICK commit

I would also suggest using distribute-replicate volumes, so that 
you always have a replica copy; that reduces the probability of 
losing data.


-Lala

___
Gluster-users mailing list
Gluster-users@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Strange behaviour with add-brick followed by remove-brick

2013-10-30 Thread Brian Cipriano
I had the exact same experience recently with a 3.4 distributed cluster I set 
up. I spent some time on the IRC but couldn’t track it down. Seems remove-brick 
is broken in 3.3 and 3.4. I guess folks don’t remove bricks very often :)

- brian




On Oct 30, 2013, at 11:21 AM, Lalatendu Mohanty  wrote:

> On 10/30/2013 08:40 PM, Lalatendu Mohanty wrote:
>> On 10/30/2013 03:43 PM, B.K.Raghuram wrote:
>>> I have gluster 3.4.1 on 4 boxes with hostnames n9, n10, n11, n12. I
>>> did the following sequence of steps and ended up with losing data so
>>> what did I do wrong?!
>>> 
>>> - Create a distributed volume with bricks on n9 and n10
>>> - Started the volume
>>> - NFS mounted the volume and created 100 files on it. Found that n9
>>> had 45, n10 had 55
>>> - Added a brick n11 to this volume
>>> - Removed a brick n10 from the volume with gluster remove-brick <volname> <brick> start
>>> - n9 now has 45 files, n10 has 55 files and n11 has 45 files(all the
>>> same as on n9)
>>> - Checked status, it shows that no rebalanced files but that n10 had
>>> scanned 100 files and completed. 0 scanned for all the others
>>> - I then did a rebalance start force on the vol and found that n9 had
>>> 0 files, n10 had 55 files and n11 had 45 files - weird - looked like
>>> n9 had been removed but double checked again and found that n10 had
>>> indeed been removed.
>>> - did a remove-brick commit. Now same file distribution after that.
>>> volume info now shows the volume to have n9 and n11 as bricks.
>>> - did a rebalance start again on the volume. The rebalance-status now
>>> shows n11 had 45 rebalanced files, all the brick nodes had 45 files
>>> scanned and all show complete. The file layout after this is n9 has 45
>>> files and n10 has 55 files. n11 has 0 files!
>>> - An ls on the nfs mount now shows only 45 files so the other 55 not
>>> visible because they are on n10 which is not part of the volume!
>>> 
>>> What have I done wrong in this sequence?
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>> 
>> I think running rebalance (force) in between "remove-brick start" and 
>> "remove-brick commit" is the issue. Can you please paste your commands as per 
>> the timeline of events? That would make it clearer. 
>> 
>> Below are the steps, I do to replace a brick and it works for me. 
>> 
>> gluster volume add-brick VOLNAME NEW-BRICK
>> gluster volume remove-brick VOLNAME BRICK start
>> gluster volume remove-brick VOLNAME BRICK status
>> gluster volume remove-brick VOLNAME BRICK commit
> I would also suggest using distribute-replicate volumes, so that you always 
> have a replica copy; that reduces the probability of losing data.
> 
> -Lala 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Upgrade 3.0 to 3.2.x

2013-10-30 Thread Lysa Milch
Hello All,

I'm running a gluster 3.0 installation with a distributed volume.  I need to 
upgrade this to gluster 3.2.x with a new distributed-replicated volume. I've 
seen the upgrade doc 
(http://gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide)
 but that covers moving it to a similar volume type.

Is there a way (and is there documentation on how) to upgrade and change 
the volume type?
Thanks!

This email and any attachments may contain confidential and proprietary 
information of Blackboard that is for the sole use of the intended recipient. 
If you are not the intended recipient, disclosure, copying, re-distribution or 
other use of any of this information is strictly prohibited. Please immediately 
notify the sender and delete this transmission if you received this email in 
error.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Strange behaviour with add-brick followed by remove-brick

2013-10-30 Thread Lalatendu Mohanty

On 10/30/2013 03:43 PM, B.K.Raghuram wrote:

I have gluster 3.4.1 on 4 boxes with hostnames n9, n10, n11, n12. I
did the following sequence of steps and ended up with losing data so
what did I do wrong?!

- Create a distributed volume with bricks on n9 and n10
- Started the volume
- NFS mounted the volume and created 100 files on it. Found that n9
had 45, n10 had 55
- Added a brick n11 to this volume
- Removed a brick n10 from the volume with gluster remove-brick <volname> <brick> start
- n9 now has 45 files, n10 has 55 files and n11 has 45 files(all the
same as on n9)
- Checked status, it shows that no rebalanced files but that n10 had
scanned 100 files and completed. 0 scanned for all the others
- I then did a rebalance start force on the vol and found that n9 had
0 files, n10 had 55 files and n11 had 45 files - weird - looked like
n9 had been removed but double checked again and found that n10 had
indeed been removed.
- did a remove-brick commit. Now same file distribution after that.
volume info now shows the volume to have n9 and n11 as bricks.
- did a rebalance start again on the volume. The rebalance-status now
shows n11 had 45 rebalanced files, all the brick nodes had 45 files
scanned and all show complete. The file layout after this is n9 has 45
files and n10 has 55 files. n11 has 0 files!
- An ls on the nfs mount now shows only 45 files so the other 55 not
visible because they are on n10 which is not part of the volume!

What have I done wrong in this sequence?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

I think running rebalance (force) in between "remove-brick start" and 
"remove-brick commit" is the issue. Can you please paste your commands as 
per the timeline of events? That would make it clearer.


Below are the steps, I do to replace a brick and it works for me.

1. gluster volume add-brick VOLNAME NEW-BRICK
2. gluster volume remove-brick VOLNAME BRICK start
3. gluster volume remove-brick VOLNAME BRICK status
4. gluster volume remove-brick VOLNAME BRICK commit
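As a concrete illustration of the same sequence with hypothetical names (volume "testvol", adding a brick on n11 and decommissioning the one on n10), waiting for the migration to finish before committing:

gluster volume add-brick testvol n11:/bricks/b1
gluster volume remove-brick testvol n10:/bricks/b1 start
# poll until the status column shows "completed" for the decommissioned brick
gluster volume remove-brick testvol n10:/bricks/b1 status
# only then make it final; no separate "rebalance ... force" is run in between
gluster volume remove-brick testvol n10:/bricks/b1 commit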


-Lala
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Strange behaviour with add-brick followed by remove-brick

2013-10-30 Thread Lalatendu Mohanty

On 10/30/2013 08:40 PM, Lalatendu Mohanty wrote:

On 10/30/2013 03:43 PM, B.K.Raghuram wrote:

I have gluster 3.4.1 on 4 boxes with hostnames n9, n10, n11, n12. I
did the following sequence of steps and ended up with losing data so
what did I do wrong?!

- Create a distributed volume with bricks on n9 and n10
- Started the volume
- NFS mounted the volume and created 100 files on it. Found that n9
had 45, n10 had 55
- Added a brick n11 to this volume
- Removed a brick n10 from the volume with gluster remove-brick <volname> <brick> start
- n9 now has 45 files, n10 has 55 files and n11 has 45 files(all the
same as on n9)
- Checked status, it shows that no rebalanced files but that n10 had
scanned 100 files and completed. 0 scanned for all the others
- I then did a rebalance start force on the vol and found that n9 had
0 files, n10 had 55 files and n11 had 45 files - weird - looked like
n9 had been removed but double checked again and found that n10 had
indeed been removed.
- did a remove-brick commit. Now same file distribution after that.
volume info now shows the volume to have n9 and n11 as bricks.
- did a rebalance start again on the volume. The rebalance-status now
shows n11 had 45 rebalanced files, all the brick nodes had 45 files
scanned and all show complete. The file layout after this is n9 has 45
files and n10 has 55 files. n11 has 0 files!
- An ls on the nfs mount now shows only 45 files so the other 55 not
visible because they are on n10 which is not part of the volume!

What have I done wrong in this sequence?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

I think running rebalance (force) in between "remove-brick start" and 
"remove-brick commit" is the issue. Can you please paste your commands 
as per the timeline of events? That would make it clearer.


Below are the steps, I do to replace a brick and it works for me.

 1. gluster volume add-brick VOLNAME NEW-BRICK
 2. gluster volume remove-brick VOLNAME BRICK start
 3. gluster volume remove-brick VOLNAME BRICK status
 4. gluster volume remove-brick VOLNAME BRICK commit

I would also suggest using distribute-replicate volumes, so that you always 
have a replica copy; that reduces the probability of losing data.


-Lala

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Samba vfs_glusterfs Quota Support?

2013-10-30 Thread David Gibbons
Thanks all for the pointers.

What version of Samba are you running?


Samba version is 3.6.9:
[root@gfs-a-1 /]# smbd -V
Version 3.6.9

Gluster version is 3.4.1 git:
[root@gfs-a-1 /]# glusterfs --version
glusterfs 3.4.1 built on Oct 21 2013 09:22:36


> It should be
> # gluster volume set gfsv0 features.quota-deem-statfs on

[root@gfs-a-1 /]# gluster volume set gfsv0 features.quota-deem-statfs on
volume set: failed: option : features.quota-deem-statfs does not exist
Did you mean features.quota-timeout?

I wonder if the quota-deem-statfs is part of a more recent version?
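For what it's worth, one quick way to see which option keys a given build knows about (the grep is only a filter):

# list all settable options known to this glusterd and filter for quota-related keys
gluster volume set help | grep -i quota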

Cheers,
Dave
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Samba vfs_glusterfs Quota Support?

2013-10-30 Thread Lalatendu Mohanty

On 10/30/2013 06:36 PM, David Gibbons wrote:

Hi Lala,

Thank you. I should have been more clear and you are correct, I can't 
write data above the quota. I was referring only to the listing of 
"disk size" in windows/samba land.


Thanks for the tip in quota-deem-statfs. Here are my results with that 
command:

# gluster volume set gfsv0 quota-deem-statfs on
volume set: failed: option : quota-deem-statfs does not exist
Did you mean dump-fd-stats or quota-timeout?

Which Gluster version does that feature setting apply to?



I mostly use the latest code from git to build gluster. The current 
setup was built on 21st October on Fedora 19. I am copying the 
commands below for your reference. Which version of gluster are you using?


[root@dhcp159-54 ~]# gluster v quota patchy enable
Enabling quota has been successful

[root@dhcp159-54 ~]#  gluster volume set patchy quota-deem-statfs on
volume set: success

[root@dhcp159-54 ~]# glusterfs --version
glusterfs 3git built on Oct 21 2013 15:57:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

Thanks,
Lala

Cheers,
Dave


On Wed, Oct 30, 2013 at 3:09 AM, Lalatendu Mohanty <lmoha...@redhat.com> wrote:


On 10/23/2013 05:26 PM, David Gibbons wrote:

Hi All,

I'm setting up a gluster cluster that will be accessed via smb. I
was hoping that the quotas would be reflected to the smb clients. I've configured a quota on the path 
itself:

# gluster volume quota gfsv0 list
path  limit_set  size

--
/shares/testsharedave   10GB  8.0KB

And I've configured the share in samba (and can access it fine):
# cat /etc/samba/smb.conf
[testsharedave]
vfs objects = glusterfs
glusterfs:volfile_server = localhost
glusterfs:volume = gfsv0
path = /shares/testsharedave
valid users = dave
guest ok = no
writeable = yes

But windows does not reflect the quota and instead shows the full
size of the gluster volume.

I've reviewed the code in

https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/blobs/master/src/vfs_glusterfs.c
 --
which does not appear to support passing gluster quotas to samba.
So I don't think my installation is broken, it seems like maybe
this just isn't supported.

Can anyone speak to whether or not quotas are going to be
implemented in vfs_glusterfs for samba? Or if I'm just crazy and
doing this wrong ;)? I'm definitely willing to help with the code
but don't have much experience with either samba modules or the
gluster API.


Hi David,
Quotas are supported by vfs_glusterfs for samba. I have also set
the quota on the volume correctly. If you try to write more data than
the quota on the directory (/shares/testsharedave), it will not
be allowed.

But for the clients (i.e. Windows/smb, nfs, fuse) to reflect it in
the metadata information (i.e. properties in Windows), you have to
run the below volume set command on the respective volume:

gluster volume set <volname> quota-deem-statfs on

-Lala


Cheers,
Dave



___
Gluster-users mailing list
Gluster-users@gluster.org  
http://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Samba vfs_glusterfs Quota Support?

2013-10-30 Thread Harshavardhana
David,

It should be

# gluster volume set gfsv0 features.quota-deem-statfs on


On Wed, Oct 30, 2013 at 6:06 AM, David Gibbons wrote:

> Hi Lala,
>
> Thank you. I should have been more clear and you are correct, I can't
> write data above the quota. I was referring only to the listing of "disk
> size" in windows/samba land.
>
> Thanks for the tip in quota-deem-statfs. Here are my results with that
> command:
> # gluster volume set gfsv0 quota-deem-statfs on
> volume set: failed: option : quota-deem-statfs does not exist
> Did you mean dump-fd-stats or quota-timeout?
>
> Which Gluster version does that feature setting apply to?
>
> Cheers,
> Dave
>
>
> On Wed, Oct 30, 2013 at 3:09 AM, Lalatendu Mohanty wrote:
>
>>  On 10/23/2013 05:26 PM, David Gibbons wrote:
>>
>> Hi All,
>>
>>  I'm setting up a gluster cluster that will be accessed via smb. I was
>> hoping that the quotas would be reflected to the smb clients. I've configured a quota on the path itself:
>>
>>  # gluster volume quota gfsv0 list
>> path  limit_set  size
>>
>> --
>> /shares/testsharedave   10GB    8.0KB
>>
>>  And I've configured the share in samba (and can access it fine):
>> # cat /etc/samba/smb.conf
>>  [testsharedave]
>> vfs objects = glusterfs
>> glusterfs:volfile_server = localhost
>> glusterfs:volume = gfsv0
>> path = /shares/testsharedave
>> valid users = dave
>> guest ok = no
>> writeable = yes
>>
>>  But windows does not reflect the quota and instead shows the full size
>> of the gluster volume.
>>
>>  I've reviewed the code in
>> https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/blobs/master/src/vfs_glusterfs.c
>>  --
>> which does not appear to support passing gluster quotas to samba. So I
>> don't think my installation is broken, it seems like maybe this just isn't
>> supported.
>>
>>  Can anyone speak to whether or not quotas are going to be implemented
>> in vfs_glusterfs for samba? Or if I'm just crazy and doing this wrong ;)?
>> I'm definitely willing to help with the code but don't have much experience
>> with either samba modules or the gluster API.
>>
>>   Hi David,
>> Quotas are supported by vfs_glusterfs for samba. I have also set the quota on
>> the volume correctly. If you try to write more data than the quota on the
>> directory (/shares/testsharedave), it will not be allowed.
>>
>> But for the clients (i.e. Windows/smb, nfs, fuse) to reflect it in the
>> metadata information (i.e. properties in Windows), you have to run the below
>> volume set command on the respective volume:
>>
>> gluster volume set <volname> quota-deem-statfs on
>>
>> -Lala
>>
>>  Cheers,
>> Dave
>>
>>
>>
>> ___
>> Gluster-users mailing 
>> listGluster-users@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



-- 
*Religious confuse piety with mere ritual, the virtuous confuse regulation
with outcomes*
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Samba vfs_glusterfs Quota Support?

2013-10-30 Thread Ira Cooper
Hello Dave,

What version of Samba are you running?

There is a bug in some versions of Samba that will result in exactly what you 
are seeing.

-Ira / i...@samba.org

- Original Message -
From: "David Gibbons" 
To: "Lalatendu Mohanty" 
Cc: gluster-users@gluster.org
Sent: Wednesday, October 30, 2013 9:06:38 AM
Subject: Re: [Gluster-users] Samba vfs_glusterfs Quota Support?

Hi Lala, 

Thank you. I should have been more clear and you are correct, I can't write 
data above the quota. I was referring only to the listing of "disk size" in 
windows/samba land. 

Thanks for the tip in quota-deem-statfs. Here are my results with that command: 
# gluster volume set gfsv0 quota-deem-statfs on 
volume set: failed: option : quota-deem-statfs does not exist 
Did you mean dump-fd-stats or quota-timeout? 

Which Gluster version does that feature setting apply to? 

Cheers, 
Dave 


On Wed, Oct 30, 2013 at 3:09 AM, Lalatendu Mohanty < lmoha...@redhat.com > 
wrote: 



On 10/23/2013 05:26 PM, David Gibbons wrote: 



Hi All, 

I'm setting up a gluster cluster that will be accessed via smb. I was hoping 
that the quotas would be reflected to the smb clients. I've configured a quota on the path itself: 

# gluster volume quota gfsv0 list 
path                     limit_set   size 
--------------------------------------------- 
/shares/testsharedave    10GB        8.0KB 

And I've configured the share in samba (and can access it fine): 
# cat /etc/samba/smb.conf 
[testsharedave] 
vfs objects = glusterfs 
glusterfs:volfile_server = localhost 
glusterfs:volume = gfsv0 
path = /shares/testsharedave 
valid users = dave 
guest ok = no 
writeable = yes 

But windows does not reflect the quota and instead shows the full size of the 
gluster volume. 

I've reviewed the code in 
https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/blobs/master/src/vfs_glusterfs.c
 -- which does not appear to support passing gluster quotas to samba. So I 
don't think my installation is broken, it seems like maybe this just isn't 
supported. 

Can anyone speak to whether or not quotas are going to be implemented in 
vfs_glusterfs for samba? Or if I'm just crazy and doing this wrong ;)? I'm 
definitely willing to help with the code but don't have much experience with 
either samba modules or the gluster API. 

Hi David, 
Quotas are supported by vfs_glusterfs for samba. I have also set the quota on the 
volume correctly. If you try to write more data than the quota on the 
directory (/shares/testsharedave), it will not be allowed. 

But for the clients (i.e. Windows/smb, nfs, fuse) to reflect it in the metadata 
information (i.e. properties in Windows), you have to run the below volume set 
command on the respective volume: 

gluster volume set <volname> quota-deem-statfs on 

-Lala 




Cheers, 
Dave 



___
Gluster-users mailing list Gluster-users@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-users 


___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-users 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Samba vfs_glusterfs Quota Support?

2013-10-30 Thread David Gibbons
Hi Lala,

Thank you. I should have been more clear and you are correct, I can't write
data above the quota. I was referring only to the listing of "disk size" in
windows/samba land.

Thanks for the tip in quota-deem-statfs. Here are my results with that
command:
# gluster volume set gfsv0 quota-deem-statfs on
volume set: failed: option : quota-deem-statfs does not exist
Did you mean dump-fd-stats or quota-timeout?

Which Gluster version does that feature setting apply to?

Cheers,
Dave


On Wed, Oct 30, 2013 at 3:09 AM, Lalatendu Mohanty wrote:

>  On 10/23/2013 05:26 PM, David Gibbons wrote:
>
> Hi All,
>
>  I'm setting up a gluster cluster that will be accessed via smb. I was
> hoping that the quotas would be reflected to the smb clients. I've configured a quota on the path itself:
>
>  # gluster volume quota gfsv0 list
> path  limit_set  size
>
> --
> /shares/testsharedave   10GB    8.0KB
>
>  And I've configured the share in samba (and can access it fine):
> # cat /etc/samba/smb.conf
>  [testsharedave]
> vfs objects = glusterfs
> glusterfs:volfile_server = localhost
> glusterfs:volume = gfsv0
> path = /shares/testsharedave
> valid users = dave
> guest ok = no
> writeable = yes
>
>  But windows does not reflect the quota and instead shows the full size
> of the gluster volume.
>
>  I've reviewed the code in
> https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/blobs/master/src/vfs_glusterfs.c
>  --
> which does not appear to support passing gluster quotas to samba. So I
> don't think my installation is broken, it seems like maybe this just isn't
> supported.
>
>  Can anyone speak to whether or not quotas are going to be implemented in
> vfs_glusterfs for samba? Or if I'm just crazy and doing this wrong ;)? I'm
> definitely willing to help with the code but don't have much experience
> with either samba modules or the gluster API.
>
>   Hi David,
> Quotas are supported by vfs_glusterfs for samba. I have also set the quota on
> the volume correctly. If you try to write more data than the quota on the
> directory (/shares/testsharedave), it will not be allowed.
>
> But for the clients (i.e. Windows/smb, nfs, fuse) to reflect it in the
> metadata information (i.e. properties in Windows), you have to run the below
> volume set command on the respective volume:
>
> gluster volume set <volname> quota-deem-statfs on
>
> -Lala
>
>  Cheers,
> Dave
>
>
>
> ___
> Gluster-users mailing 
> listGluster-users@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.4 on RHCS with OCF resource agents

2013-10-30 Thread Emir Imamagic

Hello,

By looking at the source code I managed to find the answer to my second 
question - yes :)


In order to run multiple glusterd on the same host one needs to provide 
option transport.socket.bind-address in the volfile, e.g. 
/etc/glusterfs/glusterd-1.vol:

---
volume management
type mgmt/glusterd
# don't forget to change the working-directory
option working-directory /var/lib/glusterd-1
option transport-type socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
option transport.socket.read-fail-log off
option transport.socket.bind-address XX.YY.ZZ.AA
end-volume
---
In addition to a different address and working directory, one has to define a 
different PID file when starting the service:

 glusterd -f /etc/glusterfs/glusterd-1.vol -p /var/run/glusterd-1.pid

Finally, when using the CLI one has to specify which glusterd to connect to:
 gluster --remote-host=XX.YY.ZZ.AA
After that it's business as usual, using virtual IPs.

With this knowledge I think I'll be able to make the OCF resource agents 
work. Even if not, I can always use the service resource agent with two 
modified init.d scripts.
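For that route, a minimal sketch of what the second instance's wrapper could look like (untested; the paths match the example above, everything else is an assumption):

#!/bin/sh
# minimal start/stop wrapper for a second glusterd instance
VOLFILE=/etc/glusterfs/glusterd-1.vol
PIDFILE=/var/run/glusterd-1.pid
case "$1" in
  start)  glusterd -f "$VOLFILE" -p "$PIDFILE" ;;
  stop)   [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" ;;
  status) [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null && echo running || echo stopped ;;
  *)      echo "Usage: $0 {start|stop|status}"; exit 1 ;;
esac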


Cheers,
emir

On 30.10.2013. 12:58, Emir Imamagic wrote:

Hello,

Gluster 3.4 provides OCF resource agents which should enable nice
integration with RHCS. We would like to deploy gluster on two RHCS nodes
with storage provided via iSCSI. Volume will be distributed and the idea
is to have RHCS migrate iSCSI LUN & gluster daemon to a single node in
case of failure of one node. Due to a limited storage we cannot go for
replicated volume.

We have a similar setup with earlier version of Gluster (3.0) that
enabled specification of IP address that each gluster daemon should use.
This enabled usage of virtual IPs and integration with RHCS with service
resource agent.

Questions are:
1. Is there any documentation on how to use OCF resource agents? I
checked the source code (glusterd and volume) and the metadata, but
couldn't figure out how to get two glusterd instances on the same node.

2. Is it possible to have multiple glusterd instances on the same machine
using virtual IP?

Thanks in advance



--
Emir Imamagic
SRCE - University of Zagreb University Computing Centre, www.srce.unizg.hr
emir.imama...@srce.hr, tel: +385 1 616 5809, fax: +385 1 616 5559
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] New on the Gluster Blog: The Cloud Evangelist Podcasts

2013-10-30 Thread John Mark Walker
I wanted to let everyone know that if you're a fan of The Cloud Evangelist,
I've added the Gluster-themed podcasts from http://cloudevangelist.org/ to
the Gluster Blog: http://www.gluster.org/author/richard-morrell/

Or you can see them directly from the source:
http://cloudevangelist.org/category/gluster/
___
Announce mailing list
annou...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/announce
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster 3.4 on RHCS with OCF resource agents

2013-10-30 Thread Emir Imamagic

Hello,

Gluster 3.4 provides OCF resource agents which should enable nice 
integration with RHCS. We would like to deploy gluster on two RHCS nodes 
with storage provided via iSCSI. Volume will be distributed and the idea 
is to have RHCS migrate iSCSI LUN & gluster daemon to a single node in 
case of failure of one node. Due to a limited storage we cannot go for 
replicated volume.


We have a similar setup with earlier version of Gluster (3.0) that 
enabled specification of IP address that each gluster daemon should use. 
This enabled usage of virtual IPs and integration with RHCS with service 
resource agent.


Questions are:
1. Is there any documentation on how to use OCF resource agents? I 
checked the source code (glusterd and volume) and the metadata, but 
couldn't figure out how to get two glusterd instances on the same node.


2. Is it possible to have multiple glusterd instances on the same machine 
using virtual IP?


Thanks in advance
--
Emir Imamagic
SRCE - University of Zagreb University Computing Centre, www.srce.unizg.hr
emir.imama...@srce.hr, tel: +385 1 616 5809, fax: +385 1 616 5559
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] A bunch of comments about brick management

2013-10-30 Thread James
Hey there,

I've been madly hacking on cool new puppet-gluster features... In my
lack of sleep, I've put together some comments about gluster add/remove
brick features. Hopefully they are useful, and make sense. These are
sort of "bugs". Have a look, and let me know if I should formally report
any of these...

Cheers...
James

PS: this is also mirrored here:
http://paste.fedoraproject.org/50402/12956713
because email has destroyed formatting :P


All tests are done on gluster 3.4.1, using CentOS 6.4 on vm's.
Firewall has been disabled for testing purposes.
gluster --version
glusterfs 3.4.1 built on Sep 27 2013 13:13:58


### 1) simple operations shouldn't fail
# running the following commands in succession without files:
# gluster volume add-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9
# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 start ... status

shows a failure:

[root@vmx1 ~]# gluster volume add-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9
volume add-brick: success
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 status
Node               Rebalanced-files   size     scanned   failures   skipped   status        run-time in secs
----               ----------------   ----     -------   --------   -------   ------        ----------------
localhost          0                  0Bytes   0         0                    not started   0.00
vmx2.example.com   0                  0Bytes   0         0                    not started   0.00
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 start
volume remove-brick start: success
ID: ecbcc2b6-4351-468a-8f53-3a09159e4059
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 status
Node               Rebalanced-files   size     scanned   failures   skipped   status        run-time in secs
----               ----------------   ----     -------   --------   -------   ------        ----------------
localhost          0                  0Bytes   8         0                    completed     0.00
vmx2.example.com   0                  0Bytes   0         1                    failed        0.00
[root@vmx1 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 commit
Removing brick(s) can result in data loss. Do you want to Continue?
(y/n) y
volume remove-brick commit: success
[root@vmx1 ~]# 

### 1b) on the other node, the output shows an extra row (also including
the failure)

[root@vmx2 ~]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo9 vmx2.example.com:/tmp/foo9 status
Node               Rebalanced-files   size     scanned   failures   skipped   status      run-time in secs
----               ----------------   ----     -------   --------   -------   ------      ----------------
localhost          0                  0Bytes   0         0                    completed   0.00
localhost          0                  0Bytes   0         0                    completed   0.00
vmx1.example.com   0                  0Bytes   0         1                    failed      0.00


### 2) formatting:

# the "skipped" column doesn't seem to have any data, as a result
formatting is broken...
# this problem is obviously not seen in the more useful --xml output
below. neither is the 'skipped' column.

[root@vmx1 examplevol]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo3 vmx2.example.com:/tmp/foo3 status
Node               Rebalanced-files   size     scanned   failures   skipped   status      run-time in secs
----               ----------------   ----     -------   --------   -------   ------      ----------------
localhost          0                  0Bytes   8         0                    completed   0.00
vmx2.example.com   0                  0Bytes   8         0                    completed   0.00
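The same status is also available in machine-readable form; the (tag-stripped) XML output below presumably came from appending the global --xml flag, something like:

gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo3 vmx2.example.com:/tmp/foo3 status --xml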




[The --xml output followed here, but the archive stripped the XML tags. The surviving values were: 0 and 115 at the top, task id d99cab76-cd7d-4579-80ae-c1e6faff3d1d, a node count of 2, two per-node blocks (localhost and vmx2.example.com) each reading 0 / 0 / 8 / 0 / 3 / completed, and an aggregate block reading 0 / 0 / 16 / 0 / 3 / completed; no "skipped" field is present.]



### 3)
[root@vmx1 examplevol]# gluster volume remove-brick examplevol vmx1.example.com:/tmp/foo3 vmx2.example.com:/tmp/foo3 status
Node Reb

[Gluster-users] Strange behaviour with add-brick followed by remove-brick

2013-10-30 Thread B.K.Raghuram
I have gluster 3.4.1 on 4 boxes with hostnames n9, n10, n11, n12. I
did the following sequence of steps and ended up with losing data so
what did I do wrong?!

- Create a distributed volume with bricks on n9 and n10
- Started the volume
- NFS mounted the volume and created 100 files on it. Found that n9
had 45, n10 had 55
- Added a brick n11 to this volume
- Removed a brick n10 from the volume with gluster remove-brick <volname> <brick> start
- n9 now has 45 files, n10 has 55 files and n11 has 45 files(all the
same as on n9)
- Checked status, it shows that no rebalanced files but that n10 had
scanned 100 files and completed. 0 scanned for all the others
- I then did a rebalance start force on the vol and found that n9 had
0 files, n10 had 55 files and n11 had 45 files - weird - looked like
n9 had been removed but double checked again and found that n10 had
indeed been removed.
- did a remove-brick commit. Now same file distribution after that.
volume info now shows the volume to have n9 and n11 as bricks.
- did a rebalance start again on the volume. The rebalance-status now
shows n11 had 45 rebalanced files, all the brick nodes had 45 files
scanned and all show complete. The file layout after this is n9 has 45
files and n10 has 55 files. n11 has 0 files!
- An ls on the nfs mount now shows only 45 files so the other 55 not
visible because they are on n10 which is not part of the volume!

What have I done wrong in this sequence?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Peer list question

2013-10-30 Thread Lalatendu Mohanty

On 10/30/2013 09:44 AM, Vijay Bellur wrote:

On 10/29/2013 04:12 PM, Robert Hajime Lanning wrote:

On 10/28/13 23:34, B.K.Raghuram wrote:

The problem I am seeing is this. I am using hostnames to add peers
into the pool and the output of gluster peer status reflects this.
However, the output of the --remote-host peer status gives the name of
the current host in the form of an IP address. Why this discrepancy?


Was the current host the one you started the cluster from? (ie. you did
peer probe to the other hosts from the current one?)

If that is the case, this is a known issue.  You just need to run peer
probe from a different peer back to the one in question.  That will
correct the peer listing.

As for having the current host in the peer list, a host does not peer
itself, so the context would be incorrect to list it.

Though, if the current host was not a peer in the cluster, then it would
not list any other peers.



'gluster pool list' in 3.5 contains information about all nodes in the 
cluster (including the host on which the command is issued).


If IPs are used to do peer probe, then the "gluster pool list" command will 
print the IPs of all the nodes except the local host in the output. For the 
node on which the command is being executed it will print "localhost". IMO 
it would be better to print the local host as its IP (rather than "localhost").


e.g:

[root@dhcpX~]# gluster pool list
UUID                                    Hostname           State
9d13600a-37a9-4f73-b72c-505da4c90d86    xxx.xxx.xxx.xxx    Connected
15928c8d-0dc4-4d8e-9f42-f900bf58d80f    localhost          Connected

-Lala

-Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Samba vfs_glusterfs Quota Support?

2013-10-30 Thread Lalatendu Mohanty

On 10/23/2013 05:26 PM, David Gibbons wrote:

Hi All,

I'm setting up a gluster cluster that will be accessed via smb. I was 
hoping that the quotas would be reflected to the smb clients. I've configured a quota on the path itself:


# gluster volume quota gfsv0 list
path  limit_set  size
--
/shares/testsharedave   10GB    8.0KB

And I've configured the share in samba (and can access it fine):
# cat /etc/samba/smb.conf
[testsharedave]
vfs objects = glusterfs
glusterfs:volfile_server = localhost
glusterfs:volume = gfsv0
path = /shares/testsharedave
valid users = dave
guest ok = no
writeable = yes

But windows does not reflect the quota and instead shows the full size 
of the gluster volume.


I've reviewed the code in 
https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs/blobs/master/src/vfs_glusterfs.c -- 
which does not appear to support passing gluster quotas to samba. So I 
don't think my installation is broken, it seems like maybe this just 
isn't supported.


Can anyone speak to whether or not quotas are going to be implemented 
in vfs_glusterfs for samba? Or if I'm just crazy and doing this wrong 
;)? I'm definitely willing to help with the code but don't have much 
experience with either samba modules or the gluster API.



Hi David,
Quotas are supported by vfs_glusterfs for samba. I have also set the quota 
on the volume correctly. If you try to write more data than the quota on 
the directory (/shares/testsharedave), it will not be allowed.


But for the clients (i.e. Windows/smb, nfs, fuse) to reflect it in the 
metadata information (i.e. properties in Windows), you have to run the below 
volume set command on the respective volume:

gluster volume set <volname> quota-deem-statfs on
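A compact end-to-end sketch with the names from this thread (volume gfsv0, path /shares/testsharedave; the client mount point is a placeholder). Note that on some builds the option key needs the features. prefix, as pointed out elsewhere in the thread:

gluster volume quota gfsv0 enable
gluster volume quota gfsv0 limit-usage /shares/testsharedave 10GB
# the short key works on some builds; others only accept features.quota-deem-statfs
gluster volume set gfsv0 quota-deem-statfs on
# a client mount of the volume should now report the 10GB limit as the disk size
df -h /mnt/gfsv0/shares/testsharedave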

-Lala

Cheers,
Dave



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users