On February 25, 2016 8:32:44 PM PST, Kyle Maas wrote:
>On 02/25/2016 08:20 PM, Ravishankar N wrote:
>> On 02/25/2016 11:36 PM, Kyle Maas wrote:
>>> How can I tell what AFR version a cluster is using for self-heal?
>> If all your servers and clients are 3.7.8, then they are by default
>> running
On 02/26/2016 10:02 AM, Kyle Maas wrote:
On 02/25/2016 08:20 PM, Ravishankar N wrote:
On 02/25/2016 11:36 PM, Kyle Maas wrote:
How can I tell what AFR version a cluster is using for self-heal?
If all your servers and clients are 3.7.8, then they are by default
running afr-v2. Afr-v2 was a re-write of afr that went in for 3.6, so any
gluster package from then on has this code.
regards
Aravinda
On 02/26/2016 12:30 AM, ML mail wrote:
Hi Aravinda,
Many thanks for the steps. I have a few questions about it:
- in your point number 3, can I simply do an "rm -rf
/my/brick/.glusterfs/changelogs"?
Yes
- in your point number 4, do I need to remove the attributes from every master node?
On 02/25/2016 08:25 PM, Krutika Dhananjay wrote:
> From: "Kyle Maas"
> To: gluster-users@gluster.org
> Sent: Thursday, February 25, 2016 11:36:53 PM
> Subject: [Gluster-users] AFR Version used
On 02/25/2016 08:20 PM, Ravishankar N wrote:
> On 02/25/2016 11:36 PM, Kyle Maas wrote:
>> How can I tell what AFR version a cluster is using for self-heal?
> If all your servers and clients are 3.7.8, then they are by default
> running afr-v2. Afr-v2 was a re-write of afr that went in for 3.6,
>
----- Original Message -----
> From: "Kyle Maas"
> To: gluster-users@gluster.org
> Sent: Thursday, February 25, 2016 11:36:53 PM
> Subject: [Gluster-users] AFR Version used for self-heal
> How can I tell what AFR version a cluster is using for self-heal?
> The reason I ask is that I have a two-
On 02/25/2016 11:36 PM, Kyle Maas wrote:
How can I tell what AFR version a cluster is using for self-heal?
If all your servers and clients are 3.7.8, then they are by default
running afr-v2. Afr-v2 was a re-write of afr that went in for 3.6, so
any gluster package from then on has this code,
https://joejulian.name/blog/glusterfs-replication-dos-and-donts/
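One way to verify the premise above (every server and client on the same
3.7.8 build) is to check the installed release and the operating version
glusterd has recorded on each node; a minimal sketch, assuming a stock
install layout:

# print the installed GlusterFS release on this node
gluster --version | head -n1

# glusterd records the cluster operating version it is currently running at
grep operating-version /var/lib/glusterd/glusterd.info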
On 02/24/2016 01:34 PM, Simone Taliercio wrote:
Hi all :)
I would need soon to create a pool of 17 nodes. Each node requires a
copy of the same file "locally" so that can be accessed from the
deployed application.
* Do you see
On 02/26/2016 01:53 AM, Mohammed Rafi K C wrote:
>
>
> On 02/26/2016 01:32 AM, Steve Dainard wrote:
>> I haven't done anything more than peer thus far, so I'm a bit
>> confused as to how the volume info fits in, can you expand on this a bit?
>>
> Failed commits? Is this split brain on the replica volumes?
On 02/24/2016 04:34 PM, Simone Taliercio wrote:
> Hi all :)
>
> I would need soon to create a pool of 17 nodes. Each node requires a
> copy of the same file "locally" so that can be accessed from the
> deployed application.
>
> * Do you see any performance issue in creating a Replica Set on 17 nodes?
On 02/26/2016 01:32 AM, Steve Dainard wrote:
> I haven't done anything more than peer thus far, so I'm a bit confused
> as to how the volume info fits in, can you expand on this a bit?
>
> Failed commits? Is this split brain on the replica volumes? I don't
> get any return from 'gluster volume heal info'
Hi all :)
I would need soon to create a pool of 17 nodes. Each node requires a copy
of the same file "locally" so that can be accessed from the deployed
application.
* Do you see any performance issue in creating a Replica Set on 17 nodes?
Any best practices I should follow?
* Another que
For clarity, this is the "no return" output from 'gluster volume heal info':
# gluster volume heal vm-storage info
Brick 10.0.231.50:/mnt/lv-vm-storage/vm-storage
Number of entries: 0
Brick 10.0.231.51:/mnt/lv-vm-storage/vm-storage
Number of entries: 0
Brick 10.0.231.52:/mnt/lv-vm-storage/vm-storage
Number of entries: 0
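If the question is specifically about split brain rather than pending heals,
the heal command can report that directly too; a sketch against the same
vm-storage volume:

# list only entries that are in split brain
gluster volume heal vm-storage info split-brain

# per-brick counts of entries still waiting to be healed
gluster volume heal vm-storage statistics heal-count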
On 02/25/2016 11:45 PM, Steve Dainard wrote:
> Hello,
>
> I upgraded from 3.6.6 to 3.7.6 a couple weeks ago. I just peered 2 new
> nodes to a 4 node cluster and gluster peer status is:
>
> # gluster peer status    <-- from node gluster01
> Number of Peers: 5
>
> Hostname: 10.0.231.51
> Uuid: b01de
Hi Aravinda,
Many thanks for the steps. I have a few questions about it:
- in your point number 3, can I simply do an "rm -rf
/my/brick/.glusterfs/changelogs"?
- in your point number 4, do I need to remove the attributes from every master node?
- I am currently using GlusterFS 3.7.6, will thi
How can I tell what AFR version a cluster is using for self-heal?
The reason I ask is that I have a two-node replicated 3.7.8 cluster (no
arbiters) whose locking behavior during self-heal looks very
similar to that of AFRv1 (only heals one file at a time per self-heal
daemon, appears to
Hi,
/var/lib/glusterd/snaps/ contains only 1 file called missed_snaps_list. Apart
from that, there are only directories created with the snap names. Is the
.nfs01722f42 entry that you saw in /var/lib/glusterd a file or a directory?
If it's a file, then it was not placed there as part of sn
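A quick way to answer that file-or-directory question on the affected board
(a sketch; the name below is simply the entry quoted in this thread):

# show the type and size of the stray entry, and the snaps directory as a whole
stat /var/lib/glusterd/snaps/.nfs01722f42
ls -la /var/lib/glusterd/snaps/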
If I run "reboot" on the A node, there are no .snap files on the A node after
reboot.
Does the snap file only appear after an unexpected reboot?
Why is its size 0 bytes?
In this situation, is removing the snap file the right way to solve the problem?
thanks
xin
Sent from my iPhone
> On 25 Feb 2016, at 19:05, Atin Mu
+ Rajesh , Avra
On 02/25/2016 04:12 PM, songxin wrote:
> Thanks for your reply.
>
> Do I need to check all files in /var/lib/glusterd/*?
> Must all files be the same on node A and node B?
Yes, they should be identical.
>
> I found that the size of
> file /var/lib/glusterd/snaps/.nfs01722f4
Steps to force Geo-rep to sync from beginning
1. Stop Geo-replication
2. Disable the Changelog using `gluster volume set <MASTERVOL>
changelog.changelog off`
3. Delete all changelogs and htime files from Brick backend of Master
Volume, $BRICK/.glusterfs/changelogs
4. Remove stime xattrs from all Brick roots
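Put together as commands, the steps above might look roughly like this. This
is only a sketch: the volume name "mastervol", the slave "slavehost::slavevol",
and the brick path are placeholders, and the exact stime xattr name (it embeds
the master and slave volume UUIDs) should be read from the brick with getfattr
first.

# 1. stop geo-replication for the session
gluster volume geo-replication mastervol slavehost::slavevol stop

# 2. disable changelog on the master volume
gluster volume set mastervol changelog.changelog off

# 3. on every master brick, delete changelogs and htime files
rm -rf /my/brick/.glusterfs/changelogs

# 4. list the brick root xattrs, then remove the stime one that is shown
getfattr -d -m . -e hex /my/brick
setfattr -x trusted.glusterfs.<MASTER_UUID>.<SLAVE_UUID>.stime /my/brick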
Thanks for your reply.
Do I need to check all files in /var/lib/glusterd/*?
Must all files be the same on node A and node B?
I found that the size of the file
/var/lib/glusterd/snaps/.nfs01722f42 is 0 bytes after board A
reboots.
So glusterd can't restore from this snap file on node A.
Is
I believe you and Abhishek are from the same group and sharing the
common setup. Could you check that the content of /var/lib/glusterd/* on
board B (post reboot and before starting glusterd) matches
/var/lib/glusterd/* on board A?
~Atin
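One way to run that comparison (a sketch; it assumes board A, 128.224.162.255,
is reachable from board B over ssh as root, which may not match your setup):

# dry-run rsync with checksums: lists every file under /var/lib/glusterd that
# differs between board B (local) and board A (remote), without copying anything
rsync -rvnc root@128.224.162.255:/var/lib/glusterd/ /var/lib/glusterd/

# alternatively, checksum the tree on both boards and diff the two lists by hand
find /var/lib/glusterd -type f -exec md5sum {} + | sort -k 2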
On 02/25/2016 03:48 PM, songxin wrote:
> Hi,
> I have a p
Hi,
I have a problem, described below, when I start gluster after rebooting a board.
precondition:
I use two boards to do this test.
The version of glusterfs is 3.7.6.
A board IP: 128.224.162.255
B board IP: 128.224.95.140
reproduce steps:
1.systemctl start glusterd (A board)
2.systemctl start glusterd
On 02/23/2016 04:34 PM, Lindsay Mathieson wrote:
On 23/02/2016 8:29 PM, Sahina Bose wrote:
Late jumping into this thread, but curious -
Is there a specific reason that you are removing and adding a brick?
Will replace-brick not work for you?
Testing procedures for replacing a failed brick
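For reference, replacing a brick in place is a single command on current
releases; a sketch with placeholder volume and brick paths (not taken from
this thread):

# swap the failed brick for a fresh, empty one and let self-heal repopulate it
gluster volume replace-brick myvol node2:/bricks/old node2:/bricks/new commit force

# then watch the heal progress
gluster volume heal myvol info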
2016-02-25 9:20 GMT+01:00 Ravishankar N :
> Right. You can mount the replica 3 volume that you just created on any node.
> Like I said it's just like accessing a remote share. Except that the 'share'
> is a glusterfs volume that you just created.
> If I understand your use case correctly, you woul
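As a concrete sketch of that suggestion (host names, brick paths, and the
volume name below are placeholders): three of the nodes carry the replica 3
bricks, and any of the 17 nodes can then mount the volume like a remote share.

# create and start a 3-way replicated volume on three of the nodes
gluster volume create appvol replica 3 node1:/bricks/app node2:/bricks/app node3:/bricks/app
gluster volume start appvol

# on every node that needs the file, mount it with the native client
mount -t glusterfs node1:/appvol /mnt/app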
Hello,
On 02/25/2016 11:42 AM, Simone Taliercio wrote:
Hi Ravi,
Thanks a lot for your prompt reply!
2016-02-25 6:07 GMT+01:00 Ravishankar N :
I don't know what your use case is but I don't think you want to create so many
replicas.
I need to scale my application on multiple nodes because we