Hi, I have set up a two-node replicated GlusterFS. After the initial
installation the "master" node was put into the datacenter, and after two
weeks we moved the second one to the datacenter as well.
But the sync has not started yet.
On the "master":
gluster> volume info all
Volume Name: datastore1
Typ
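A rough way to check whether replication is wired up and to kick off healing
of the existing data (a sketch, assuming GlusterFS 3.3 or later and the volume
name "datastore1" shown above; adjust to your setup):

  # verify both peers see each other and all bricks are online
  gluster peer status
  gluster volume status datastore1

  # see which files still need replication, then trigger a full heal
  gluster volume heal datastore1 info
  gluster volume heal datastore1 full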
Hey Fred,
You could implement this without touching the Geo-Replication code. Gluster now
has the changelog translator (a journaling mechanism) which records changes
made to the filesystem on each brick. Journals can be consumed using the
changelog consumer library (libgfchangelog). Geo-Replication is
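For anyone who wants to experiment with this, a minimal sketch of turning the
journaling on (assuming GlusterFS 3.5 or later, where the changelog xlator is
available, and a hypothetical volume named "myvol"):

  # enable the changelog translator on every brick of the volume
  gluster volume set myvol changelog.changelog on

  # optional: how often (in seconds) the current journal is rolled over
  gluster volume set myvol changelog.rollover-time 15

The rolled-over journals should then appear under each brick's
.glusterfs/changelogs/ directory, where libgfchangelog (or any other
consumer) can pick them up.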
1. Option: cluster.read-subvolume
Default Value: (null)
Description: inode-read fops happen only on one of the bricks in
replicate. Afr will prefer the one specified using this option if it is
not stale. Option value must be one of the xlator names of the children.
Ex: <volname>-client-0 till <volname>-client-<number-of-bricks - 1>
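For what it's worth, setting it looks something like the sketch below
(hypothetical two-brick volume "myvol"; the xlator names follow the brick
order shown by "gluster volume info"):

  # prefer the first brick for inode-read fops, unless it is stale
  gluster volume set myvol cluster.read-subvolume myvol-client-0

  # go back to the default behaviour
  gluster volume reset myvol cluster.read-subvolume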
Hi,
I have created a proposal for the implementation of Geo Replication Hooks.
See here:
http://www.gluster.org/community/documentation/index.php/Features/Geo_Replication_Hooks
Any comments, thoughts, etc would be great.
Fred
From man (2) stat:
blksize_t st_blksize; /* blocksize for file system I/O */
blkcnt_t st_blocks; /* number of 512B blocks allocated */
The 128K you are seeing is "st_blksize" which is the recommended I/O
transfer size. The number of consumed blocks is always reported as 512
byte blocks. The a
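A quick way to see the difference on any file (GNU coreutils; the file name
is just a placeholder):

  # apparent size, preferred I/O size (st_blksize), allocated 512-byte blocks (st_blocks)
  stat -c 'size=%s blksize=%o blocks=%b' somefile

  # space actually consumed on disk vs. the apparent size
  du -k somefile
  du -k --apparent-size somefile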
Hi,
I've come to notice that the filesystem block size reported by stat on a
client is 128k, which is pretty high for the small files I use. On the other
hand, I tested copying smaller files to the volume and it seems that 128k is
not the real block size - when I copy two 64k files to the
Hi guys!
Shwetha,
Just to understand your suggestion:
1. What does "cluster.read-subvolume" mean? I searched for information about
it, but didn't find anything...
2. Why do I need to exec "xargs stat" to force Gluster to "read" the files?
My volume has about 1.2 TB of used space, I can't stop the read 'caus
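On point 2: stat is a read-only lookup, so you should not have to stop
anything; walking the mount just makes the client touch every file, which is
what lets replicate notice and heal the missing copies. Roughly (assuming the
volume is mounted on the client at /mnt/glusterfs; adjust the path):

  # stat every file on the client mount to trigger self-heal
  find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null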
Dear Ravi,
You hit the nail right on the head. After successful MD5sums I reset the
AFR extended attributes of the bricks as you proposed, and the self-heal
daemon now runs without complaints.
Thanks,
Mark
On 25 November 2013 at 12:22:27, Ravishankar N (ravishan...@redhat.com) wrote:
On 11/
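For readers who run into the same thing, resetting the AFR changelog xattrs
usually looks something like this sketch (run against the files on the bricks
themselves; the volume name, client indices and brick path here are
hypothetical, so adapt them to your layout):

  # inspect the pending AFR changelog xattrs on a brick's copy of the file
  getfattr -d -m . -e hex /export/brick1/path/to/file

  # zero the pending counters so the self-heal daemon stops complaining
  setfattr -n trusted.afr.myvol-client-0 -v 0x000000000000000000000000 /export/brick1/path/to/file
  setfattr -n trusted.afr.myvol-client-1 -v 0x000000000000000000000000 /export/brick1/path/to/file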
On 11/25/2013 01:47 AM, Mark Ruys wrote:
So I decided to bite the bullet and upgrade from 3.3 to 3.4. Somehow
this was a painful process for me (the glusterfs daemon refused to
start), so I decided to configure our Gluster pool from scratch.
Everything seems to work nicely, except for the self-
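If the self-heal daemon is the part that misbehaves after such a rebuild, a
couple of quick checks (hypothetical volume name "myvol"):

  # the Self-heal Daemon should be listed as online for every node
  gluster volume status myvol

  # entries the daemon still wants to heal, plus its own log
  gluster volume heal myvol info
  less /var/log/glusterfs/glustershd.log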