2 servers using afr (client-side afr), 1 client
1) 10 file[0-9].tar.bz2 @ 45M on /mnt/raid/gfs on server1
2) scp file* on server1 to /mnt/raid/gfs on server2
3) on server1: find /mnt/raid/gfs -exec setfattr -n trusted.glusterfs.version -v 1 {} \;
4) on server1: find /mnt/raid/gfs -exec setfattr -n trusted.
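The copy-and-stamp steps above can be sketched as a small script. Paths are taken from the list; the DRY_RUN guard is illustrative (default on) and only prints the commands, since setting trusted.* xattrs for real requires root:

```shell
#!/bin/sh
# Dry-run sketch of steps 2-3 above. DRY_RUN=1 (the default) prints each
# command instead of executing it; trusted.* xattrs require root when run.
SRC=/mnt/raid/gfs
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# step 2: copy the tarballs to the second server's backend directory
run scp "$SRC"/file*.tar.bz2 server2:"$SRC"/
# step 3: stamp every entry with an AFR version xattr (-n names the attribute)
run find "$SRC" -exec setfattr -n trusted.glusterfs.version -v 1 {} \;
```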
On Mon, May 5, 2008 at 5:58 PM, Brandon Lamb <[EMAIL PROTECTED]> wrote:
I just did some testing and came to the conclusion that trying to
set up afr using one server with pre-existing data and a blank server,
copying your data, removing the xattrs on the copied data, and then
initiating afr DOES NO GOOD.
server1 - 400 MB of data in 10 tarballs, all xattrs removed
serv
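For reference, the "removed all xattr" step can be done with getfattr/setfattr. A dry-run sketch, assuming the same /mnt/raid/gfs path and the glusterfs 1.3 AFR attribute name used above; the DRY_RUN guard is illustrative and defaults to printing the commands:

```shell
#!/bin/sh
# Sketch of inspecting and stripping glusterfs xattrs from a backend tree.
# DRY_RUN=1 (the default) prints the commands rather than running them.
DIR=/mnt/raid/gfs
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# list any trusted.glusterfs.* attributes across the whole tree
run getfattr -d -m trusted.glusterfs -R "$DIR"
# remove the AFR version tag from every entry
run find "$DIR" -exec setfattr -x trusted.glusterfs.version {} \;
```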
We removed the 'option replicate *:20 lines' and now everything seems
to work fine. We still need to do some benchmarking to test performance,
but so far we are using the ALU scheduler for the unify translator on
the frontend and the NUFA scheduler on the nodes, since they are the
ones serving resou
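For context, a frontend unify section with the ALU scheduler looks roughly like the sketch below (glusterfs 1.3-era vol-file syntax). The subvolume and namespace names and the alu.order value are illustrative assumptions, not the poster's actual config:

```
volume unify0
  type cluster/unify
  option namespace ns                      # hypothetical namespace subvolume
  option scheduler alu                     # the ALU scheduler mentioned above
  option alu.order disk-usage:read-usage   # illustrative scheduling order
  subvolumes node1 node2                   # hypothetical storage subvolumes
end-volume
```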
Hi Martin, I will respond to this email later today after reading
the entire thread.
I really want to understand the issue and help you out. We always
have heated discussions even in our labs. We only take it
positively :) Your feedback is very valuable to us.
Thanks and Regards,
--
Anand Babu P
I'm getting an error in the client.
Any help?
/lib64/libc.so.6[0x3ad08300b0]
/usr/local/lib/glusterfs/1.3.8/xlator/performance/io-cache.so(ioc_page_wakeup+0x9f)[0x2b2f5fbf]
/usr/local/lib/glusterfs/1.3.8/xlator/performance/io-cache.so(ioc_inode_wakeup+0x9d)[0x2b2f685d]
/usr/local/l