Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Sascha Ottolski
On Tuesday, 08 January 2008 06:06:30, Anand Avati wrote:
> Brandon, who does the copy is decided by where the AFR translator is loaded. If you have AFR loaded on the client side, then the client does the two writes. You can also have AFR loaded on the server side …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Brandon Lamb
On Jan 7, 2008 9:53 PM, Anand Avati <[EMAIL PROTECTED]> wrote:
> > I got some very odd failover problems when using the RR DNS failover config (when testing failures mid-write, etc.), but that was a few versions back. I highly recommend using AFR on the client side as a failover solution …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Anand Avati
> I got some very odd failover problems when using the RR DNS failover config (when testing failures mid-write, etc.), but that was a few versions back. I highly recommend using AFR on the client side as a failover solution. It's very robust, and easy to deal with on the server end (since you …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Kevan Benson
matthew zeier wrote:
> Anand Avati wrote:
> > Here is a very nice tutorial from Paul England which gives a use case for server-side replication and high availability -
> > http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
> > The configuration in the article can be further simplified …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Anand Avati
> > In that config the client is only mounting from one server (roundrobin.gluster.local). Seems that a failure would be user-impacting, especially if roundrobin.gluster.local gave me the failed server's address again.

Yes, but the 'effect' is not as bad as it sounds. glusterfs caches all …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread matthew zeier
Anand Avati wrote:
> Here is a very nice tutorial from Paul England which gives a use case for server-side replication and high availability -
> http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
> The configuration in the article can be further simplified …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Brandon Lamb
On Jan 7, 2008 9:11 PM, Anand Avati <[EMAIL PROTECTED]> wrote:
> Here is a very nice tutorial from Paul England which gives a use case for server-side replication and high availability -
> http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
> The configuration …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Anand Avati
Here is a very nice tutorial from Paul England which gives a use case for server-side replication and high availability -
http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
The configuration in the article can be further simplified by removing the unify layer …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Anand Avati
> Brandon, who does the copy is decided by where the AFR translator is loaded. If you have AFR loaded on the client side, then the client does the two writes. You can also have AFR loaded on the server side, and have the server do the replication. Translators can be loaded anywhere …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread matthew zeier
I imagine the server config would look like:

  volume local-brick
    type storage/posix
    option directory /mnt/local-brick
  end-volume

  volume remote-brick
    type protocol/client
    option transport-type tcp/client
    option remote-host 10.2.10.100
    option remote-subvolume other-local-brick
  end-volume

  vol…
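The truncated tail presumably goes on to declare the replicated volume itself. A minimal sketch of how a server-side cluster/afr declaration over those two bricks typically looks (the volume names and the auth line are illustrative assumptions, not from the original mail):

  volume afr-brick
    type cluster/afr
    subvolumes local-brick remote-brick
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.afr-brick.allow *
    subvolumes afr-brick
  end-volume

With AFR declared here, a client mounting afr-brick sends each write once, and this server forwards the second copy to 10.2.10.100.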

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Brandon Lamb
On Jan 7, 2008 8:48 PM, matthew zeier <[EMAIL PROTECTED]> wrote:
> I'm working on trying to figure out how to do this right now, stumbling my way through a server config. Using NFS my 2 servers sit at under 0.40 load most of the time; I would rather add extra load to them than to my client machines …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread matthew zeier
I'm working on trying to figure out how to do this right now, stumbling my way through a server config. Using NFS my 2 servers sit at under 0.40 load most of the time; I would rather add extra load to them than to my client machines. I imagine the server config would look like:

  volume local-brick …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Brandon Lamb
On Jan 7, 2008 8:41 PM, matthew zeier <[EMAIL PROTECTED]> wrote:
> Anand Avati wrote:
> > Brandon, who does the copy is decided by where the AFR translator is loaded. If you have AFR loaded on the client side, then the client does the two writes. You can also have AFR loaded on the server …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread matthew zeier
Anand Avati wrote:
> Brandon, who does the copy is decided by where the AFR translator is loaded. If you have AFR loaded on the client side, then the client does the two writes. You can also have AFR loaded on the server side, and have the server do the replication. Translators can be loaded anywhere …

Re: [Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Anand Avati
Brandon, who does the copy is decided by where the AFR translator is loaded. If you have AFR loaded on the client side, then the client does the two writes. You can also have AFR loaded on the server side, and have the server do the replication. Translators can be loaded anywhere (client or server, any…
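A minimal sketch of the client-side variant in volfile form (the hosts reuse the 192.168.0.10/.11 servers and the "brick" volume from Brandon's question; the volume names here are illustrative assumptions):

  volume server1
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.0.10
    option remote-subvolume brick
  end-volume

  volume server2
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.0.11
    option remote-subvolume brick
  end-volume

  volume afr0
    type cluster/afr
    subvolumes server1 server2
  end-volume

Loaded in the client spec like this, the client itself writes to both servers; moving the same cluster/afr block into a server spec shifts that second write onto the server instead.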

[Gluster-devel] Confused on AFR, where does it happen client or server

2008-01-07 Thread Brandon Lamb
I have been reading through old mails on this list and the wiki, and I am confused about which machine does the writes in an AFR setup. Say I have two "servers" (192.168.0.10, 192.168.0.11) with configs -

  volume locks
    type features/posix-locks
    subvolumes brick
  end-volume

  volume …

[Gluster-devel] afr's return "struct stat" scheme

2008-01-07 Thread LI Daobing
Hello, just a little review of AFR's "struct stat" return scheme. There are 18 functions[1] in xlator_fops which return a struct stat * in their cbk. In 1.3.7, AFR implements 16 of them (except fchmod and fchown, which return ENOSYS); in TLA, all of them are implemented. Most of them adopt a scheme to …

Re: [Gluster-devel] 2 out of 4 bonnies failed :-((

2008-01-07 Thread Sam Douglas
My experience with Bonnie running on GlusterFS (or NFS) shows that you don't actually need to use the ridiculously large file sizes (to defeat kernel caching). If it is taking ages, try telling Bonnie to use a smaller file size -- it generally doesn't affect the benchmark results (much?). -- Sam.
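For example, with the bonnie++ variant of the tool the file size is an explicit flag (the path and size below are illustrative, not the values used in these tests):

  bonnie++ -d /mnt/gluster-test -s 512

rather than the usual "twice physical RAM" sizing used to defeat the page cache.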

Re: [Gluster-devel] latest patches?

2008-01-07 Thread Sascha Ottolski
On Monday, 07 January 2008 18:05:29, you wrote:
> Sascha, please roll back to patch-629. The self-heal changes are half done. In the meantime, can you open the core with gdb:
>
>   gdb -c /core.PID glusterfs
>
> and at the gdb prompt:
>
>   (gdb) bt
>
> and give us this output. It will be of great help …

Re: [Gluster-devel] 2 out of 4 bonnies failed :-((

2008-01-07 Thread Sascha Ottolski
On Monday, 07 January 2008 18:01:44, you wrote:
> > Sascha, the logs say op_errno=28, which is ENOSPC (no space left on device). Were you aware of that already?
>
> Hmm, didn't see this, but this seems more than unlikely, with almost 80 GB of free space. However …

Re: [Gluster-devel] latest patches?

2008-01-07 Thread Anand Avati
Sascha, please roll back to patch-629. The self-heal changes are half done. In the meantime, can you open the core with gdb:

  gdb -c /core.PID glusterfs

and at the gdb prompt:

  (gdb) bt

and give us this output. It will be of great help. avati

2008/1/7, Sascha Ottolski <[EMAIL PROTECTED]>: …

Re: [Gluster-devel] 2 out of 4 bonnies failed :-((

2008-01-07 Thread Anand Avati
> > Sascha, the logs say op_errno=28, which is ENOSPC (no space left on device). Were you aware of that already?
>
> Hmm, didn't see this, but this seems more than unlikely, with almost 80 GB of free space. However, don't be sure how many each bonnie would claim...

The op_errno …

Re: [Gluster-devel] latest patches?

2008-01-07 Thread Sascha Ottolski
On Monday, 07 January 2008 17:39:15, you wrote:
> Sascha, these patches are still not final (self-heal enhancements are on the way) but, as-is, they should be stable.

Thanks, but I think there is a problem :-( Just gave 'em a whirl, but the client would segfault on first access of the mount point …

Re: [Gluster-devel] latest patches?

2008-01-07 Thread Anand Avati
Sascha, these patches are still not final (self-heal enhancements are on the way) but, as-is, they should be stable. avati

2008/1/7, Sascha Ottolski <[EMAIL PROTECTED]>:
> Hi, does anyone know if patches 629 through 633 can be considered stable? They sound as if they could increase performance …

Re: [Gluster-devel] 2 out of 4 bonnies failed :-((

2008-01-07 Thread Anand Avati
Sascha, the logs say op_errno=28, which is ENOSPC (no space left on device). Were you aware of that already? avati

2008/1/7, Sascha Ottolski <[EMAIL PROTECTED]>:
> Hi, I found a somewhat frustrating test result after the weekend. I started a bonnie on four different clients (so a total of four bonnies in parallel) …
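A quick way to translate a raw op_errno from the logs into its message (a generic one-liner, not from this thread):

  python -c "import os; print(os.strerror(28))"

which prints "No space left on device".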

[Gluster-devel] can mount several times

2008-01-07 Thread Sascha Ottolski
Hi, I don't know if it's a bug or a feature: I have fstab entries for my gluster mounts. If I do "mount -a" several times, I get several identical-looking mounts. The same thing happens if glusterfs is called manually:

  glusterfs on /mnt/gluster-test type fuse (rw,nosuid,nodev,allow_other…

[Gluster-devel] latest patches?

2008-01-07 Thread Sascha Ottolski
Hi, does anyone know if patches 629 through 633 can be considered stable? They sound as if they could increase performance; is this the case? Thanks a lot, Sascha

[Gluster-devel] the default 5% free space is not enough

2008-01-07 Thread LI Daobing
Hello, consider the following case in [1]: /dev/sdc1 is full, but the calculated free-disk percentage in the rr scheduler comes out as 5 (it's about 5.03%), so rr will still want to create files on that disk. I have found a workaround: just increase the value in the conf file. But please increase the default value …
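The workaround would sit in the unify volume's options. A hedged sketch (the exact option key is recalled from the 1.3-era scheduler docs and should be verified against your version; volume and subvolume names are assumptions):

  volume unify0
    type cluster/unify
    option scheduler rr
    option rr.limits.min-free-disk 10%
    option namespace ns
    subvolumes brick1 brick2
  end-volume

Raising the limit above the roughly 5.03% that the nearly-full /dev/sdc1 reports keeps rr from scheduling new files onto it.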

[Gluster-devel] 2 out of 4 bonnies failed :-((

2008-01-07 Thread Sascha Ottolski
Hi, I found a somewhat frustrating test result after the weekend. I started a bonnie on four different clients (so a total of four bonnies in parallel). I have two servers, each with two partitions, which are unified and AFRed "over cross", so each server has a brick and a mirrored brick of the other, …
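A minimal sketch of what such an "over cross" client spec might look like (every name, host, and the namespace brick below are assumptions for illustration; the original configs are not included in this digest):

  volume s1-brick
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume brick
  end-volume

  volume s2-mirror
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-subvolume mirror
  end-volume

  # s2-brick and s1-mirror are declared the same way, with hosts swapped

  volume afr1
    type cluster/afr
    subvolumes s1-brick s2-mirror
  end-volume

  volume afr2
    type cluster/afr
    subvolumes s2-brick s1-mirror
  end-volume

  volume unify0
    type cluster/unify
    option scheduler rr
    option namespace ns
    subvolumes afr1 afr2
  end-volume

Note that unify in the 1.3 series requires a namespace volume ("ns" here), which must itself be declared as one more brick. Each primary brick is mirrored onto the other server, matching the "each server has a brick and a mirrored brick of the other" description.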