> I got some very odd failover problems when using the RR DNS failover
> config (when testing failures mid-write, etc), but that was a few
> versions back. I highly recommend using AFR on the client side as a
> failover solution. It's very robust, and easy to deal with on the
> server end (since y
matthew zeier wrote:
> Anand Avati wrote:
> > Here is a very nice tutorial from Paul England which gives a usage
> > case for server side replication and high availability -
> > http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
> > The configuration in the article c
>
> In that config the client is only mounting from one server
> (roundrobin.gluster.local). Seems that a failure would be user
> impacting, especially if roundrobin.gluster.local gave me the failed
> server's address again.
yes, but the 'effect' is not as bad as it sounds. glusterfs caches al
Here is a very nice tutorial from Paul England which gives a usage case for
server side replication and high availability -
http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
The configuration in the article can be further simplified by removing the
unify la
> > Brandon,
> > who does the copy is decided by where the AFR translator is loaded. if you
> > have AFR loaded on the client side, then the client does the two writes. you
> > can also have AFR loaded on the server side, and have the server do the
> > replication. Translators can be loaded anywhere
I imagine the server config would look like:
volume local-brick
type storage/posix
option directory /mnt/local-brick
end-volume
volume remote-brick
type protocol/client
option transport-type tcp/client
option remote-host 10.2.10.100
option remote-subvolume other-local-brick
end-volume
vol
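The preview is cut off there, but the missing piece is presumably an AFR volume over the two bricks plus a protocol/server export. A rough sketch of what the rest of a server-side setup could look like (untested; the volume names afr-brick/server and the auth line are my guesses, not taken from the original mail):

volume afr-brick
  type cluster/afr
  subvolumes local-brick remote-brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.afr-brick.allow *
  subvolumes afr-brick
end-volume

The second server would carry the mirror-image spec: its own posix volume exported under the name other-local-brick, and its remote-brick pointing back at the first host.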
I'm working on trying to figure out how to do this right now, stumbling
my way through a server config. Using NFS my 2 servers sit at under
0.40 load most of the time; I would rather add extra load to them than
to my client machines.
I imagine the server config would look like:
volume local-brick
Brandon,
who does the copy is decided by where the AFR translator is loaded. if you
have AFR loaded on the client side, then the client does the two writes. you
can also have AFR loaded on the server side, and have the server do the
replication. Translators can be loaded anywhere (client or server, any
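To make the client-side variant concrete, a minimal client volume spec might look roughly like the following (the addresses match the question below; remote-subvolume "brick" is a placeholder for whatever the servers actually export) -- treat it as a sketch, not a verified config:

volume server1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.10
  option remote-subvolume brick
end-volume

volume server2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.11
  option remote-subvolume brick
end-volume

volume afr0
  type cluster/afr
  subvolumes server1 server2
end-volume

With this loaded on the client, every write goes from the client to both servers.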
I have been reading through old mails for this list and the wiki and I
am confused about which machine does the writes in an AFR setup.
Say I have two "servers" (192.168.0.10, 192.168.0.11) with configs -
volume locks
type features/posix-locks
subvolumes brick
end-volume
volume
Hello,
just a little review of the afr "struct stat" return scheme:
there are 18 functions[1] in xlator_fops which return a struct stat*
in cbk. in 1.3.7 afr implements 16 of them (except fchmod and fchown,
which return ENOSYS), and in TLA, all of them are implemented.
most of them adopt a scheme to
My experience with Bonnie running on GlusterFS (or NFS) shows that you
don't actually need to use the ridiculously large file sizes (to
defeat kernel caching). If it is taking ages, try telling Bonnie to
use a smaller file size -- it generally doesn't affect the benchmark
results (much?).
-- Sam.
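For example (assuming bonnie++; classic Bonnie's flags differ slightly), something like this keeps the working set small:

bonnie++ -d /mnt/glusterfs -s 512 -u nobody

Here -d is the test directory on the GlusterFS mount, -s the per-file size in MB, and -u the user to run as; /mnt/glusterfs is just a placeholder path.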
Sascha,
please roll back to patch-629. The self heal changes are half done. In the
mean time, can you open the core with gdb :
gdb -c /core.PID glusterfs
and at the gdb prompt:
(gdb) bt
and give us this output. it will be of great help.
avati
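For what it's worth, if the gdb on that box is new enough to support -batch/-ex, the same backtrace can be captured non-interactively, e.g.:

gdb -batch -ex bt -c /core.PID $(which glusterfs) > glusterfs-backtrace.txt

(core path and binary name as above; adjust the binary path if glusterfs is installed elsewhere).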
2008/1/7, Sascha Ottolski <[EMAIL PROTECTED]>:
> > Sascha,
> > the logs say op_errno=28, which is ENOSPC (no space left on device). were
> > you aware of that already?
>
> hmm, didn't see this, but this seems more than unlikely, with almost 80 GB
> of free space. however, I'm not sure how much each bonnie would claim...
the op_errno
On Monday 07 January 2008 17:39:15, you wrote:
> Sascha,
> these patches are still not final (self heal enhancements are on the way)
> but, as-is they should be stable.
thanks, but I think there is a problem :-( just gave 'em a whirl, but the
client would segfault on first access of the mount po
Sascha,
these patches are still not final (self heal enhancements are on the way)
but, as-is they should be stable.
avati
2008/1/7, Sascha Ottolski <[EMAIL PROTECTED]>:
> Hi,
> does anyone know if the patches 629 till 633 can be considered stable? they
> sound as if they could increase per
Sascha,
the logs say op_errno=28, which is ENOSPC (no space left on device). were
you aware of that already?
avati
2008/1/7, Sascha Ottolski <[EMAIL PROTECTED]>:
> Hi,
> I found a somewhat frustrating test result after the weekend. I started a
> bonnie on four different clients (so a total o
Hi,
don't know if it's a bug or a feature: I have fstab entries for my gluster
mounts. If I do "mount -a" several times, I end up with several identical
looking mounts. The same thing happens if glusterfs is called manually:
glusterfs on /mnt/gluster-test type fuse
(rw,nosuid,nodev,allow_other
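Until that's resolved, a trivial guard on the client works around the duplicate mounts (sketch only; assumes the mountpoint utility is installed):

mountpoint -q /mnt/gluster-test || mount /mnt/gluster-test

i.e. only call mount if the directory is not already a mount point; a grep on /proc/mounts does the same job if mountpoint isn't available.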
Hi,
does anyone know if the patches 629 till 633 can be considered stable? they
sound as if they could increase performance, is this the case?
Thanks a lot,
Sascha
Hello,
consider the following case in [1]: /dev/sdc1 is full, but the
calculated free-disk percentage will be 5 in the rr scheduler (it's about
5.03%), and rr will still want to create files on that disk.
I have found a workaround, just increase the value in the conf file. But
please increase the default val
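For anyone searching later: the value being referred to is the scheduler's minimum-free-disk limit in the unify volume. If I recall the 1.3-era option name correctly (treat the exact spelling as an assumption, and the volume/brick names as placeholders), raising it looks something like:

volume unify0
  type cluster/unify
  # placeholder namespace volume
  option namespace brick-ns
  option scheduler rr
  # assumed option name; set it higher than the shipped default
  option rr.limits.min-free-disk 10%
  subvolumes brick1 brick2 brick3
end-volume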
Hi,
I found a somewhat frustrating test result after the weekend. I started a
bonnie on four different clients (so a total of four bonnies in parallel). I
have two servers, each with two partitions, which are unified and AFR'ed "over
cross", so each server has a brick and a mirrored brick of the other,