Hi,
I'm setting up gluster to share /usr/local among 24 compute nodes. The
basic goal is to be able to change files in /usr/local in one place, and
have it replicate out to all the other nodes.
What I'd like to avoid is having a single point of failure when one (or
several) nodes go down.
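One way to sketch that layout, using the volume CLI found in newer gluster releases (the hostnames, brick paths, and volume name here are placeholders, and a 2-way replica is shown for brevity rather than all 24 nodes):

```shell
# Create a 2-way replicated volume from bricks on two servers
# (run once, on any server in the trusted pool):
gluster volume create usrlocal replica 2 \
    server1:/export/usrlocal server2:/export/usrlocal
gluster volume start usrlocal

# On each compute node, mount the volume over /usr/local;
# writes made through the mount are replicated to every brick:
mount -t glusterfs server1:/usrlocal /usr/local
```

Since the client can fail over to the other brick server, losing one server should not take the mount down.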
Stephan von Krawczynski wrote:
Hello all,
I am currently evaluating glusterfs, using a testbed of four (real) boxes:
two configured as fileservers and two as clients, see configs below.
All I have to do to make the fs hang on both clients (sooner or later) is to
run bonnie (a filesystem benchmark).
Paolo Pisati wrote:
Could you try a different speed/stress test?
For example: download the latest firefox archive (or any other
big-enough compilable application),
decompress it on the server, export the filesystem via NFS and via glusterfs,
and try to compile it from
the client. What's the difference?
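For what it's worth, the timing side of that comparison can be sketched with a small helper; the mount points and make flags in the commented example are hypothetical:

```shell
# Wall-clock timing helper: runs a command, prints elapsed seconds.
time_cmd() {
    local start end
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    end=$(date +%s)
    echo $(( end - start ))
}

# Example comparison (commented out; needs real mounts on the client):
# nfs_secs=$(cd /mnt/nfs/firefox-src && time_cmd make -j4)
# gfs_secs=$(cd /mnt/gluster/firefox-src && time_cmd make -j4)
# echo "nfs=${nfs_secs}s glusterfs=${gfs_secs}s"
```

Running the same build tree on both mounts from the same client keeps the comparison fair.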
t=200000 && sync"
Any ideas?
Thanks,
Matt
Matt M wrote:
Hi All,
I'm new to gluster and have a basic test environment of three old PCs:
two servers and one client. I've currently got it configured to do AFR
on the two servers and HA on the client, according to this example:
http://www.gluster.org/docs/index.php/High-availability_storage_using_serve
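For readers without the configs at hand, the server side of such a setup can be sketched in the legacy volfile format; the hostnames, directory, and volume names below are illustrative, not the actual config from that example:

```
# server1's volfile; server2's is symmetric (remote-host server1)
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume posix
end-volume

volume afr
  type cluster/replicate      # server-side AFR: mirror local posix and the peer
  subvolumes posix remote
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.afr.allow *
  subvolumes afr
end-volume
```

With AFR done on the servers, the client only needs a plain protocol/client volume (plus HA across the two servers), which is what the linked example describes.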