1. Stability
2. Stability
3. Stability
If my customers lose one file, everything else is irrelevant. It
really is that simple.
Cheers
Kon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
I attempted to replicate it with no luck. This is, however, the exact type of
error I was seeing with NGINX. My cluster is a distribute pair over
ALB-bonded GigE, so it may be that the problem simply manifests less
frequently in my layout than in yours.
Cheers
Kon
On Wed, Feb 23, 2011 at 4:48 PM, Luis
On Mon, Feb 21, 2011 at 9:45 AM, Steve Wilson ste...@purdue.edu wrote:
We had trouble with reliability for small, actively-accessed files on a
distribute-replicate volume in both GlusterFS 3.1.1 and 3.1.2. It seems that
the replicated servers would eventually get out of sync with each other on
I'm having I/O errors on clients that mount a 3.1.1 volume via Gluster's
native NFS server. The clients are running NGINX.
Any advice appreciated! Perhaps my access mode is incorrect for NFS?
The throughput is about 20 Mbps sustained, but the system benched at
about 600-900 Mbps, so that shouldn't be a
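In case the mount options matter here, this is roughly the invocation I would expect for the NFS path (server name and paths are placeholders, not from my setup). As I understand it, Gluster's built-in NFS server speaks NFSv3 over TCP only, so pinning the version and transport is the usual advice:

```shell
# Mount a Gluster volume via the built-in NFS server.
# Gluster's NFS translator supports NFSv3 over TCP only, so force both.
# "server1", "/webroot", and "/mnt/webroot" are placeholder names.
mount -t nfs -o vers=3,tcp,mountproto=tcp server1:/webroot /mnt/webroot
```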
What is the desired operation mode for mounting local volumes to
re-export when creating volumes in an automated fashion?
Using the gluster CLI to create a new volume automagically does not place
any .vol files in /etc/glusterfs. I'm not sure if this is by design, but
it isn't documented.
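For context, the automated path I'm driving looks roughly like this (the volume name, server names, and brick paths are made up for illustration). My understanding is that on 3.1.x the generated vol files land in glusterd's working directory rather than /etc/glusterfs:

```shell
# Create and start a distributed volume from a script.
# "vol0", "server1", "server2", and the brick paths are placeholders.
gluster volume create vol0 server1:/export/brick1 server2:/export/brick1
gluster volume start vol0

# The generated .vol files appear to live under glusterd's working
# directory (/etc/glusterd/vols/vol0/ on 3.1.x), not /etc/glusterfs.
ls /etc/glusterd/vols/vol0/
```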
Creating a new
Not sure where to file this, so I am posting it here.
Probing the local brick's IP address results in a disconnected peer that
cannot be removed:
- On server x.x.x.x, run gluster peer probe x.x.x.x
- Issuing gluster peer status shows x.x.x.x as Disconnected, with a
UUID of all zeros. This is replicated to
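If it helps anyone reproduce, the sequence was essentially the following (10.0.0.1 stands in for the local brick's address; the exact detach failure is my reading of the symptom, not captured output):

```shell
# Reproduction sketch: probing a node's own address from that node.
gluster peer probe 10.0.0.1      # run on 10.0.0.1 itself
gluster peer status              # peer shows as Disconnected,
                                 # with an all-zero UUID
gluster peer detach 10.0.0.1     # the phantom peer cannot be removed
```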
On Thu, Dec 9, 2010 at 2:05 PM, Jacob Shucart ja...@gluster.com wrote:
With Gluster 3.1.1, you no longer need to do anything with the vol files.
If you create a volume like you did below, then you simply mount it like:
mount -t glusterfs 172.16.16.50:/pool /pool/mount
Gluster automatically
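To make a mount like that survive a reboot, the matching fstab entry (same address and paths as the mount command above) would, I believe, be something like this config fragment:

```shell
# /etc/fstab entry for the native-client mount shown above.
# _netdev delays the mount until networking is up.
172.16.16.50:/pool  /pool/mount  glusterfs  defaults,_netdev  0 0
```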