1. Stability
2. Stability
3. Stability
If my customers lose one file, everything else is irrelevant. It
really is that simple.
Cheers
Kon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
Attempted to replicate with no luck. This is, however, the exact type of
error I was seeing with NGINX. My cluster is a distribute pair with
ALB-bonded GigE, so it may be that my layout simply manifests the
problem less frequently than yours.
Cheers
Kon
On Wed, Feb 23, 2011 at 4:48 PM, Luis wrote:
On Mon, Feb 21, 2011 at 9:45 AM, Steve Wilson wrote:
> We had trouble with reliability for small, actively-accessed files on a
> distribute-replicate volume in both GlusterFS 3.1.1 and 3.1.2. It seems that
> the replicated servers would eventually get out of sync with each other on
> these kinds of files.
I'm having I/O errors on my clients, which are mounting 3.1.1 via Gluster's
native NFS. The clients are running NGINX.
Any advice appreciated! Perhaps my access mode is incorrect for NFS?
The throughput is about 20Mbps sustained, but the system benched at
about 600-900Mbps, so that shouldn't be a problem.
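For reference, a minimal sketch of how a volume might be mounted over Gluster's built-in NFS server (the server address and volume name are placeholders; Gluster's NFS server speaks NFSv3 over TCP, so the client must be told not to negotiate v4 or UDP):

```shell
# Mount a Gluster volume via the built-in NFS server.
# "172.16.16.50" and "pool" are hypothetical names for illustration.
mount -t nfs -o vers=3,proto=tcp,nolock 172.16.16.50:/pool /mnt/pool
```

If the client silently falls back to NFSv4 or UDP, mounts can appear to work but misbehave under load, so pinning the version and transport explicitly is a reasonable first thing to rule out.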
On Thu, Dec 9, 2010 at 2:08 PM, Jacob Shucart wrote:
> Can you tell me a little more about the node? Does it have several IP
> addresses? Did you get the same results with all of the IP addresses? Or
> just one of them? When I tried to probe the IP address of the local node,
> I received a mes
On Thu, Dec 9, 2010 at 2:05 PM, Jacob Shucart wrote:
> With Gluster 3.1.1, you no longer need to do anything with the vol files.
> If you create a volume like you did below, then you simply mount it like:
>
> mount -t glusterfs 172.16.16.50:/pool /pool/mount
>
> Gluster automatically gets the volume information.
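Building on that, a persistent native mount could be sketched in /etc/fstab (the address and paths follow the example above; assumes the glusterfs client package is installed):

```shell
# /etc/fstab entry for the native glusterfs mount shown above.
# _netdev delays the mount until the network is up at boot.
172.16.16.50:/pool  /pool/mount  glusterfs  defaults,_netdev  0  0
```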
A little information on my configuration for this task:
- I am deploying a GlusterFS 3.1.1 cluster using 4 nodes in mirror mode.
- Each pair is running ucarp to provide failover support.
- The two ucarp IPs are then made available to clients via DNS round-robin.
- My access mode is read-only for CIFS with no credentials.
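As a sketch, the Gluster side of the topology above might be created along these lines (hostnames, brick paths, and the volume name are placeholders; the ucarp and DNS round-robin pieces live outside Gluster itself):

```shell
# Four bricks with replica 2 yields a distribute-replicate volume:
# bricks are mirrored in the order listed (node1/node2, node3/node4).
gluster volume create pool replica 2 \
    node1:/export/brick1 node2:/export/brick1 \
    node3:/export/brick1 node4:/export/brick1
gluster volume start pool
```

Clients would then resolve a round-robin DNS name to one of the two ucarp virtual IPs, each of which fails over within its mirrored pair.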
Not sure where to file this, so I am posting it here.
Probing the local brick's own IP address results in a disconnected
peer that cannot be removed:
- gluster peer probe x.x.x.x
- Issuing gluster peer status shows x.x.x.x as disconnected with a
UUID of all zeros. This state is replicated to the other bricks.
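A minimal reproduction sketch (x.x.x.x stands for the local node's own address, as above; the described behaviour is paraphrased from the report, not captured verbatim):

```shell
# From a node, probe that node's own IP address.
gluster peer probe x.x.x.x

# The peer list now shows the address as disconnected, UUID all zeros,
# and this bogus entry propagates to the other peers.
gluster peer status

# Cleanup fails because the peer is in the disconnected state.
gluster peer detach x.x.x.x
```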
What is the desired operation mode for mounting local volumes to
re-export when creating volumes in an automated fashion?
Creating a new volume with the gluster CLI does not automatically place
any .vol files in /etc/glusterfs. I'm not sure if this is by design,
but it isn't documented.
Creating a new