Re: [Gluster-users] Seeking Feedback on Gluster Development Priorities/Roadmap
1. Stability
2. Stability
3. Stability

If my customers lose one file, everything else is irrelevant. It really is that simple.

Cheers
Kon
Re: [Gluster-users] Does anyone see inaccessible files under NFS client for the distributed volumes?
Attempted to replicate, with no luck. This is, however, the exact type of error I was seeing with NGINX. My cluster is a distribute pair with ALB-bonded GigE, so it may be that my layout simply manifests the problem less frequently than yours.

Cheers
Kon

On Wed, Feb 23, 2011 at 4:48 PM, Luis l...@luiscerezo.org wrote:
> I have a partial workaround, but this one is troublesome. Anyone
> experience anything like this? If so, please contact me offline so we
> can compare notes.
Re: [Gluster-users] Fwd: files not syncing up with glusterfs 3.1.2
On Mon, Feb 21, 2011 at 9:45 AM, Steve Wilson ste...@purdue.edu wrote:
> We had trouble with reliability for small, actively-accessed files on a
> distribute-replicate volume in both GlusterFS 3.1.1 and 3.1.2. It seems
> that the replicated servers would eventually get out of sync with each
> other on these kinds of files. For a while, we dropped replication and
> only ran the volume as distributed. This has worked reliably for the
> past week or so without any of the errors we were seeing before: no
> such file, invalid argument, etc.

I'm serving thousands of small files over NFSv3 through NGINX with a distribute volume and have had the opposite experience. Unfortunately, when NGINX can't access a file over NFS it means a customer calling us, so right now Gluster is basically sitting idle (I posted my output to the list a while back with no response).

Cheers
Kon
[Gluster-users] NFS problems with 3.1.1
I'm seeing I/O errors on clients that mount a 3.1.1 volume via Gluster's native NFS. The clients are running NGINX. Any advice appreciated! Perhaps my access mode is incorrect for NFS? Throughput is about 20 Mbps sustained, but the system benched at about 600-900 Mbps, so bandwidth shouldn't be the problem.

Errors along the lines of:

    2010/12/22 08:09:32 [alert] 12206#0: *5807333 sendfile() failed (5: Input/output error) while sending

which matches this in nfs.log (one system is GMT, the other PST):

    [2010-12-22 16:07:36.937671] I [dht-common.c:369:dht_revalidate_cbk] pool-dht: subvolume pool-client-1 returned -1 (Invalid argument)

- NFS in fstab on clients:
    10.2.16.51:/pool /gfs1 nfs rw,bg,rsize=8192,wsize=8192,timeo=14,noatime,intr,soft,retrans=6 0 0
- GFS configuration: 2 nodes running Ubuntu 10.04.1 LTS; 1 brick per node consisting of EXT4+LVM on two RAID1 2TB drives (total of 2TB per node)
- GFS create line:
    gluster volume create pool transport tcp 10.2.16.51:/pool/raw 10.2.16.52:/pool/raw
- Access mode: client 1 accesses NFS on node 1; client 2 accesses NFS on node 2
- Interfaces: 3 GigE NICs on each node, bonded in bond mode 6

Cheers
Kon
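P.S. One thing I may try (just a hunch on my part, not a confirmed fix): the alert shows sendfile() itself returning EIO, and sendfile on NFS-backed files is a known trouble spot, so forcing NGINX to fall back to plain read()/write() might sidestep it:

    # nginx.conf sketch -- untested hunch, not a verified fix for this report
    http {
        sendfile off;    # avoid sendfile() on the NFS mount; NGINX reads the file and write()s it instead
    }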
[Gluster-users] GlusterFS 3.1.1 - local volume mount
What is the desired operation mode for mounting local volumes to re-export when creating volumes in an automated fashion? Using gluster to create a new volume automagically does not place any .vol files in /etc/glusterfs. I'm not sure whether this is by design, but it isn't documented.

Creating a new volume:

    gluster volume create pool replica 2 transport tcp 172.16.16.50:/pool/raw 172.16.16.51:/pool/raw 172.16.16.52:/pool/raw 172.16.16.53:/pool/raw

I can mount this as follows:

    glusterfs --volfile=/etc/glusterd/vols/pool/pool-fuse.vol /pool/mount

Should I make a local copy of the pool-fuse.vol volume file and place it in /etc/glusterfs?

Cheers
Kon
[Gluster-users] GlusterFS 3.1.1 - peer attach bug
Not sure where to file this, so I am posting it here. Probing the local brick's own IP address results in a disconnected peer that cannot be removed:

- On server x.x.x.x:
    gluster peer probe x.x.x.x
- Issuing gluster peer status now shows x.x.x.x as disconnected with a UUID of all zeros. This state is replicated to the other bricks.
- gluster peer detach x.x.x.x has no effect.

The only resolution is to stop the gluster service on all nodes, remove the offending node from /etc/glusterd/peers on all nodes, and restart glusterd on all nodes. Gluster should prevent this from happening in the first place.

Cheers
Kon
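P.S. For anyone hitting the same thing, the manual cleanup I described looks roughly like this on each node (the service name and the peer file name may differ on your install):

    service glusterd stop
    rm /etc/glusterd/peers/<file-for-offending-peer>    # placeholder; remove the entry for the bogus peer
    service glusterd start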
Re: [Gluster-users] GlusterFS 3.1.1 - local volume mount
On Thu, Dec 9, 2010 at 2:05 PM, Jacob Shucart ja...@gluster.com wrote:
> With Gluster 3.1.1, you no longer need to do anything with the vol
> files. If you create a volume like you did below, then you simply
> mount it like:
>
>     mount -t glusterfs 172.16.16.50:/pool /pool/mount
>
> Gluster automatically gets the volume information when mounting. This
> is described at:
> http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Manually_Mounting_Volumes
>
> There is no need to do anything with the vol files you found in
> /etc/glusterd, and in fact using those can cause some functionality,
> such as volume elasticity, to break.

Thanks, good to know. My specific use case is that I am re-exporting for CIFS -- the mount is local. My concern with using the local machine's IP address for mounting is the latency introduced by the round trip. Is this a valid concern?

Cheers
Kon
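P.S. For context, the local re-export I have in mind is roughly this (share name and paths are just placeholders):

    # on the same node that serves the volume
    mount -t glusterfs 172.16.16.50:/pool /pool/mount

    # /etc/samba/smb.conf -- export the local Gluster mount over CIFS
    [pool]
        path = /pool/mount
        read only = no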