Re: [Gluster-users] Question about Volume Type when bricks are on SAN

2010-10-22 Thread Patrick Irvine
On 22/10/2010 1:14 PM, Mike Hanby wrote: Thanks Patrick, In that sort of configuration, wouldn't having a failover configuration where one server can take over another server's brick negate the need for replication? Or, wouldn't replicating negate the need for the corosync/pacemaker config, i

Re: [Gluster-users] Update on Gluster 3.1.0 Product GA

2010-10-22 Thread Renee Beckloff
Sorry- that was my typo- We are pulling 3.1.0, not 3.1.1 -Original Message- From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Renee Beckloff Sent: Friday, October 22, 2010 2:36 PM To: 'Bernard Li' Cc: gluster-users@gluster.org; gluster-de...@

Re: [Gluster-users] Update on Gluster 3.1.0 Product GA

2010-10-22 Thread Renee Beckloff
Hi Bernard- We are only pulling 3.1.1 Storage platform. 3.1.1 filesystem is still GA. Please let me know if you have any other questions, renee -Original Message- From: Bernard Li [mailto:bern...@vanhpc.org] Sent: Friday, October 22, 2010 1:56 PM To: Renee Beckloff Cc: gluster-de...@nong

Re: [Gluster-users] Update on Gluster 3.1.0 Product GA

2010-10-22 Thread Bernard Li
Hi Renee: On Fri, Oct 22, 2010 at 1:33 PM, Renee Beckloff wrote: > First, thanks to you all for always providing the feedback we need to > continue to grow the functionality and stability of our storage platform > and file system.  Due to some unforeseen issues that were not produced > during be

Re: [Gluster-users] autofs problem

2010-10-22 Thread Craig Carl
Luis - I know we got you an answer offline but I still wanted to post this for the users list - users -fstype=glusterfs :/etc/autofs_dfd/users-fuse.vol tps -fstype=glusterfs :/etc/autofs_dfd/tps-fuse.vol The ones above follow our 3.0.x syntax, which is still honored. users -fstype=glusterfs sd

[Gluster-users] Update on Gluster 3.1.0 Product GA

2010-10-22 Thread Renee Beckloff
Hi Everyone- First, thanks to you all for always providing the feedback we need to continue to grow the functionality and stability of our storage platform and file system. Due to some unforeseen issues that were not produced during beta, we have decided to pull our current release of Gluster

Re: [Gluster-users] Question about Volume Type when bricks are on SAN

2010-10-22 Thread Mike Hanby
Thanks Patrick, In that sort of configuration, wouldn't having a failover configuration where one server can take over another server's brick negate the need for replication? Or, wouldn't replicating negate the need for the corosync/pacemaker config, i.e. server 1 goes down, no problem since rep

Re: [Gluster-users] Question about Volume Type when bricks are on SAN

2010-10-22 Thread Patrick Irvine
Hi mike -Original Message- From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mike Hanby Sent: Friday, October 22, 2010 12:23 PM To: gluster-users@gluster.org Subject: [Gluster-users] Question about Volume Type when bricks are on SAN >One fina

[Gluster-users] Question about Volume Type when bricks are on SAN

2010-10-22 Thread Mike Hanby
Howdy, I'm in the process of setting up GlusterFS for our users to test. I'd like some opinions about which volume type makes sense for our configuration. Here's our hardware config: 2 x Gluster servers with 4Gbit FC and 10Gbit Ethernet (both FC and 10GigE are dual path to their respective swit

Re: [Gluster-users] autofs problem

2010-10-22 Thread Brent A Nelson
3.0.6 also appears to have fixed the autofs problem. Thanks, Brent On Wed, 20 Oct 2010, Brent A Nelson wrote: On Wed, 20 Oct 2010, Amar Tumballi wrote: Brent, Can you please try with 3.1.0 ? (if its a new setup) I remember seeing this issue long back when I was fixing 'autofs' issues with

Re: [Gluster-users] Some client problems with TCP-only NFS in Gluster 3.1

2010-10-22 Thread Craig Carl
Brent - Those OSes are 13 and 14 years old, respectively. I'm all for stable but we certainly don't test on them :) I've sent a note to engineering, I'll let you know what they say about UDP support. Craig From: "Brent A Nelson" To: "Craig Carl" Cc: gluster-users@gluster.org Sent: Fri

Re: [Gluster-users] Some client problems with TCP-only NFS in Gluster 3.1

2010-10-22 Thread Brent A Nelson
Alas, that does not work on Solaris 2.6 or 7. Solaris 7 was apparently the first to support the WebNFS URL syntax, but it otherwise has the same behavior as 2.6. Both seem to be hardwired to look for the UDP mountd port, even when told to use TCP. On each, you can specify the NFS port, but a

Re: [Gluster-users] GlusterFS 3.1 on Amazon EC2 Questions

2010-10-22 Thread Joshua Saayman
Thanks Craig, I posted your answers to my blog. On Fri, Oct 22, 2010 at 4:04 PM, Craig Carl wrote: > Joshua - >Thanks for the great write-up! To answer Gart's questions - > > 1. Ideally you shouldn't have any data on the devices you use for new > Gluster volumes, we don't test that extensive

Re: [Gluster-users] I'm new to Gluster, and have some questions

2010-10-22 Thread Daniel Mons
On Fri, Oct 22, 2010 at 10:55 AM, Horacio Sanson wrote: > Distributed volume:  Aggregates the storage of several directories (bricks in > gluster terms) among several computers. The benefit is that you  can > grow/shrink the volume as you please. The bad part is that  this offers no > performance/

Re: [Gluster-users] GlusterFS 3.1 on Amazon EC2 Questions

2010-10-22 Thread Craig Carl
Craig - You can read from the back-end, but yes, we can't guarantee the timeliness or consistency of that data. Most users limit that access to backups and non-critical maintenance tasks. Thanks, Craig --> Craig Carl Senior Systems Engineer Gluster From: "Craig Box" To: "Craig Carl"

Re: [Gluster-users] GlusterFS 3.1 on Amazon EC2 Questions

2010-10-22 Thread Craig Box
Hi Craig, 2. You can read from the back end, all writes should go through the Gluster > mount point. > This contradicts what I have read in the past. If a file is out of date on the current node, you won't get the updated version, so you are always supposed to read from the mount point. Right?

Re: [Gluster-users] Some client problems with TCP-only NFS in Gluster 3.1

2010-10-22 Thread Beat Rubischon
Hi Stephan! Quoting (22.10.10 15:47): > you are talking of the problem with the identification field being only 16 bits, > right? Right. I had some pretty bad experience ~2006 before I switched to TCP. > We experienced this scenario to be far less severe than TCP busted by packet > drops. In fact w

Re: [Gluster-users] GlusterFS 3.1 on Amazon EC2 Questions

2010-10-22 Thread Craig Carl
Joshua - Thanks for the great write-up! To answer Gart's questions - 1. Ideally you shouldn't have any data on the devices you use for new Gluster volumes, we don't test that extensively. It will generally work, the self heal process will replicate & distribute the files as necessary, on first

Re: [Gluster-users] I'm new to Gluster, and have some questions

2010-10-22 Thread Steve Wilson
On 10/21/2010 08:55 PM, Horacio Sanson wrote: This is also something I would like to know. When connecting clients I use the command mount -t [nfs|glusterfs] <ip-address>:/<volume> /mount/point where ip-address is the IP of any of the servers that have the volume configured. It is not clear to me how the relia

Re: [Gluster-users] Some client problems with TCP-only NFS in Gluster 3.1

2010-10-22 Thread Stephan von Krawczynski
On Fri, 22 Oct 2010 15:18:09 +0200 Beat Rubischon wrote: > Hi Stephan! > > Quoting (22.10.10 15:05): > > > We never experienced any performance problem with NFS over UDP. > > Be careful when using NFSoUDP on recent networking hardware. It's simply too > fast for the primitive reassembly algor

Re: [Gluster-users] Some client problems with TCP-only NFS in Gluster 3.1

2010-10-22 Thread Beat Rubischon
Hi Stephan! Quoting (22.10.10 15:05): > We never experienced any performance problem with NFS over UDP. Be careful when using NFSoUDP on recent networking hardware. It's simply too fast for the primitive reassembly algorithm in UDP. You will get silent data corruption. SuSE warns about this fa
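The warning above rests on simple arithmetic: the IPv4 Identification field used for fragment reassembly is only 16 bits, so once datagrams flow fast enough that the ID space wraps within the reassembly timeout, fragments from different datagrams can be joined silently. A rough back-of-the-envelope sketch (link speeds and the 32 KB datagram size are illustrative assumptions, not figures from the thread):

```python
# How quickly the 16-bit IPv4 Identification field wraps for large
# NFS-over-UDP datagrams at full link rate. If the wrap time drops below
# the IP reassembly timeout (commonly around 30 s), fragments from two
# different datagrams can share an ID and be mis-reassembled.
ID_SPACE = 2 ** 16  # distinct IPv4 Identification values

def wrap_time_seconds(link_bps, datagram_bytes):
    """Seconds for one host pair to exhaust the ID space at line rate."""
    datagrams_per_second = link_bps / (datagram_bytes * 8)
    return ID_SPACE / datagrams_per_second

for name, bps in (("1 GbE", 10**9), ("10 GbE", 10**10)):
    secs = wrap_time_seconds(bps, 32 * 1024)  # 32 KB NFS read replies
    print(f"{name}: ID space wraps in ~{secs:.1f} s")
```

Already at gigabit rates the space wraps in roughly 17 seconds, i.e. within a typical reassembly window; at 10 GbE it is under 2 seconds, which is why TCP (with its own sequencing and retransmission) sidesteps the problem.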

Re: [Gluster-users] Some client problems with TCP-only NFS in Gluster 3.1

2010-10-22 Thread Craig Carl
Stephan - I missed that in the FAQ, thanks! I'll forward your comments to the engineering team that wrote gNFS, see what feedback I get. Craig --> Craig Carl Senior Systems Engineer Gluster From: "Stephan von Krawczynski" To: "Craig Carl" Cc: "Brent A Nelson" , gluster-users@gluster

[Gluster-users] GlusterFS 3.1 on Amazon EC2 Questions

2010-10-22 Thread Joshua Saayman
I'll appreciate help with the following questions by Gart on my blog (http://goo.gl/8eKn) Thanks Joshua Thanks for this accessibly straightforward series. A couple basic questions about the way gluster works: 1) If I do this exercise -exactly- as you have done, but one of the instances EBS

Re: [Gluster-users] Some client problems with TCP-only NFS in Gluster 3.1

2010-10-22 Thread Stephan von Krawczynski
On Fri, 22 Oct 2010 04:46:44 -0500 (CDT) Craig Carl wrote: > [Resending due to incomplete response] > > Brent, > Thanks for your feedback. To mount with a Solaris client use - > ` mount -o proto=tcp,vers=3 nfs://<server>:38467/<volume> ` > > As to UDP access we want to force users to use TCP. Everything a

Re: [Gluster-users] caching.

2010-10-22 Thread Daniel Goolsby
On Fri, Oct 22, 2010 at 6:23 AM, Craig Carl wrote: > Daniel - >An idea and questions - > > 1. Are you fsync()'ing the directories after you create them? Could you try > that? > 2. Are you sure you are only accessing Gluster via the Gluster mountpoint? > 3. What version of Gluster? > 4. What O

[Gluster-users] cannot create volume

2010-10-22 Thread 陶毅
Hi all! I'm using glusterfs 3.1.0 now. I have 10 nodes running gluster, and I found a problem here: [r...@gluster-bak-1 /root] #gluster volume create db-backup stripe 4 transport tcp gluster-bak-3:/data3 gluster-bak-4:/data4 gluster-bak-5:/data5 gluster-bak-6:/data6 Creation of volume db-backup h

Re: [Gluster-users] caching.

2010-10-22 Thread Amon Ott
On Thursday 21 October 2010 Daniel Goolsby wrote: > I seem to have some kind of caching issue. I have a process that will > create hundreds of directories, then immediately spawn a parallel process > across multiple nodes. The job ends up failing because some of the nodes > cannot see the directo

Re: [Gluster-users] cannt delete volume

2010-10-22 Thread Craig Carl
You can upgrade to 3.1.1QA2 in place if you would rather not do a new install. As root run: `qa-mode --enable` Then in the GUI go to the GSN tab, upgrade from there. You will need to manually reboot. Thanks, Craig --> Craig Carl Senior Systems Engineer Gluster From: "Craig Carl"

Re: [Gluster-users] caching.

2010-10-22 Thread Craig Carl
Daniel - An idea and questions - 1. Are you fsync()'ing the directories after you create them? Could you try that? 2. Are you sure you are only accessing Gluster via the Gluster mountpoint? 3. What version of Gluster? 4. What OS, version and file system are you using? 5. How many storage se
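Suggestion 1 above can be sketched as follows, assuming POSIX semantics; a scratch directory stands in for the Gluster mount, and the names are hypothetical. The idea is to create the directory, then fsync both it and its parent so the new entry is flushed before remote nodes go looking for it:

```python
import os
import tempfile

def mkdir_durable(path):
    """Create a directory, then fsync it and its parent directory."""
    os.mkdir(path)
    for p in (path, os.path.dirname(path) or "."):
        fd = os.open(p, os.O_RDONLY)  # directories can be opened read-only
        try:
            os.fsync(fd)              # flush the new directory entry
        finally:
            os.close(fd)

# Demo on a scratch directory; on a cluster this would be the Gluster mount.
base = tempfile.mkdtemp()
jobdir = os.path.join(base, "jobdir")
mkdir_durable(jobdir)
print(os.path.isdir(jobdir))  # True
```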

Re: [Gluster-users] cannt delete volume

2010-10-22 Thread Craig Carl
At this point I'm going to suggest that you move to the Gluster File System version of Gluster. You will lose the web interface but this version is far more stable and complete. http://download.gluster.com/pub/gluster/glusterfs/3.1/LATEST/ If you want to continue to test Storage Platform engi

Re: [Gluster-users] loop

2010-10-22 Thread Craig Carl
Frederic - What version of Gluster? How did you get Gluster running again? Thanks, Craig --> Craig Carl Senior Systems Engineer Gluster From: frede...@placenet.org To: gluster-users@gluster.org Sent: Wednesday, October 20, 2010 3:29:21 AM Subject: [Gluster-users] loop hi just fo

[Gluster-users] loop

2010-10-22 Thread frederic
Hi, just for information: I use glusterfs on Debian squeeze. I have 2 servers with the glusterfs server running (replicate), and a mount with the glusterfs client. glusterfs-server uses /srv/glusterfs as the export directory. I made the mistake of mounting the client on the same directory in t

[Gluster-users] caching.

2010-10-22 Thread Daniel Goolsby
I seem to have some kind of caching issue. I have a process that will create hundreds of directories, then immediately spawn a parallel process across multiple nodes. The job ends up failing because some of the nodes cannot see the directories that the first process created. If I wait a few minu

[Gluster-users] Gluster in a web cluster

2010-10-22 Thread Mathieu Massebœuf - Tradingsat
Hi, I'm trying to figure out a solution for a web cluster, where part of the sources will be fixed (those can be handled by an rsync from the dev environment) - and the rest will be user-sent data (pictures and so on). I wanted to avoid the NAS solution for various reasons (price, nfs locking

Re: [Gluster-users] cannt delete volume

2010-10-22 Thread tee...@gsmserver.com
Unfortunately there are many more bugs and errors. :( It is very bad that English is not my native language; I have much to say. I will try as much as possible to simplify my thoughts. 1) After applying the patch the volume was deleted, but I cannot create a new volume any more. Next patch needed? 2)

[Gluster-users] autofs problem

2010-10-22 Thread Brent A Nelson
I'm working on replacing my Ubuntu 8.04 desktops with Ubuntu 10.04, but I've hit a snag. Automount hangs on glusterfs (tried 3.0.4 and 3.0.5) in the same manner as described on the RedHat Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=603378 So, it's apparently a problem in Fedora, too.

Re: [Gluster-users] Some client problems with TCP-only NFS in Gluster 3.1

2010-10-22 Thread Craig Carl
[Resending due to incomplete response] Brent, Thanks for your feedback. To mount with a Solaris client use - ` mount -o proto=tcp,vers=3 nfs://<server>:38467/<volume> ` As to UDP access we want to force users to use TCP. Everything about Gluster is designed to be fast, as NFS over UDP approaches line spee

Re: [Gluster-users] Split Brain?

2010-10-22 Thread Craig Carl
Vlad - I'm not sure why mounting/unmounting would resolve the error you are getting, I have asked someone from engineering to get in touch. In the meantime can you upgrade to 3.0.6? http://download.gluster.com/pub/gluster/glusterfs/3.0/LATEST/ Thanks, Craig - Original Message -