On 22/10/2010 1:14 PM, Mike Hanby wrote:
> Thanks Patrick,
> In that sort of configuration, wouldn't having a failover configuration where
> one server can take over another server's brick negate the need for
> replication? Or, wouldn't replicating negate the need for the
> corosync/pacemaker config, i…
Sorry - that was my typo -
We are pulling 3.1.0, not 3.1.1.
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Renee Beckloff
Sent: Friday, October 22, 2010 2:36 PM
To: 'Bernard Li'
Cc: gluster-users@gluster.org; gluster-de...@
Hi Bernard-
We are only pulling 3.1.1 Storage platform. 3.1.1 filesystem is still GA.
Please let me know if you have any other questions,
renee
-Original Message-
From: Bernard Li [mailto:bern...@vanhpc.org]
Sent: Friday, October 22, 2010 1:56 PM
To: Renee Beckloff
Cc: gluster-de...@nong
Hi Renee:
On Fri, Oct 22, 2010 at 1:33 PM, Renee Beckloff wrote:
> First, thanks to you all for always providing the feedback we need to
> continue to grow the functionality and stability of our storage platform
> and file system. Due to some unforeseen issues that were not produced
> during beta, we have decided to pull our current release of Gluster…
Luis -
I know we got you an answer offline, but I still wanted to post this for the
users list -
users -fstype=glusterfs :/etc/autofs_dfd/users-fuse.vol
tps -fstype=glusterfs :/etc/autofs_dfd/tps-fuse.vol
The ones above follow our 3.0.x syntax and are still honored.
users -fstype=glusterfs sd…
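As a sketch of how the two map styles differ (the truncated line above
presumably continued with a server:/volume pair; the names below are
examples, not taken from the message):

# 3.0.x style - point autofs at a client volfile on local disk
users -fstype=glusterfs :/etc/autofs_dfd/users-fuse.vol

# 3.1 style - name a server and a volume; the client fetches the
# volume file from that server at mount time
users -fstype=glusterfs server1:/users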
Hi Everyone-
First, thanks to you all for always providing the feedback we need to
continue to grow the functionality and stability of our storage platform
and file system. Due to some unforeseen issues that were not produced
during beta, we have decided to pull our current release of Gluster…
Thanks Patrick,
In that sort of configuration, wouldn't having a failover configuration where
one server can take over another server's brick negate the need for replication?
Or, wouldn't replicating negate the need for the corosync/pacemaker config,
i.e. server 1 goes down, no problem since rep…
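For reference, the replication being discussed is set up at volume-creation
time; a minimal sketch with placeholder host and brick names:

# replica 2: every file lands on both bricks, so clients keep working
# off server2 if server1 dies - no corosync/pacemaker takeover is
# needed for the data itself
gluster volume create users-vol replica 2 transport tcp \
    server1:/export/brick1 server2:/export/brick1
gluster volume start users-vol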
Hi Mike,
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Mike Hanby
Sent: Friday, October 22, 2010 12:23 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Question about Volume Type when bricks are on SAN
> One fina…
Howdy,
I'm in the process of setting up GlusterFS for our users to test. I'd like some
opinions about which volume type makes sense for our configuration.
Here's our hardware config:
2 x Gluster servers with 4Gbit FC and 10Gbit Ethernet (both FC and 10GigE are
dual path to their respective switches)…
3.0.6 also appears to have fixed the autofs problem.
Thanks,
Brent
On Wed, 20 Oct 2010, Brent A Nelson wrote:
On Wed, 20 Oct 2010, Amar Tumballi wrote:
Brent,
Can you please try with 3.1.0? (if it's a new setup)
I remember seeing this issue long back when I was fixing 'autofs' issues
with…
Brent -
Those OSes are 13 and 14 years old, respectively. I'm all for stable, but we
certainly don't test on them :) I've sent a note to engineering; I'll let you
know what they say about UDP support.
Craig
From: "Brent A Nelson"
To: "Craig Carl"
Cc: gluster-users@gluster.org
Sent: Fri
Alas, that does not work on Solaris 2.6 or 7. Solaris 7 was apparently
the first to support the WebNFS URL syntax, but it otherwise has the same
behavior as 2.6. Both seem to be hardwired to look for the UDP mountd
port, even when told to use TCP. On each, you can specify the NFS port,
but a…
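One way to check which transports a server's mountd actually registers
(a general illustration, not from the original message):

# program 100005 is mountd; one line is shown per registered transport
rpcinfo -p server1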
Thanks Craig, I posted your answers to my blog.
On Fri, Oct 22, 2010 at 4:04 PM, Craig Carl wrote:
> Joshua -
>Thanks for the great write-up! To answer Gart's questions -
>
> 1. Ideally you shouldn't have any data on the devices you use for new
> Gluster volumes, we don't test that extensively…
On Fri, Oct 22, 2010 at 10:55 AM, Horacio Sanson wrote:
> Distributed volume: aggregates the storage of several directories (bricks in
> gluster terms) across several computers. The benefit is that you can
> grow/shrink the volume as you please. The bad part is that this offers no
> performance/…
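As an illustration of that grow/shrink property (host and brick names are
placeholders):

# plain distribute: no replica keyword, files are spread across bricks
gluster volume create dist-vol transport tcp \
    server1:/export/brick1 server2:/export/brick1

# grow the volume later by adding another brick
gluster volume add-brick dist-vol server3:/export/brick1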
Craig -
You can read from the back-end, but yes, we can't guarantee the timeliness or
consistency of that data. Most users limit that access to backups and
non-critical maintenance tasks.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
From: "Craig Box"
To: "Craig Carl"
Hi Craig,
> 2. You can read from the back end, all writes should go through the Gluster
> mount point.
This contradicts what I have read in the past. If a file is out of date on
the current node, you won't get the updated version, so you are always
supposed to read from the mount point. Right?
Hi Stephan!
Quoting (22.10.10 15:47):
> you are talking of the problem with identification field being only 16 bit,
> right?
Right. I had some pretty bad experiences around 2006, before I switched to TCP.
> We found this scenario to be far less severe than TCP busted by packet
> drops. In fact w…
Joshua -
Thanks for the great write-up! To answer Gart's questions -
1. Ideally you shouldn't have any data on the devices you use for new Gluster
volumes, we don't test that extensively. It will generally work; the self-heal
process will replicate & distribute the files as necessary, on first…
On 10/21/2010 08:55 PM, Horacio Sanson wrote:
This is also something I would like to know. When connecting clients I use the
command
mount -t [nfs|glusterfs] ip-address:/volname /mount/point
where ip-address is the IP of any of the servers that have the volume
configured. It is not clear to me how the relia…
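A sketch of the difference between the two mount types (the address and
volume name are placeholders):

# native client: the named server is only contacted to fetch the volume
# file; after that the client talks to all brick servers directly, so
# losing that one server does not take the mount down
mount -t glusterfs 192.168.1.10:/test-volume /mnt/gluster

# NFS: all traffic flows through the one server you mounted, so it is a
# single point of failure unless you put a floating/virtual IP in front
mount -t nfs -o vers=3 192.168.1.10:/test-volume /mnt/nfs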
On Fri, 22 Oct 2010 15:18:09 +0200
Beat Rubischon wrote:
> Hi Stephan!
>
> Quoting (22.10.10 15:05):
>
> > We never experienced any performance problem with NFS over UDP.
>
> Be careful when using NFSoUDP on recent networking hardware. It's simply too
> fast for the primitive reassembly algorithm in UDP. You will get silent
> data corruption.
Hi Stephan!
Quoting (22.10.10 15:05):
> We never experienced any performance problem with NFS over UDP.
Be careful when using NFSoUDP on recent networking hardware. It's simply too
fast for the primitive reassembly algorithm in UDP. You will get silent data
corruption.
SuSE warns about this fa…
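To put a number on "too fast" (a back-of-envelope illustration, not from the
original message): IP reassembly matches fragments by the 16-bit
identification field, so only 65,536 IDs exist. At 1 Gbit/s, 1,500-byte
packets arrive at roughly 83,000 per second (10^9 / 12,000 bits), so the ID
space wraps in under a second. A fragment delayed past the wrap can be
stitched into a later datagram that reuses its ID, and if the UDP checksum
happens to pass, the corruption is silent.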
Stephan -
I missed that in the FAQ, thanks! I'll forward your comments to the engineering
team that wrote gNFS and see what feedback I get.
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
From: "Stephan von Krawczynski"
To: "Craig Carl"
Cc: "Brent A Nelson" , gluster-users@gluster
I'd appreciate help with the following questions from Gart on my blog
(http://goo.gl/8eKn)
Thanks
Joshua
Thanks for this accessible, straightforward series. A couple of basic
questions about the way gluster works:
1) If I do this exercise -exactly- as you have done, but one of the
instances' EBS…
On Fri, 22 Oct 2010 04:46:44 -0500 (CDT)
Craig Carl wrote:
> [Resending due to incomplete response]
>
> Brent,
> Thanks for your feedback. To mount with a Solaris client use -
> ` mount -o proto=tcp,vers=3 nfs://:38467/ `
>
> As to UDP access, we want to force users to use TCP. Everything about
> Gluster is designed to be fast, as NFS over UDP approaches line spee…
On Fri, Oct 22, 2010 at 6:23 AM, Craig Carl wrote:
> Daniel -
>An idea and questions -
>
> 1. Are you fsync()'ing the directories after you create them? Could you try
> that?
> 2. Are you sure you are only accessing Gluster via the Gluster mountpoint?
> 3. What version of Gluster?
> 4. What OS, version and file system are you using?…
Hi all!
I'm using glusterfs 3.1.0 now. I have 10 nodes running gluster. I found a
problem here:
[r...@gluster-bak-1 /root]
#gluster volume create db-backup stripe 4 transport tcp gluster-bak-3:/data3
gluster-bak-4:/data4 gluster-bak-5:/data5 gluster-bak-6:/data6
Creation of volume db-backup h…
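The error is cut off above, but a common first check when a create fails (a
general suggestion, not a diagnosis of this particular report) is whether
every host named in the brick list is already a trusted peer:

# all brick hosts must be in the cluster before the create
gluster peer status

# probe any that are missing, then retry the create
gluster peer probe gluster-bak-5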
On Thursday 21 October 2010, Daniel Goolsby wrote:
> I seem to have some kind of caching issue. I have a process that will
> create hundreds of directories, then immediately spawn a parallel process
> across multiple nodes. The job ends up failing because some of the nodes
> cannot see the directories that the first process created…
You can upgrade to 3.1.1QA2 in place if you would rather not do a new install.
As root run:
`qa-mode --enable`
Then in the GUI go to the GSN tab and upgrade from there. You will need to
reboot manually.
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
From: "Craig Carl"
Daniel -
An idea and questions -
1. Are you fsync()'ing the directories after you create them? Could you try
that?
2. Are you sure you are only accessing Gluster via the Gluster mountpoint?
3. What version of Gluster?
4. What OS, version and file system are you using?
5. How many storage servers…
At this point I'm going to suggest that you move to the Gluster File System
version of Gluster. You will lose the web interface, but this version is far
more stable and complete.
http://download.gluster.com/pub/gluster/glusterfs/3.1/LATEST/
If you want to continue to test Storage Platform, engi…
Frederic -
What version of Gluster? How did you get Gluster running again?
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
From: frede...@placenet.org
To: gluster-users@gluster.org
Sent: Wednesday, October 20, 2010 3:29:21 AM
Subject: [Gluster-users] loop
hi
just fo…
Hi,
just for information,
I use glusterfs on Debian Squeeze.
I have 2 servers running the glusterfs server (replicate), and a mount with
the glusterfs client.
The glusterfs server uses /srv/glusterfs as its export directory.
I made the mistake of mounting the client on the same directory in t…
I seem to have some kind of caching issue. I have a process that will
create hundreds of directories, then immediately spawn a parallel process
across multiple nodes. The job ends up failing because some of the nodes
cannot see the directories that the first process created. If I wait a few
minutes…
Hi,
I'm trying to figure out a solution for a web cluster, where part of the
sources will be fixed (those can be handled by an rsync from the dev
environment) - and the rest will be user-submitted data (pictures and so on).
I wanted to avoid the NAS solution for various reasons (price, NFS
locking…
Unfortunately there are many more bugs and errors. :(
It is too bad that English is not my native language. I have much to say.
I will try as much as possible to simplify my thoughts.
1) After applying the patch the volume was deleted, but I can't create a new
volume any more. Is another patch needed?
2)…
I'm working on replacing my Ubuntu 8.04 desktops with Ubuntu 10.04, but
I've hit a snag. Automount hangs on glusterfs (tried 3.0.4 and 3.0.5) in
the same manner as described on the RedHat Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=603378
So, it's apparently a problem in Fedora, too.
[Resending due to incomplete response]
Brent,
Thanks for your feedback. To mount with a Solaris client use -
` mount -o proto=tcp,vers=3 nfs://:38467/ `
As to UDP access, we want to force users to use TCP. Everything about Gluster
is designed to be fast, as NFS over UDP approaches line spee…
Vlad -
I'm not sure why mounting/unmounting would resolve the error you are
getting; I have asked someone from engineering to get in touch. In the
meantime, can you upgrade to 3.0.6?
http://download.gluster.com/pub/gluster/glusterfs/3.0/LATEST/
Thanks,
Craig
- Original Message -