On Wed, 9 Jan 2008, Kevin Anderson wrote:
> If it is an MSA20, MSA30 or MSA500, they won't work with GFS. A shared
> SCSI bus isn't really shared: accesses lock the bus, so when one node
> accesses the storage the other node is locked out.
But only temporarily, surely. The filesystem should...
On Thu, 3 Jan 2008, Lon Hohberger wrote:
On Wed, 2008-01-02 at 17:35 -0500, James Chamberlain wrote:
Hi all,
I'm having some major stability problems with my three-node CS/GFS cluster.
Every two or three days, one of the nodes fences another, and I have to
hard-reboot the entire cluster to recover...
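As a first pass before digging into the fencing config, the standard CS tools give a quick picture of what each node thinks is going on (a minimal sketch; the log path assumes stock syslog):

    cman_tool status    # quorum state, expected vs. actual votes
    cman_tool nodes     # which members this node currently sees
    clustat             # service and member status from rgmanager
    grep -E 'fence|fenced' /var/log/messages    # who fenced whom, and when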
On Wed, 2008-01-09 at 16:59 -0500, Lon Hohberger wrote:
> On Wed, 2008-01-09 at 15:47 -0600, Kevin Anderson wrote:
> > Sorry, Lon gave me updated info about the MSA500. It isn't a parallel
> > shared SCSI bus configuration, so it might work with GFS. However, we
> > have never run with it before and are not sure about the performance
> > characteristics.
On Wed, 2008-01-09 at 15:47 -0600, Kevin Anderson wrote:
> Sorry, Lon gave me updated info about the MSA500. It isn't a parallel
> shared SCSI bus configuration, so it might work with GFS. However, we
> have never run with it before and are not sure about the performance
> characteristics.
It's a multi...
On Wed, 2008-01-09 at 15:04 +0100, Alain Moulle wrote:
> Hi
>
> Testing CS5 on a two-node cluster with a quorum disk: when I ran the ifdown
> test on the heartbeat interface, I got a segfault in the log:
> Jan 9 09:45:30 [EMAIL PROTECTED] openais[28300]: [TOTEM] entering
> OPERATIONAL state.
>
Sorry, Lon gave me updated info about the MSA500. It isn't a parallel
shared SCSI bus configuration, so it might work with GFS. However, we have
never run with it before and are not sure about the performance
characteristics.
Kevin
On Wed, 2008-01-09 at 12:56 -0800, Coman ILIUT wrote:
> We're using an MSA500 actually...
We're using an MSA500 actually, so what you're saying is that we're not using
the proper hardware for GFS.
Can you tell us how bad this is? The reason I'm asking is that we are already
on the second version of our product using this solution, and we did not have
any issues before. So we never...
On Tue, 2008-01-08 at 22:39 -0500, Charlie Brady wrote:
> On Tue, 8 Jan 2008, Gordan Bobic wrote:
>
> > Charlie Brady wrote:
> > > On Fri, 4 Jan 2008, Charlie Brady wrote:
> > >
> > >> I'm helping a colleague to collect information on an application lockup
> > >> problem on a two-node DLM/GFS cluster...
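When an application wedges on a DLM/GFS cluster, the usual first artifact to
collect is a lock dump from every node while the hang is in progress (a
sketch, assuming GFS1-era tools; the mount point is a placeholder):

    gfs_tool lockdump /mnt/gfs > /tmp/lockdump.$(hostname)   # per-node GFS lock state
    cman_tool services    # lockspaces and fence domain, as this node sees them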
You are right, that package was not installed.
So now I have installed the package and recompiled "fence", but "fence_scsi"
is still not there in /sbin/.
Any more ideas? (Thanks for the first hint.)
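One thing worth checking is whether the build produced the agent at all, and
whether the sg3_utils binary it wraps is on the PATH (a sketch; the checkout
path is an assumption):

    which sg_persist || echo "sg3_utils still missing"   # fence_scsi drives sg_persist
    find ~/cluster-source -name 'fence_scsi*'            # did the build generate it?
    # if it's in the tree but was never installed, copying it by hand works:
    # cp ~/cluster-source/fence/agents/scsi/fence_scsi /sbin/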
Alexandre Racine
Special Projects
514-461-1300 ext. 3304
[EMAIL PROTECTED]
-----Original Message-----
I believe you may not have the sg3_utils package installed. I'd check for
that first.
Thanks.
Abdel.
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Alexandre Racine
Sent: Wednesday, January 09, 2008 10:24 AM
To: linux clustering
Subject: scsi reserva...
Hi all,
I am currently using version 1.0.4 of GFS, and the SCSI reservation binaries
(scsi_reserve, fence_scsi, etc.) are not there. Is it supposed to be like
this, or is the distro I am using playing games with me (not my choice! It's
Gentoo)? If it's normal that they are not there, is there...
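For context, those binaries are thin wrappers around SCSI-3 persistent
reservations, which can be exercised directly with sg3_utils in the meantime
(a sketch; the device and key are placeholders):

    sg_persist --in --read-keys --device=/dev/sdb           # list registered keys
    sg_persist --out --register --param-sark=0x1 /dev/sdb   # register key 0x1
    sg_persist --in --read-reservation --device=/dev/sdb    # show the active reservation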
Hi
Testing CS5 on a two-node cluster with a quorum disk: when I ran the ifdown
test on the heartbeat interface, I got a segfault in the log:
Jan 9 09:45:16 [EMAIL PROTECTED] avahi-daemon[3106]: Interface eth0.IPv6 no
longer relevant for mDNS.
Jan 9 09:45:18 [EMAIL PROTECTED] qdiskd[28265]: H...
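For anyone reproducing this, the test boils down to dropping the heartbeat
link on one node and watching the daemons react on the survivor (a sketch;
the interface name is a placeholder):

    ifdown eth1    # on the node under test: kill the heartbeat interface
    tail -f /var/log/messages | grep -E 'qdiskd|openais|fenced'    # on the survivor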
Hi,
There's a fence_apc_snmp.py script available in the cluster code repository:
http://sources.redhat.com/cgi-bin/cvsweb.cgi/cluster/fence/agents/apc/?cvsroot=cluster
I tested it a little (I replaced /sbin/fence_apc with it; they both have
the same CLI parameters) and it seems to work where the f...
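Since the two agents take the same CLI parameters, the swap can be
smoke-tested by hand before touching cluster.conf (a sketch; the address,
credentials, and outlet are placeholders, and the flags are the stock
fence_apc ones, so verify against -h first):

    cp /sbin/fence_apc /sbin/fence_apc.orig    # keep the shipped agent around
    cp fence_apc_snmp.py /sbin/fence_apc
    /sbin/fence_apc -a 192.168.1.10 -l apc -p apc -n 1 -o status    # harmless status query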