On Wed, 2009-08-26 at 16:11 +0200, Jakov Sosic wrote:
> Hi.
>
> I have a situation - when two nodes are up in a 3-node cluster and one
> node goes down, the cluster loses quorum - although I'm using qdiskd...
>
> <quorumd ... label="SAS-qdisk" status_file="/tmp/qdisk"/>
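For comparison, here is a minimal qdisk setup that lets a single surviving node keep a 3-node cluster quorate (the label and status_file come from the message above; interval, tko, the vote counts and the heuristic are illustrative values only and need tuning for a real setup):

  <cman expected_votes="5"/>
  <quorumd interval="1" tko="10" votes="2" label="SAS-qdisk" status_file="/tmp/qdisk">
    <heuristic program="ping -c1 -w1 192.168.1.254" score="1" interval="2"/>
  </quorumd>

Giving the quorum disk 2 votes is what allows a lone node plus the qdisk to stay at or above the quorum threshold.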
On Wed, 2009-08-26 at 09:55 +0100, Mike Cardwell wrote:
> On 25/08/2009 20:40, Mike Cardwell wrote:
>
> > I figured that failover would happen more smoothly if the client was
> > aware of and in control of what was going on. If the IP suddenly moves
> > to another NFS server I don't know how the N
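For context, the floating-IP approach being discussed usually looks something like this in rgmanager's cluster.conf (every name, device and address below is made up for illustration; check the Cluster Suite docs for the exact resource attributes):

  <service name="nfs-svc" autostart="1" recovery="relocate">
    <ip address="192.168.1.100" monitor_link="1"/>
    <fs name="nfsdata" device="/dev/vg0/nfs" mountpoint="/export/data" fstype="ext3">
      <nfsexport name="exports">
        <nfsclient name="all" target="192.168.1.0/24" options="rw,sync"/>
      </nfsexport>
    </fs>
  </service>

Clients mount the floating address, so when the service relocates, the IP and the export move together; the concern above is how gracefully an NFS client copes with that move.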
Is there any single book that covers both virtualization with Xen and
clustering with Red Hat Cluster Suite, i.e. running an HA cluster for
virtual machines?
Thanks
Paras.
Hi all,
When will it be possible to use fence_vmware or fence_vmware_ng with
vSphere ESXi 4? Maybe in RHEL/CentOS 5.4?
Thanks.
--
CL Martinez
carlopmart {at} gmail {d0t} com
On 01/09/09 14:51, brem belguebli wrote:
Thanks
Will it be supported in the future ?
Yes it will. But I can't be sure about just when "the future" is in this
case, sorry!
Chrissie
2009/9/1, Christine Caulfield <ccaul...@redhat.com>:
On 01/09/09 08:53, brem belguebli wrote:
Thanks
Will it be supported in the future ?
2009/9/1, Christine Caulfield :
>
> On 01/09/09 08:53, brem belguebli wrote:
>
>> Hello Chrissie,
>> I couldn't find the item in the doc (the CMAN FAQ).
>> Brem
>>
>>
>>
> It's here:
>
> http://sources.redhat.com/cluster/wiki/MultiHome
>
> Chrissie
> There is good documentation at http://www.drbd.org/ - search for
> primary-primary mode, and make sure the replication channel is the same as
> the one used for cluster communication, to avoid split-brain and data
> corruption.
>
I'll check it, thanks
>
> > About the cluster - you don't need to def
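As a rough illustration of the primary-primary mode mentioned above (resource, host and device names are made up, and the split-brain policies shown are just one common choice - check the DRBD documentation before copying them):

  resource r0 {
    protocol C;                  # synchronous replication, needed for dual-primary
    net {
      allow-two-primaries;       # both nodes may hold the device in the Primary role
      after-sb-0pri discard-zero-changes;
      after-sb-1pri discard-secondary;
      after-sb-2pri disconnect;
    }
    on node1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
    on node2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
  }

Per the advice above, the 10.0.0.x replication link would be the same network the cluster uses for its own communication.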
On Tue, 1 Sep 2009 14:48:13 +0200, ESGLinux wrote
> 2009/9/1 Kaloyan Kovachev
> On Tue, 1 Sep 2009 14:21:47 +0200, ESGLinux wrote
>
> > You should use one iSCSI LUN shared by both cluster nodes. You can mount a
> GFS filesystem without locking (lock=nolock) with (cor
On 01/09/09 08:53, brem belguebli wrote:
Hello Chrissie,
I couldn't find the item in the doc (the CMAN FAQ).
Brem
It's here:
http://sources.redhat.com/cluster/wiki/MultiHome
Chrissie
2009/9/1 Kaloyan Kovachev
> On Tue, 1 Sep 2009 14:21:47 +0200, ESGLinux wrote
> > You should use one iSCSI LUN shared by both cluster nodes. You can mount a
> GFS filesystem without locking (lock=nolock) with (correct me if I am wrong)
> the node not being part of a cluster
On Tue, 1 Sep 2009 14:21:47 +0200, ESGLinux wrote
> You should use one iSCSI LUN shared by both cluster nodes. You can mount a
GFS filesystem without locking (lock=nolock) with (correct me if I am wrong)
the node not being part of a cluster, but only on one node at a time.
> You should use one iSCSI LUN shared by both cluster nodes. You can mount a
> GFS filesystem without locking (lock=nolock) with (correct me if I am wrong)
> the node not being part of a cluster, but only on one node at a time.
> You can mount a GFS filesystem created for a certain cluster without
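A quick sketch of what that looks like in practice (the device, mount point and cluster/filesystem names are placeholders):

  # create the filesystem for a 2-node cluster using DLM locking
  gfs_mkfs -p lock_dlm -t mycluster:cachefs -j 2 /dev/sdb1

  # normal clustered mount on a cluster member
  mount -t gfs /dev/sdb1 /cache

  # one-node-at-a-time mount from outside the cluster, overriding the lock protocol
  mount -t gfs -o lockproto=lock_nolock /dev/sdb1 /cache

As the quoted text says, the lock_nolock mount is only safe while no other node has the filesystem mounted.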
On Tue, Sep 1, 2009 at 1:05 PM, ESGLinux wrote:
>
>
> 2009/9/1 Juan Ramon Martin Blanco
>
>>
>>
>> On Tue, Sep 1, 2009 at 12:38 PM, ESGLinux wrote:
>>
>>> Hi All,
>>> First, sorry if this can be considered off-topic, but my first approach was
>>> using clustering for my problem, so I suppose you co
Am Dienstag, den 01.09.2009, 12:48 +0200 schrieb Jakov Sosic:
> On Tue, 01 Sep 2009 12:29:36 +0200
> "Marc - A. Dahlhaus [ Administration | Westermann GmbH ]"
> wrote:
>
> > It isn't misbehaving at all here.
> >
> > The job of RHCS in this case is to save your data against failure.
> >
> > If f
2009/9/1 Juan Ramon Martin Blanco
>
>
> On Tue, Sep 1, 2009 at 12:38 PM, ESGLinux wrote:
>
>> Hi All,
>> First, sorry if this can be considered off-topic, but my first approach was
>> using clustering for my problem, so I suppose you could have the same problem.
>>
>> I have 2 computers running JBos
On Tue, Sep 1, 2009 at 12:38 PM, ESGLinux wrote:
> Hi All,
> First, sorry if this can be considered off-topic, but my first approach was
> using clustering for my problem, so I suppose you could have the same problem.
>
> I have 2 computers running JBoss and I need to share a directory for the
> cach
On Tue, 01 Sep 2009 12:29:36 +0200
"Marc - A. Dahlhaus [ Administration | Westermann GmbH ]"
wrote:
> It isn't misbehaving at all here.
>
> The job of RHCS in this case is to save your data against failure.
>
> If fenced can't fence a node successfully, RHCS will wait in stalled
> mode (because
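For what it's worth, the usual escape hatches when fencing is stuck look roughly like this (command options from memory - verify against the man pages on your release):

  # retry the fence action by hand once the failed node is known to be powered off
  fence_node node2

  # or, with manual fencing, acknowledge that the node has been dealt with
  fence_ack_manual -n node2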
Hi All,
First, sorry if this can be considered off-topic, but my first approach was
using clustering for my problem, so I suppose you could have the same problem.
I have 2 computers running JBoss and I need to share a directory for the
cache (I use OSCache).
First I tried to use an NFS service on a Red
Am Dienstag, den 01.09.2009, 11:26 +0200 schrieb Jakov Sosic:
> On Mon, 31 Aug 2009 23:26:06 +0200
> "Marc - A. Dahlhaus" wrote:
>
> > I think your so-called 'limitation' is more related to mistakes that
> > were made during the planning phase of your cluster setup than to
> > missing functionality
Hi,
On Tue, 1 Sep 2009 11:26:48 +0200, Jakov Sosic wrote
> On Mon, 31 Aug 2009 23:26:06 +0200
> "Marc - A. Dahlhaus" wrote:
>
> > I think your so-called 'limitation' is more related to mistakes that
> > were made during the planning phase of your cluster setup than to
> > missing functionality.
>
On Mon, 31 Aug 2009 23:26:06 +0200
"Marc - A. Dahlhaus" wrote:
> I think your so-called 'limitation' is more related to mistakes that
> were made during the planning phase of your cluster setup than to
> missing functionality.
Yeah, and what might that mistake be? I'll take the liberty of quoting John:
> Th
On Mon, 31 Aug 2009 14:35:22 -0700
Rick Stevens wrote:
> On re-reading my response, it seemed unintentionally harsh. I didn't
> mean any disrespect, sir. I was simply questioning the concept that a
> reconfiguration of a cluster shouldn't be required when, indeed, the
> cluster was being reconfi
On Mon, 31 Aug 2009 14:22:07 -0700
Rick Stevens wrote:
> I don't see that there's anything to fix. You had a three-node
> cluster so you needed a majority of nodes up to maintain a quorum.
> One node died, killing quorum and thus stopping the cluster
Nope. Quorum is still there. I have 3 nodes
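To make the arithmetic explicit (assuming the default of one vote per node and no quorum disk): with 3 nodes, expected_votes = 3 and quorum = floor(3/2) + 1 = 2, so the cluster survives the loss of one node but not two. Giving a quorum disk 2 votes raises expected_votes to 5 and quorum to 3, so a single surviving node plus the qdisk (1 + 2 = 3) remains quorate.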
On Tue, 2009-09-01 at 11:08 +0200, Alain.Moulle wrote:
> Hi,
> I have this cman version :
> cman-3.0.0-15.rc1.fc11.x86_64
> is it possible to put the cluster.conf in a place other than /etc/cluster/,
> and if so, how can I tell cman?
> Thanks
> Alain
The exact same way I already explained
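If it helps, my understanding (and I may be misremembering the exact variable name, so treat this as an assumption to verify) is that the corosync-based cman 3.x config loader honours an environment variable override, roughly:

  # assumption: cman 3's XML config plugin reads this variable - check your docs
  export COROSYNC_CLUSTER_CONFIG_FILE=/some/other/path/cluster.conf
  service cman start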
Hi,
I have this cman version :
cman-3.0.0-15.rc1.fc11.x86_64
is it possible to put the cluster.conf in a place other than /etc/cluster/,
and if so, how can I tell cman?
Thanks
Alain
Hello Chrissie,
I couldn't find the item in the doc (the CMAN FAQ).
Brem
2009/9/1, Christine Caulfield :
>
> On 31/08/09 08:56, brem belguebli wrote:
>
>> Hi,
>> I was wondering if there is a way with cman to configure 2 heartbeat
>> channels (let's say my production bond0 and my out-of-band bond1) as
On 31/08/09 08:56, brem belguebli wrote:
Hi,
I was wondering if there is a way with cman to configure 2 heartbeat
channels (let's say my production bond0 and my out-of-band bond1), as seems to
be possible with openais and its redundant ring interface configuration.
Brem
Yes, there's an item about i
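For reference, the openais/corosync side of this looks roughly like the following (the subnets and the rrp_mode value are illustrative; how cman itself exposes a second ring is what the FAQ item covers):

  totem {
    version: 2
    rrp_mode: passive              # redundant ring mode (passive or active)
    interface {
      ringnumber: 0
      bindnetaddr: 192.168.10.0    # production network (bond0)
      mcastaddr: 239.192.1.1
      mcastport: 5405
    }
    interface {
      ringnumber: 1
      bindnetaddr: 192.168.20.0    # out-of-band network (bond1)
      mcastaddr: 239.192.2.1
      mcastport: 5405
    }
  }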
On 28/08/09 18:38, brem belguebli wrote:
Hi
the clusternodes defined in cluster.conf are:
node1.mydomain
node2.mydomain
which correspond to the bond0 interfaces on both nodes.
I expect to use node1-hb and node2-hb as heartbeat interfaces. (bond1)
I may have misunderstood something, but are yo