> > I tried this, but when I killed the switch for the cluster traffic,
> > BOTH nodes got fenced! Can I avoid this?
Hi,
I accidentally deleted the previous messages, so I am not sure whether you have
already sent your cluster.conf. Please resend it...
Jakub
Robert,
even for an HA cluster, you usually need a virtual IP to run your service.
This IP has the same problem as GFS: you can't know whether a failed node
is really down or just glitching, so you need fencing.
=> you need fencing every time.
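For illustration (all names and addresses below are made up, not taken from your config), the floating IP is just an <ip> resource inside the service, and the fence device is what makes it safe to move:

<rm>
  <service autostart="1" name="myservice">
    <ip address="192.168.1.100" monitor_link="1"/>
    <script file="/etc/init.d/myservice" name="myservice-init"/>
  </service>
</rm>
<!-- and, under <cluster>: any supported agent works, fence_ilo is just an example -->
<fencedevices>
  <fencedevice agent="fence_ilo" name="ilo1" hostname="ilo-node1" login="admin" passwd="secret"/>
</fencedevices>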
Best,
Jakub Suchy
> Yes, if you are
Two simple virtual machines will do the job! I even tried running VMware
machines as a cluster with Xen inside (= your setup); it works too, but it's
rather slow.
Cheers,
Jakub
Geoffrey wrote:
> I'm hoping to put together a simple two machine cluster with some kind
> of shared disk. This is sim
Hello,
this is a common problem which has come up in recent months in RHCS.
The usual solution is to let the nodes sort the problem out naturally:
after the node is killed, it is usually fenced and rejoins in an OK
state after a reboot. You only have a problem if you are using manual
fencing... Don't...
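(If you are stuck with manual fencing for now, remember the cluster will sit in "waiting for fencing" until someone acknowledges it by hand on a surviving node; the node name below is just a placeholder:)

# run only after you have verified the failed node is really powered off
fence_ack_manual -n node2.somewhere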
We use the 2530 with 2 servers and RHCS without a problem. I think the
limitations are just due to the achievable throughput of SAS.
Jakub
> I'm thinking of purchasing the 2530 to enable shared storage for a small
> cluster of Redhat Xen hosts.
>
> But, on this little blurb page from Redhat:
>
>
Hi,
> migrate="live" name="servicetest" path="/etc/xen/vm" recovery="restart"/>
>
Do you have the file "/etc/xen/vm/servicetest" - the Xen configuration file?
> xm shutdown servicetest ...
> Error: Domain 'servicetest' does not exist.
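A quick sanity check (paths exactly as in your config, nothing else assumed):

ls -l /etc/xen/vm/servicetest   # does the domU config file actually exist?
xm list                         # is the domain really named "servicetest"?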
Jakub
--
Jakub Suchý <[EMAIL PROTECTED]>
GSM: +420 - 777 8
Hi,
> |fencing can be a SPOF, you need two fencing devices).
>
> ILO is used on many sites successfully.
It is. You just have to be careful when using it.
> |- It's not a shared device. It MUST be on the same network path as
> | heartbeat.
> Not true... I have my ILO on my public network and
Hello,
> Does this mean that I can use this IPMI fencing to fence the cluster
> nodes? Does it provide functionality similar to using APC or HP
> power switches?
HP iLO is supported as a fencing device. If you are serious, you should
still consider power fencing (don't forget that fencing c
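Whichever agent you pick, test it by hand from each node before trusting it in cluster.conf. A rough example - the addresses and credentials are placeholders, and the exact option set depends on your release, so check the agent's man page:

# iLO via the dedicated agent
fence_ilo -a ilo-node2.example.com -l admin -p secret -o status

# or generic IPMI (iLO 2 and most server BMCs speak IPMI)
fence_ipmilan -a 10.0.0.12 -l admin -p secret -o status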
Hello,
I believe that this is normal and is caused by a recent update of cman to
cman-2.0.84-2.el5_2.2 - everything waits until fencing is completed.
However, your fencing seems misconfigured; this should be the correct
syntax for manual fencing:
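(Node and device names below are only placeholders - adjust them to your cluster:)

<clusternode name="node1.somewhere" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <device name="human" nodename="node1.somewhere"/>
    </method>
  </fence>
</clusternode>
<!-- and, under <cluster>: -->
<fencedevices>
  <fencedevice agent="fence_manual" name="human"/>
</fencedevices>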
Jakub
--
Jakub Suchý <
Leo Pleiman wrote:
>
> The kbase article can be found at
> http://kbase.redhat.com/faq/FAQ_51_11755.shtm
> It has a link to Cisco's web site enumerating 5 possible solutions.
> http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a008059a9df.shtml
Hello,
I am aware of th
e not linked through crossover cable.
If so, can you please contact me for further details? I would very much
appreciate the help.
Thank you,
Jakub Suchy
You may be having the same issue as I have:
[Linux-cluster] Node won't rejoin after reboot
Do you have a Red Hat support case open for it? If you do, can you send me
the issue number privately (!)? I would use it in our support case,
because the issues may be related.
Also:
- Is IGMP Snooping enab
Hello,
we are currently trying to track down a problem in our cluster setup. We
are having two problems, related to each other:
1) When doing failover, the surviving node reports "waiting for node to be
fenced" and no failover is done...
2) When the failing node rejoins the cluster, it is killed with a
mes
Hi,
are you sure that you have all hostnames set up properly?
/etc/hosts should say:
A.B.C.D node1.somewhere
A.B.C.E node2.somewhere
You should use node1.somewhere as the node name in your cluster.conf.
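For example, the matching cluster.conf entries would then look something like this (the nodeid and votes values are just placeholders):

<clusternode name="node1.somewhere" nodeid="1" votes="1"/>
<clusternode name="node2.somewhere" nodeid="2" votes="1"/>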
Jakub
> For the benefit of future googlers.
>
> I managed to get the cman service started without
Fencing may be failing because you have NO fencing devices :)
Jakub Suchy
Gian Paolo Buono wrote:
> Hi,
> I have a cluster configuration with two nodes... this is my cluster.conf:
>
> cluster.conf
>
>
>
Hello,
I am trying to solve a problem with a status script timing out.
We are using a custom init script for our service, which does some
operations in its status section. However, one failure mode when this
service goes down is that it hangs. Then the status script may hang too,
because it is waiti
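One workaround would be to run the real check in the background and kill it if it takes too long. A rough sketch for a SysV init script - the real_status_check function and the 20-second limit are made up for the example:

status() {
    real_status_check &                  # whatever the status section does today
    pid=$!
    for i in $(seq 1 20); do             # allow it roughly 20 seconds
        kill -0 $pid 2>/dev/null || { wait $pid; return $?; }
        sleep 1
    done
    kill -9 $pid 2>/dev/null             # hung - kill it and report failure
    return 1
}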
> Does not sound like you are having a fencing issue, but I can share our
> configuration / implementation and experiences with it.
>
> We have been using fencing configured for HP iLO and iLO2 for the better
> part of two years, with almost a full year in production now. It is slow (42+
> seconds per fe
(I am not talking about network bonding).
Thank you very much,
Jakub Suchy
laves as master. Every server has its own
server ID, and therefore it's possible to use master-master replication
(with the same limitations as MySQL master-master replication has) or
multimaster round replication. For more information, have a look at the
documentation.
-- cut --
Seems interes
Hi,
is it possible to run a virtual service on a cluster (Xen host) without
using GFS? I know I can create an ext3 partition, but it is not possible
to add a resource to a virtual service, so I can't attach the ext3
filesystem to it.
Thanks,
Jakub Suchy
>> 3) after clu2 is started again, VM is automatically migrated to clu1
>> according to logs and this fails.
>
> According to the logs the vm:win2003 was migrated to clu2 and not to clu1
> as you wrote.
Sure, this is my typo...
>> Does anybody know why this fails?
>
> What exactly do you want to
> It's relatively unlikely that an RS-232 port would supply enough current
> to drive a relay directly, but you might find a suitable alternative in
> an opto-isolator, and that would also have the advantage of not being
> inductive and thus it won't potentially put a surge onto your
> power-rails i
>> Well, here's a cheap 'out' ;)
>> Ebay item # 250213910258
>> 8 port WTI NPS for $70 + $15 s/h.
>
> Indeed, that would be tempting if it wasn't on the wrong side of the
> atlantic. A more local search for similar things doesn't seem to come up
> with anything. :-(
Another "cheap" power switch
> Indeed, that would be tempting if it wasn't on the wrong side of the
> atlantic. A more local search for similar things doesn't seem to come up
> with anything. :-(
Have you considered something like this?
http://www.oracle.com/technology/pub/articles/hunter_rac10gr2.html
Build Your Own Oracle
Hi,
I am currently designing a cluster of Xen machines with two nodes and
one shared storage array (SAS). We have a Recovery Point Objective of 15
minutes, so I am thinking about putting the Xen machine on an LVM partition
and making a snapshot of this partition every 15 minutes.
Does anybody know any caveats
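For what it's worth, the snapshot itself is a one-liner; the VG/LV names and size below are placeholders, and the snapshot is only crash-consistent unless the guest is quiesced or paused first:

# snapshot the guest's LV (reserve enough space for ~15 minutes of writes)
lvcreate --snapshot --size 2G --name vm1-snap /dev/vg_xen/vm1

# copy it off somewhere, then drop it again before the next cycle
lvremove -f /dev/vg_xen/vm1-snap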
> I get the message mount: fs type gfs not supported by kernel.
Hi,
did you load the kernel module "gfs"?
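For example (assuming RHEL 5 with the kmod-gfs package for the running kernel installed):

lsmod | grep gfs        # is the module loaded at all?
modprobe gfs            # load it; requires kmod-gfs matching the running kernel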
Jakub
--
Jakub Suchý <[EMAIL PROTECTED]>
GSM: +420 - 777 817 949
Enlogit s.r.o, U Cukrovaru 509/4, 400 07 Ústí nad Labem
tel.: +420 - 474 745 159, fax: +420 - 474 745 160
e-mail: [EMAIL PROT
Scott Becker wrote:
> Yes, slow at best. This list is better. Support is not the issue. I'm
> shooting for 100% uptime with simple failover. There are too many problems
> with the software and I'm a month behind schedule. The biggest problem is
> that the core system is malfunctioning. Smaller proble