Hi All,
The Xen resource agent uses 'xm destroy -w'.
But this command does not have a -w option.
I think it is a typo.
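A minimal sketch of the suspected fix (the RA path and exact line are assumptions, so the edit is demonstrated on a stand-in copy rather than the real agent):

```shell
# Stand-in for the line in the Xen RA that calls 'xm destroy -w'
printf 'xm destroy -w $DOMAIN_NAME\n' > /tmp/Xen_ra_snippet
# 'xm destroy' takes no -w flag; drop it (or switch to 'xm shutdown -w'
# if waiting for the domain to go away was the intent)
sed -i 's/xm destroy -w/xm destroy/' /tmp/Xen_ra_snippet
cat /tmp/Xen_ra_snippet
```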
Best Regards,
Hideo Yamauchi.
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/lis
Hi,
I need a pair of simple USB-controlled 110v-power devices.
Just to turn power on/off remotely under Windows XP. (Sorry, I can't get
them to use Ubuntu!)
Anyone have any leads?
Thanks guys.
-jim
Hi,
I'm new to Linux-HA. I've been able to test a 2-node setup running
drbd, lvm and ipaddr2 resources in R1 and CRM mode.
I'm using heartbeat+pacemaker from the binary repository on debian
etch.
I'm now trying this setup :
- 2 drbd nodes having ocf/{drbd,lvm,ipaddr2} resources
- 3 nodes with
2008/7/15 Lars Marowsky-Bree <[EMAIL PROTECTED]>:
> On 2008-07-14T08:05:52, Ciro Iriarte <[EMAIL PROTECTED]> wrote:
>
>> >> Checking the Xen agent I see that the configuration file is required.
>> >> Can't xen sync the configuration by itself at migration time
>> >> (xend-relocation-server)?, I cas
On Tue, Jul 15, 2008 at 1:26 PM, Chase Simms <[EMAIL PROTECTED]> wrote:
> OK. I thought pingd was used to test connectivity and could put a node
> in a degraded state.
>
> I thought suicide was valid because it was listed in the output from
> stonith -L.
>
> So, if pingd does not control killing a
OK. I thought pingd was used to test connectivity and could put a node
in a degraded state.
I thought suicide was valid because it was listed in the output from
stonith -L.
So, if pingd does not control killing a node, and STONITH does not
support suicide, how does a node know to shut down w
Michael Schwartzkopff wrote:
> Ehlers, Kolja wrote:
>> Hi all,
>>
>(...)
> is defined as enterprises 4682, which is .1.2.5.1.4.1. Putting all
Sorry, I pressed the wrong keys there. enterprises is, of
course, 1.3.6.1.4.1
(...)
Michael.
Ehlers, Kolja wrote:
> Hi all,
>
> while reading Michael's book I was wondering if there is a complete tree view
> available for the HA MIB?
Thanks. I hope you also bought it ;-)
> I have looked at the file itself but it did not
> help. How do I get from LHALiveNodeCount to .1.2.0? I guess snm
On Tue, Jul 15, 2008 at 11:40 AM, Chase Simms <[EMAIL PROTECTED]> wrote:
> If it is the link between locations, the server that is not located with
> the 3rd party address used by pingd would no longer be able to reach it.
pingd has nothing to do with STONITH. pingd can control where resource
shal
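Serge's point — that pingd influences resource placement rather than fencing — usually shows up in the CIB as a location rule keyed on the pingd attribute. A sketch in heartbeat 2.x CIB syntax (ids and the resource name are made up):

```xml
<rsc_location id="loc-need-connectivity" rsc="my_group">
  <rule id="loc-need-connectivity-rule" score="-INFINITY">
    <expression id="loc-need-connectivity-expr"
                attribute="pingd" operation="not_defined"/>
  </rule>
</rsc_location>
```

Nodes whose pingd attribute disappears (connectivity lost) simply become ineligible to run the resource; nothing gets killed.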
Chase Simms wrote:
> I have a cluster set up and working except STONITH. Which means it's
> unmanageable and not fault tolerant. I have multiple fibre connections
> between two geographically separated locations. I want to have one node
> at each location for disaster recovery. This means I c
Hi,
I sometimes see that heartbeat shutdown appears to hang. The following is
a log snippet showing this.
What is causing it?
There is the error message "We are still in a transition. Delaying until the
TE completes". Also from the logs I guess the shutdown request has come when
in state
Here is one more thing I notice:
I modified my haresources file to be this:
watchdog-client1 IPaddr::10.0.38.71/24/eth0 drbddisk::r0 Delay::3::0 filesystem
killnfs Delay::3::0 nfs nfslock
It looks like the filesystem and killnfs shell scripts get run twice during
heartbeat takeover.
Is some
If it is the link between locations, the server that is not located with
the 3rd party address used by pingd would no longer be able to reach it.
>>> "Serge Dubrouski" <[EMAIL PROTECTED]> 7/15/2008 11:34 AM >>>
On Tue, Jul 15, 2008 at 9:04 AM, Chase Simms <[EMAIL PROTECTED]>
wrote:
> I have a cl
I'm running the following versions of heartbeat:
[EMAIL PROTECTED] ~]# rpm -qa | grep heartbeat
heartbeat-pils-2.1.3-3.el5.centos
heartbeat-2.1.3-3.el5.centos
heartbeat-stonith-2.1.3-3.el5.centos
Also I see this in the logs regarding the file system being mounted twice:
ResourceManager[10758]: 200
On Tue, Jul 15, 2008 at 9:04 AM, Chase Simms <[EMAIL PROTECTED]> wrote:
> I have a cluster set up and working except STONITH. Which means it's
> unmanageable and not fault tolerant. I have multiple fibre connections
> between two geographically separated locations. I want to have one node
> at e
Hi all,
while reading Michael's book I was wondering if there is a complete tree view
available for the HA MIB? I have looked at the file itself but it did not
help. How do I get from LHALiveNodeCount to .1.2.0? I guess snmpwalk is used
for that but still it gives me
LINUX-HA-MIB::LHALiveNodeCount
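Putting the numbers from this thread together (the 1.2.0 suffix for LHALiveNodeCount is taken from the question above, not verified against the MIB file):

```shell
# enterprises = .1.3.6.1.4.1 (per Michael's correction elsewhere in the
# thread); Linux-HA's assigned enterprise number is 4682; the 1.2.0
# suffix for LHALiveNodeCount.0 is an assumption from the question above
ENTERPRISES=.1.3.6.1.4.1
LINUX_HA=4682
OID="${ENTERPRISES}.${LINUX_HA}.1.2.0"
echo "$OID"
```

With the MIB installed, net-snmp's `snmptranslate -On LINUX-HA-MIB::LHALiveNodeCount` should confirm the numeric form.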
I have a cluster set up and working except STONITH. Which means it's
unmanageable and not fault tolerant. I have multiple fibre connections
between two geographically separated locations. I want to have one node
at each location for disaster recovery. This means I cannot use a
cross-over or ser
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> ha.org] On behalf of Michael Schwartzkopff
> Sent: Tuesday, 15 July 2008 16:10
> To: General Linux-HA mailing list
> Subject: Re: [Linux-HA] stupid crm-verify /cibadmin -C error :(
>
> On Tuesday, 1
On Tuesday, 15 July 2008 at 15:41, Schmidt, Florian wrote:
> Hi everyone,
>
> I just wanted to create a new CIB, but got stuck at the very first
> resource
>
(...)
continued from my posting before:
If you really want to check a single resource export the actual CIB to a file,
edit this file,
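The round trip described above can be sketched as follows (guarded with `command -v` so the sketch is harmless off-cluster; the file name is arbitrary):

```shell
# Export the live CIB, edit it, then validate the complete file —
# crm_verify cannot check a single-resource fragment on its own.
CIB=/tmp/cib-work.xml
if command -v cibadmin >/dev/null 2>&1; then
  cibadmin -Q > "$CIB"        # 1. dump the live, complete CIB
  # 2. edit "$CIB" and add the new resource under <resources>
  crm_verify -V -x "$CIB"     # 3. validate the whole file, not a fragment
else
  echo "cibadmin not found; run this on a cluster node"
fi
```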
On Tuesday, 15 July 2008 at 15:41, Schmidt, Florian wrote:
> Hi everyone,
>
> I just wanted to create a new CIB, but got stuck at the very first
> resource
>
>
(...)
crm_verify can only check complete CIBs including all sections, not single
resources.
--
Dr. Michael Schwartzkopff
MultiNET S
Hi everyone,
I just wanted to create a new CIB, but got stuck at the very first
resource
crm_verify -V -x drbd_1_master_slave.xml
element master_slave: validity error : Element master_slave content does
not follow the DTD, expecting (meta_attributes | instance_attributes |
primitive | group
Hi,
On Tue, Jul 15, 2008 at 12:45:23PM +0100, Paul Walsh wrote:
> Lars Marowsky-Bree wrote:
>> On 2008-07-15T10:56:44, Paul Walsh <[EMAIL PROTECTED]> wrote:
>
>> That's lucky, but not guaranteed if the timing is wrong. Certainly you
>> can do that and not hit the monitor.
>
> :)
I'd propose to se
Lars Marowsky-Bree wrote:
On 2008-07-15T10:56:44, Paul Walsh <[EMAIL PROTECTED]> wrote:
That's lucky, but not guaranteed if the timing is wrong. Certainly you
can do that and not hit the monitor.
:)
CIB is attached for reference.
If you read the CIB, you will find that you have target
I changed the multicast IP in /etc/ha.d/ha.cf and things seem to be
fine now (don't see any more messages in /var/log/messages apart from
the info ones). Thanks for the help.
Bala
On Tue, Jul 15, 2008 at 4:11 PM, Lars Marowsky-Bree <[EMAIL PROTECTED]> wrote:
> On 2008-07-15T15:54:10, Bala <[EMAIL
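The fix Bala applied maps to two ha.cf directives. A sketch (values are examples; each cluster on the shared segment needs its own port or multicast group):

```
# /etc/ha.d/ha.cf
udpport 695                      # default is 694; pick a unique port per cluster
mcast eth0 239.0.0.43 695 1 0    # dev group port ttl loop — unique group per cluster
```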
On 2008-07-14T08:05:52, Ciro Iriarte <[EMAIL PROTECTED]> wrote:
> >> Checking the Xen agent I see that the configuration file is required.
> >> Can't xen sync the configuration by itself at migration time
> >> (xend-relocation-server)? In case this is false, I still need to
> >> export and sync th
On 2008-07-15T16:19:31, HIDEO YAMAUCHI <[EMAIL PROTECTED]> wrote:
> Hi Lars,
>
> I forgot to say a very important thing.
>
> The system I intend this for does not use STONITH.
> If STONITH is used, it does not become such a problem.
Systems without STONITH are not supportable anyway.
Regards,
On 2008-07-15T15:54:10, Bala <[EMAIL PROTECTED]> wrote:
> >Do you have several clusters on the same network segment? If so, you
> >should put them on different port numbers (udpport) or multicast
> >addresses.
> Actually, yes. I do have a different cluster on the same network.
> Thanks for the poi
On 2008-07-15T10:56:44, Paul Walsh <[EMAIL PROTECTED]> wrote:
>>> /usr/lib/ocf/resource.d/BCU/apache2 stop
>>> /usr/lib/ocf/resource.d/BCU/apache2 start
>> No, not if you're doing monitoring; the cluster will find out and
>> restart the group.
> The group, or just the resource? In theory, the scr
>How can that be true? You're reporting node names of RHEL5HA1 and the
>log message you show is from w2k8-src?
I have two other machines (w2k8-src and w2k8-tgt) pinging this node
and downloading some files via an FTP session. I have set up RHEL5HA1 and
RHEL5HA2 in an HA cluster so that if one of the m
Lars Marowsky-Bree wrote:
On 2008-07-15T08:13:25, Paul Walsh <[EMAIL PROTECTED]> wrote:
snip
(Stopping Apache would at least imply also stopping appsAlert, btw, as
the group has linear dependencies.)
Not a problem. appsAlert would send an email to say the resource was migrating away from t
I am out of the office until 08/01/2008.
Check with Pamela Eggler, Terry Sorrell, Jeff Hurst if you have questions.
Note: This is an automated response to your message Re: [Linux-HA] About
motion time of Watchdog. sent on 7/15/08 1:41:44 AM.
This is the only notification you will receive while
On 2008-07-15T08:13:25, Paul Walsh <[EMAIL PROTECTED]> wrote:
> I have the following resource group defined:
>
> Resource Group: Moodle
> web_dev (heartbeat:drbddisk): Started mercury
> mysql_dev (heartbeat:drbddisk): Started mercury
> weblog_dev(heartbeat:drbddisk): St
Hi Lars,
I forgot to say a very important thing.
The system I intend this for does not use STONITH.
If STONITH is used, it does not become such a problem.
I filed this request in Bugzilla.
(I note there that I do not have any problem if I use STONITH.)
Best Regards,
Hideo Yamauchi.
--- Lars
I have the following resource group defined:
Resource Group: Moodle
web_dev (heartbeat:drbddisk): Started mercury
mysql_dev (heartbeat:drbddisk): Started mercury
weblog_dev (heartbeat:drbddisk): Started mercury
web_fs (heartbeat::ocf:Filesystem):Started mercu
On 2008-07-14T19:25:30, Bala <[EMAIL PROTECTED]> wrote:
> Jul 12 01:27:43 w2k8-src heartbeat: [5424]: ERROR: MSG[3] : [protocol=1]
> My /etc/ha.d/ha.cf has the following two nodes defined in the HA
> configuration which match with the "uname -n" output as recommended: