Hi All,
I have recently set up a 2-node iSCSI fail-over array backed onto
shared SAS MD3000 storage.
I have everything (including RDAC) working fine on my Debian Etch
nodes - however I am curious if it is possible to get heartbeat to
demote itself if it loses access to the disks - I am
On 2007-12-19T21:19:48, Alan Robertson [EMAIL PROTECTED] wrote:
Dave, Dejan and (if possible) Lars:
I have put this patch into 'test'. PLEASE begin testing it at your
earliest convenience.
Is there any specific reason why you did not push it into dev?
Regards,
Lars
--
Teamlead
I'm a bit lost ... I couldn't install this package, because it's built for
etch, and I'm having conflicts. Is there anywhere I can get the source
of this build? I'll make my own package with it and keep you informed.
Thanks a lot for helping.
On Wednesday 19 December 2007 17:21:18
Hi everyone!
I have a question about switching resources.
The cluster consists of 2 nodes; one fails and the other one takes over.
Let's say the failed one comes up again: is there a way to prevent
the failed one from taking the resources over again?
I tried the default_resource_stickiness parameter,
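(Two common ways to keep resources from failing back, depending on how heartbeat is run; the exact ids below are illustrative, not from the poster's config. Without the CRM, ha.cf has a dedicated directive; with the v2 CRM, a very high default stickiness has the same effect.)

```
# /etc/ha.d/ha.cf -- heartbeat v1-style (no CRM):
# resources stay where they are when the failed node rejoins
auto_failback off
```

With the CRM enabled, roughly the equivalent is a cluster property in the CIB, e.g.:

```
<cluster_property_set id="cib-bootstrap-options">
  <attributes>
    <nvpair id="default-resource-stickiness"
            name="default_resource_stickiness" value="INFINITY"/>
  </attributes>
</cluster_property_set>
```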
On Thu, Dec 20, 2007 at 10:43:21AM +0100, Lars Marowsky-Bree wrote:
On 2007-12-19T21:19:48, Alan Robertson [EMAIL PROTECTED] wrote:
Dave, Dejan and (if possible) Lars:
I have put this patch into 'test'. PLEASE begin testing it at your
earliest convenience.
Is there any specific
Hi,
On Thu, Dec 20, 2007 at 05:48:10PM +0900, Trent Lloyd wrote:
Hi All,
I have recently set up a 2-node iSCSI fail-over array backed onto
shared SAS MD3000 storage.
How is this thing connected: is it iSCSI or SAS?
I have everything (including RDAC) working fine on my Debian Etch
Hi,
On Wed, Dec 19, 2007 at 06:39:44PM +0100, Giuseppe Castellucci wrote:
Hi guys, first of all sorry for my English if you find any errors.
I have a problem with heartbeat and ftp servers (both vsftpd, which i
prefer to use, and proftpd).
I have two servers (latest debian stable, with
Hi Dejan,
On 20/12/2007, at 7:50 PM, Dejan Muhamedagic wrote:
Hi,
On Thu, Dec 20, 2007 at 05:48:10PM +0900, Trent Lloyd wrote:
Hi All,
I have recently set up a 2-node iSCSI fail-over array backed onto
shared SAS MD3000 storage.
How is this thing connected: is it iSCSI or SAS?
Sorry that
Hi,
On Thu, Dec 20, 2007 at 11:47:55AM +0100, Johan Hoeke wrote:
LS,
Happy holidays everybody!
Am pleased to say I've gotten a cluster up and running, which goes to
show that even regular folks can have a go ;)
Haven't gotten the hbaping to work on x86_64, but I'll save that for
Hi Johan,
On 20/12/2007, at 7:47 PM, Johan Hoeke wrote:
LS,
Happy holidays everybody!
Am pleased to say I've gotten a cluster up and running, which goes to
show that even regular folks can have a go ;)
Haven't gotten the hbaping to work on x86_64, but I'll save that for
another day.
Check
Hi Johan,
On 20/12/2007, at 8:51 PM, Johan Hoeke wrote:
Trent Lloyd wrote:
Hi Johan,
On 20/12/2007, at 7:47 PM, Johan Hoeke wrote:
LS,
Happy holidays everybody!
Am pleased to say I've gotten a cluster up and running, which goes to
show that even regular folks can have a go ;)
Haven't
Trent Lloyd wrote:
Hi Johan,
On 20/12/2007, at 8:51 PM, Johan Hoeke wrote:
Trent Lloyd wrote:
Hi Johan,
On 20/12/2007, at 7:47 PM, Johan Hoeke wrote:
LS,
Happy holidays everybody!
Am pleased to say I've gotten a cluster up and running, which goes to
show that even regular folks can
Hi,
On Thu, Dec 20, 2007 at 08:15:12PM +0900, Trent Lloyd wrote:
Hi Dejan,
On 20/12/2007, at 7:50 PM, Dejan Muhamedagic wrote:
Hi,
On Thu, Dec 20, 2007 at 05:48:10PM +0900, Trent Lloyd wrote:
Hi All,
I have recently set up a 2-node iSCSI fail-over array backed onto
shared SAS MD3000
On Thu 12/20/2007 2:50 AM, [EMAIL PROTECTED] said:
Hi everyone!
I have a question about switching resources.
The cluster consists of 2 nodes; one fails and the other one takes over.
Let's say the failed one comes up again: is there a way to prevent
the failed one from taking the resources over again?
Hi Dejan,
On 20/12/2007, at 9:54 PM, Dejan Muhamedagic wrote:
Hi,
On Thu, Dec 20, 2007 at 08:15:12PM +0900, Trent Lloyd wrote:
Hi Dejan,
On 20/12/2007, at 7:50 PM, Dejan Muhamedagic wrote:
Hi,
On Thu, Dec 20, 2007 at 05:48:10PM +0900, Trent Lloyd wrote:
Hi All,
I have recently set up a
Hi,
On Thu, Dec 20, 2007 at 10:47:02PM +0900, Trent Lloyd wrote:
Hi Dejan,
On 20/12/2007, at 9:54 PM, Dejan Muhamedagic wrote:
Hi,
On Thu, Dec 20, 2007 at 08:15:12PM +0900, Trent Lloyd wrote:
Hi Dejan,
On 20/12/2007, at 7:50 PM, Dejan Muhamedagic wrote:
Hi,
On Thu, Dec 20, 2007 at
On Thu 12/20/2007 7:14 AM, Dejan Muhamedagic said:
Hi,
On Thu, Dec 20, 2007 at 08:31:16AM -0500, Scott Mann wrote:
On Thu 12/20/2007 2:50 AM, [EMAIL PROTECTED] said:
Hi everyone!
I have a question about switching resources.
The cluster consists of 2 nodes; one fails and the other one takes
Quoting Dejan Muhamedagic [EMAIL PROTECTED]:
Hi,
On Thu, Dec 20, 2007 at 08:31:16AM -0500, Scott Mann wrote:
On Thu 12/20/2007 2:50 AM, [EMAIL PROTECTED] said:
Hi everyone!
I have a question about switching resources.
The cluster consists of 2 nodes; one fails and the other one takes over.
Let's
Quoting Scott Mann [EMAIL PROTECTED]:
On Thu 12/20/2007 2:50 AM, [EMAIL PROTECTED] said:
Hi everyone!
I have a question about switching resources.
The cluster consists of 2 nodes; one fails and the other one takes over.
Let's say the failed one comes up again: is there a way to prevent
the
I saw the docs on linux-ha.org for using Heartbeat V2 with DRBD 7 in a
master / slave arrangement. It also says the configuration is not for
version 8 of DRBD. I have a few questions about this:
1) Is there a document that describes doing the same thing with DRBD 8?
2) What is the
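(For reference, the HowTov2-style setup drives DRBD through a v2 CIB master/slave resource; a sketch of that fragment is below. The resource name r0 and the ids are assumptions for illustration, and the agent parameters would need checking against the DRBD 8 agent actually shipped.)

```
<master_slave id="ms-drbd0">
  <instance_attributes id="ms-drbd0-ia">
    <attributes>
      <nvpair id="ms-drbd0-clone-max"  name="clone_max"  value="2"/>
      <nvpair id="ms-drbd0-master-max" name="master_max" value="1"/>
    </attributes>
  </instance_attributes>
  <primitive id="drbd0" class="ocf" provider="heartbeat" type="drbd">
    <instance_attributes id="drbd0-ia">
      <attributes>
        <nvpair id="drbd0-resource" name="drbd_resource" value="r0"/>
      </attributes>
    </instance_attributes>
  </primitive>
</master_slave>
```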
I'm using heartbeat 2 without crm.
I'm trying to figure out why the nagios check_heartbeat_link isn't
working. It's using cl_status but cl_status isn't working.
[EMAIL PROTECTED] contrib]# cl_status hbstatus
Heartbeat is running on this machine.
cl_status[5458]: 2007/12/20_11:49:29 ERROR:
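(For anyone debugging the same thing: check_heartbeat_link is a wrapper around the cl_status subcommands below, so running them by hand on a live node usually shows which call the plugin trips over. Node name node1 and link eth1 are placeholders; none of these can run without heartbeat up.)

```
cl_status hbstatus                  # is heartbeat running locally?
cl_status listnodes                 # nodes heartbeat knows about
cl_status nodestatus node1          # status of one node
cl_status listhblinks node1         # heartbeat links to that node
cl_status hblinkstatus node1 eth1   # status of one link
```

If the bare commands work as root but fail for the nagios user, the apiauth directives in ha.cf controlling client sign-on are worth a look.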
On Thu, Dec 20, 2007 at 11:55:47AM -0800, Rois Cannon wrote:
I'm using heartbeat 2 without crm.
I'm trying to figure out why the nagios check_heartbeat_link isn't
working. It's using cl_status but cl_status isn't working.
[EMAIL PROTECTED] contrib]# cl_status hbstatus
Heartbeat is
Hello,
we are using a two-node master/slave cluster with openSuSE 10.3,
heartbeat 2.0.7 and drbd 8.0.6.
I tried the configuration from this webpage:
http://www.linux-ha.org/DRBD/HowTov2
Dec 20 12:57:49 mylogin1 drbd[7119]: [7131]: DEBUG: : Calling
/sbin/drbdadm -c /etc/drbd.conf state
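(One thing to note: that HowTov2 agent issues the DRBD 0.7-era query, while DRBD 8 renamed it; the resource name r0 below is an assumption, and behaviour on 8.0.6 exactly would need verifying.)

```
# DRBD 0.7 style, as the agent calls it:
/sbin/drbdadm -c /etc/drbd.conf state r0
# DRBD 8.x equivalent of the same query:
/sbin/drbdadm -c /etc/drbd.conf role r0
```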