On 06/07/2010 08:59 AM, Lee Riemer wrote:
> The 35M means Megabytes, not bits. There's no way you will push 35M
> over that link.
>
> On 6/3/2010 9:10 AM, Gerry Kernan wrote:
>>
>> Hi
>>
>>
>>
>> I have 2 DRBD resources set up to run to a remote server via a 35Mb
>> wireless link. I am using p
As an update, I managed to get DRBD starting up on boot by using update-rc.d.
It is currently starting with the default priority (20), which is lower than
corosync's, ocfs2's, etc. My presumption is that this means that DRBD should
be starting before those services. Nevertheless, I continue t
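For what it's worth, the presumption about ordering is right: SysV init runs the S-links of a runlevel in lexical order, so a lower sequence number does mean an earlier start. A minimal sketch of that ordering, using a scratch directory with hypothetical link names standing in for /etc/rc2.d entries:

```shell
# SysV rc scripts run in lexical order of their S-links, so S20drbd
# starts before S30corosync and S65ocfs2. Simulated here with a scratch
# directory; the names are stand-ins for real /etc/rc2.d entries.
dir=$(mktemp -d)
touch "$dir/S20drbd" "$dir/S30corosync" "$dir/S65ocfs2"
ls "$dir" | sort | head -n 1   # first script to run
# To move drbd explicitly, re-register it with a chosen sequence:
#   update-rc.d -f drbd remove && update-rc.d drbd defaults 15
```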
Hello folks,
I'm noticing that DRBD is not starting automatically on boot on my Ubuntu 8.04
LTS Server installation. All was working fine with DRBD and Pacemaker /
Corosync until I restarted the two nodes. DRBD did not start up on boot, and
crm_mon is showing errors.
I was wondering how (1)
The reason will be in the message logs. What's dmesg show?
Dan, Atlanta
From: drbd-user-boun...@lists.linbit.com
[mailto:drbd-user-boun...@lists.linbit.com] On Behalf Of ch huang
Sent: Sunday, June 06, 2010 11:37 PM
To: drbd-user@lists.linbit.com
Subject: [DRBD-user] drbd nodes can not see
The 35M means Megabytes, not bits. There's no way you will push 35M
over that link.
On 6/3/2010 9:10 AM, Gerry Kernan wrote:
Hi
I have 2 DRBD resources set up to run to a remote server
via a 35Mb wireless link. I am using protocol A and a sync rate of 35M,
are there any o
Bart Coninckx wrote:
It looks quite OK to me. Nothing special in the Heartbeat log files at the
moment it fails over? What is in your ha.cf?
Here is my ha.cf:
debugfile /var/log/ha-debug
logfile /var/log/ha-log
autojoin none
logfacility local0
keepalive 1
deadtime 10
serial /dev/ttyS0
bcast
Hello,
First thing to check out is your firewall / iptables.
Put your firewall down on both nodes with 'service iptables stop' and
then do 'service drbd restart' on both nodes.
If that works, then your problem is that you have not mutually opened
the ports defined for each drbd resource in t
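The ports in question are the ones named in each resource's "address" lines. A small sketch of pulling them out of a config, using a hypothetical two-resource drbd.conf shown inline for illustration:

```shell
# Each "address host:port" line in drbd.conf names a TCP port that must
# be reachable from the peer. Sample config (hypothetical resources):
conf='resource r0 { on node1 { address 10.0.0.1:7788; } on node2 { address 10.0.0.2:7788; } }
resource r1 { on node1 { address 10.0.0.1:7789; } on node2 { address 10.0.0.2:7789; } }'
ports=$(echo "$conf" | grep -o ':[0-9]*;' | tr -d ':;' | sort -u)
echo "$ports"   # these ports need ACCEPT rules on both peers
# e.g. iptables -A INPUT -p tcp --dport 7788:7789 -j ACCEPT
```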
I have a simple DRBD mount with two nodes that the CRM promotes to Master on
failover. My problem occurs when I put one node to sleep and then wake that
failed node back up.
In this instance I end up with two Primary (Split Brain) nodes but it is my
understanding that Pacemaker tries to promote and use the
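If the goal is to have the cluster recover from such a wake-up without operator intervention, DRBD itself can be given split-brain recovery policies. A hedged sketch of the relevant net-section options (the resource name r0 is illustrative; note these policies can discard data on the "victim" node, so choose them deliberately):

```
resource r0 {
  net {
    # no primaries after split brain: keep the node that changed nothing
    after-sb-0pri discard-zero-changes;
    # one primary: drop the secondary's changes
    after-sb-1pri discard-secondary;
    # two primaries: refuse to auto-resolve, stay disconnected
    after-sb-2pri disconnect;
  }
}
```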
On Thu, Jun 03, 2010 at 02:03:50PM +0200, Rainer Klute wrote:
> Hi,
>
> I am new to DRBD, just tried to setup my first sample cluster and
> stumbled over the first problem. I'd like to specify the hosts my
> resources reside on by hostnames rather than by IP addresses. The reason
> is that I want
Hi list,
this note to let you know we added the drbd support to OpenSVC
clustering tool.
DRBD devices are treated as disk resources by OpenSVC: they implement
the start, stop and startstandby commands.
o Start sets up the disk attachment and the connection, and promotes the
role to primary.
o Stop uncon
I start the 2 DRBD nodes, but from the output of the DRBD status it seems the
nodes cannot see each other. I do not know why.
[r...@prim ~]# cat /proc/drbd
version: 8.3.2 (api:88/proto:86-90)
GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by
mockbu...@v20z-x86-64.home.local, 2009-08-29 14:02:24
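When the peers cannot reach each other, the "cs:" field of each device line in /proc/drbd usually reads WFConnection on both sides ("waiting for the peer to connect"), which points at the network or firewall rather than at DRBD itself. A small sketch of pulling that field out; the sample line below is illustrative, not taken from the poster's output:

```shell
# The connection state is the "cs:" field of each device line in
# /proc/drbd; WFConnection means the node is waiting for its peer.
# Sample line for illustration only:
sample=' 0: cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown A r----'
echo "$sample" | grep -o 'cs:[A-Za-z]*'
```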
Hi
I have 2 DRBD resources set up to run to a remote server via a 35Mb wireless
link. I am using protocol A and a sync rate of 35M; are there any other
settings I can use to try and improve performance? Since turning on DRBD,
users on the system are complaining that processes are taking longer to ru
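As noted elsewhere in the thread, the syncer rate is in bytes per second, so "rate 35M" asks for roughly 35 MB/s, about eight times what a 35 Mbit/s radio link can carry. A back-of-the-envelope sketch (the 30% figure is a commonly cited rule of thumb for leaving headroom for application traffic, not a hard requirement):

```shell
# DRBD's "rate" is bytes/s, so rate 35M requests ~35 MB/s of resync
# bandwidth. A 35 Mbit/s link carries far less:
link_mbit=35
link_mbyte_x10=$(( link_mbit * 10 / 8 ))   # tenths of MB/s, raw capacity
echo "link carries roughly $((link_mbyte_x10 / 10)).$((link_mbyte_x10 % 10)) MB/s"
# A common rule of thumb is a syncer rate around 30% of usable
# bandwidth, i.e. on the order of:  syncer { rate 1M; }
```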
DRBD is not creating the /dev/drbd0 device. Everything else seems to be
happy. I went through the usual steps setting everything up and there
were no apparent errors. I have included my drbd.conf file, the contents
of /proc/drbd for both systems and /var/log/messages from either system.
Any ideas w
Hi,
I am new to DRBD, just tried to setup my first sample cluster and
stumbled over the first problem. I'd like to specify the hosts my
resources reside on by hostnames rather than by IP addresses. The reason
is that I want to setup a small cluster over a wide-area network (WAN)
and the hosts invo
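As far as I know, the `on <name>` sections of drbd.conf already take hostnames, but each name must match the output of `uname -n` on that node; the `address` statement has traditionally expected a literal IP, so that is the safe choice over a WAN even if your drbdadm version happens to resolve names there. A hedged sketch (hostnames alpha/bravo, devices, and addresses are all illustrative):

```
resource r0 {
  on alpha {                      # must match `uname -n` on that host
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.0.2.10:7788;    # literal IP is the safe choice here
    meta-disk internal;
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.0.2.20:7788;
    meta-disk internal;
  }
}
```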
On Monday 07 June 2010 08:41:42 Florian Haas wrote:
> On 06/06/2010 12:18 PM, Olivier Le Cam wrote:
> > Bart Coninckx wrote:
> >> Sounds like a Heartbeat issue. Heartbeat should not make the node
> >> primary before the sync has finished.
>
> That assumption is incorrect for this configuration.
>