On 2018-10-18 04:07, Bryan K. Walton wrote:
Hi,
I'm trying to configure a two-node cluster, where each node has
dedicated redundant nics:
storage node 1 has two private IPs:
10.40.1.3
10.40.2.2
storage node 2 has two private IPs:
10.40.1.2
10.40.2.3
I'd like to configure the resource so that the nodes have two possible
paths to the …
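For what it's worth, DRBD 9 lets you declare two redundant paths inside a
single connection, which matches the dual-NIC layout above. A sketch using
the addresses from this thread (the hostnames storage1/storage2 and port
7789 are placeholders, not from the original post):

    resource r0 {
      connection {
        path {
          host storage1 address 10.40.1.3:7789;
          host storage2 address 10.40.1.2:7789;
        }
        path {
          host storage1 address 10.40.2.2:7789;
          host storage2 address 10.40.2.3:7789;
        }
      }
    }

DRBD uses one path at a time and switches to the other if the active link
goes down.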
It turned out that the NFS daemon was blocking DRBD.
Thanks, the comment about the 'drbd' kernel processes was helpful.
BTW, the documentation (man pages) for DRBD 9.0 is still from 8.4 and some
options are no longer there.
On Thu, Oct 11, 2018 at 9:48 AM, Radoslaw Garbacz wrote:
On 2018-10-17 5:35 a.m., Adam Weremczuk wrote:
> Hi all,
>
> Yesterday I rebooted both nodes a couple of times (replacing BBU RAID
> batteries) and ended up with:
>
> drbd0: Split-Brain detected but unresolved, dropping connection!
Fencing prevents this.
> on both.
>
> node1: drbd-overview
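Regarding "Fencing prevents this" above: a sketch of the fencing hooks for
DRBD 8.4 under Pacemaker (the handler paths are the stock scripts shipped
with drbd-utils; the resource name r0 is taken from the overview output):

    resource r0 {
      disk {
        fencing resource-and-stonith;
      }
      handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }

With this in place a node that loses its peer asks the cluster manager to
fence it instead of silently going StandAlone on both sides.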
Just a quick note ..
In the DRBD documentation, it is stated that:
"When multiple DRBD resources share a single replication/synchronization
network, synchronization with a fixed rate may not be an …"
Hi all,
Yesterday I rebooted both nodes a couple of times (replacing BBU RAID
batteries) and ended up with:
drbd0: Split-Brain detected but unresolved, dropping connection!
on both.
node1: drbd-overview
  0:r0/0  StandAlone Primary/Unknown UpToDate/DUnknown  /srv/test1 ext4
  3.6T 75G 3.4T
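For reference, the usual manual recovery from the split-brain shown above
is to pick a victim node whose changes get discarded (node2 is assumed here
as the victim; the resource name r0 matches the output):

    # On the victim node (node2), drop its version of the data:
    drbdadm disconnect r0
    drbdadm secondary r0
    drbdadm connect --discard-my-data r0

    # On the survivor (node1), if it is StandAlone as well:
    drbdadm connect r0

The victim then resyncs from the survivor; any writes it took during the
split-brain are lost, so choose the victim carefully.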
You are correct, it shouldn't be required (v8.9.10) and I was surprised
by that too.
Further evidence that the option is being honored is the "want: 150,000 k/sec"
figure, which I sometimes (not always) see in /proc/drbd.
On 17/10/18 10:17, Oleksiy Evin wrote:
If I'm not mistaken, the "syncer" section was deprecated somewhere around
DRBD version 8.4.0. Based on the logs you provided, the version you are
using is 8.4.10, so I don't think that should have any speed impact. But
I'm glad you've got it resolved.
OE
-----Original Message-----
From: Adam
"max-buffers 8k" appears to be the sweet spot for me.
I'm now getting 145-150 MB/s transfer rates between nodes, which I'm
happy with.
The biggest problem was that I didn't have a "syncer" section defined at all.
Currently my fully working and well-behaved config looks like this:
global { usage-count no;
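Since the message above is cut off after "global { usage-count no;", here is
a sketch of what the tuning sections under discussion might look like in an
8.x-era drbd.conf (the 8000 buffers and 150M rate mirror the values mentioned
in this thread; everything else is illustrative):

    common {
      net {
        max-buffers    8000;   # the "max-buffers 8k" sweet spot noted above
        max-epoch-size 8000;
      }
      syncer {
        rate 150M;             # cap background resync bandwidth
      }
    }

Note that from 8.4 on the "syncer" section is deprecated (as pointed out
elsewhere in this thread); its "rate" option lives on as "resync-rate" in
the "disk" section.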
Hi,
the list is shorter than with the last releases. I think this is good news.
What really made us release now is fixing the regression introduced with
9.0.15. It was probably not triggered by many parties, because you can only
trigger it if you have requests in flight in exactly the …