Hello all,
I have a 3-node synchronization issue when attempting to use a T1, and I don't know
whether it is the result of stacked resources or something else.
Our offsite connection destroys our network throughput, but only when we try to
synchronize to the offsite box at normal T1 connection speed. Understandably a T1
cannot keep up during the day, but I expected it to eventually catch up after hours
at night.
We run 2 ESXi 4.0 servers with 8 MS servers and storage sitting on 1.5 TB. With
onsite bandwidth everything works better than I could have ever hoped: a clean
copy of XP boots in less than 15 seconds, and virtual Terminal Servers are snappy
even sitting on stacked DRBD resources. The block-level changes usually amount to
some 6 GB daily, which should fit within a T1's 12-hour x ~695 MB/hour overnight
window (rough math below).

We have a 3-node setup using stacked DRBD 8.3.2. The primary is a 64-bit Openfiler
2.3 Xeon server with 32 GB of RAM and SAS drives, mirroring with protocol C to a
second 64-bit Openfiler 2.3 box (dual-core P4, 4 GB RAM) on software RAID. The
third leg is a QNAP 509 Pro converted to 64-bit Openfiler 2.3, connected with
protocol A over an OpenVPN tunnel. OpenVPN usually does an excellent job for me
with encryption and compression. Everything runs perfectly onsite with OpenVPN
wide open and no shaper settings; virtual servers can be booted and run quite
nicely even off the little QNAP box during disaster recovery tests, truly a
beautiful thing.
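
For what it's worth, the back-of-the-envelope math behind that ~695 MB/hour figure
(rough numbers, ignoring OpenVPN and TCP overhead) is just:

# rough T1 window check, ignoring tunnel/TCP overhead
echo "1544000 / 8" | bc               # ~193000 bytes/s on a 1.544 Mbit/s T1
echo "193000 * 3600 / 1000000" | bc   # ~694 MB transferred per hour
echo "694 * 12" | bc                  # ~8.3 GB per 12-hour night vs ~6 GB of changes
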
However, when I moved the offsite box to a remote office over a T1, the network
throughput from the main SAN server went into the dumpster. Shutting down the
offsite DRBD service or killing the connection to the offsite box immediately
brings everything back up to speed.
So far I have tried every conceivable bandwidth setting with no luck. For now I
have the offsite box back in-house, but any attempt to get even close to
simulating T1 speeds over the tunnel consistently brings the network to its
knees. On my last attempt I set the DRBD sync rate on the upper (stacked)
resource to 56K (roughly a third of the ~175 KB/s a T1 carries) and the OpenVPN
shaper to 175000 bytes/s, which should approximate a normal T1. The only thing
that seems to help is cranking the bandwidth back up. Why does a low-bandwidth
synchronization destroy the throughput on the network as soon as things go
inconsistent? Shouldn't DRBD be able to just trickle data over a slow connection?
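
Concretely, that last attempt amounted to the following (the device path is from
my config below; as I understand it, the OpenVPN shaper value is in bytes per
second and only limits outgoing traffic on each side):

# cap the resync rate on the stacked (upper) resource to 56 KB/s
drbdsetup /dev/drbd3 syncer -r 56K

# in the OpenVPN config on both ends: ~175000 bytes/s, roughly a T1
shaper 175000
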
If I have all 3 machines synced, lower the sync rate with drbdsetup
/dev/drbd3 syncer -r 56K, and then lower the bandwidth on the tunnel to T1
speed, everything is OK until DRBD reports the third leg as Inconsistent.
Inconsistent is of course expected here, since I just did something to make it
that way, such as defragging a drive or whatever. Then, boom, the network
throughput is in the dirt again, until I break the connection to the third leg
and everything pops back up and dusts itself off like nothing happened. If I
increase the bandwidth back and let it resync, it works perfectly and again
couldn't care less. There must be some programming issue here, or there has to
be a way to tune this situation. Surely this kind of DRBD setup should be able
to function over a T1-speed connection with protocol A. I was hoping 8.3.2
would do better than 8.3.1, but it made no improvement. If you say I don't have
enough bandwidth, then for argument's sake suppose I did have more bandwidth
and the speed on the connection dropped temporarily; we all know how the
telephone companies are. Would DRBD bring the network down because it could
not catch up? This just cannot be right.
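
The recovery step is always the same. Once /proc/drbd shows the third leg
Inconsistent and throughput collapses, cutting that one connection brings
everything back immediately, e.g.:

# watch the stacked resource state
watch cat /proc/drbd

# drop the connection to the offsite leg; local throughput recovers at once
drbdsetup /dev/drbd3 disconnect
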
Any thoughts would be most appreciated. I am new to the list and tried to search
to see whether this subject had already been addressed. My configuration is below.
global {
# minor-count 64;
# dialog-refresh 5; # 5 seconds
# disable-ip-verification;
usage-count ask;
}
common {
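# full-speed default for the on-site resource; the stacked data-upper
# resource overrides this with its own syncer rate below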
syncer { rate 100M; }
}
resource data-lower
{
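# synchronous mirroring between the two on-site Openfiler boxes (sas and giga)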
protocol C;
startup {
wfc-timeout 0; ## Infinite!
degr-wfc-timeout 120; ## 2 minutes.
}
disk {
on-io-error detach;
}
net {
# timeout 60;
# connect-int 10;
# ping-int 10;
# max-buffers 2048;
# max-epoch-size 2048;
}
syncer {
}
on sas {
device /dev/drbd0;
disk /dev/volgrp/mirror;
address 10.10.10.112:7789;
meta-disk internal;
}
on giga {
device /dev/drbd0;
disk /dev/volgrp/mirror;
address 10.10.10.111:7789;
meta-disk internal;
}
}
resource data-upper
{
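# asynchronous replication to the offsite leg over the OpenVPN tunnel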
protocol A;
syncer
{
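# serialize resync: start only after data-lower has finished resyncing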
after data-lower;
rate 56K;
al-extents 513;
}
net {
shared-secret "LINBIT";
}
stacked-on-top-of data-lower
{
device /dev/drbd3;
address 192.168.100.1:7788;
}
on offsite
{
device /dev/drbd3;
disk /dev/volgrp/mirror;
address 192.168.100.2:7788; # Public IP of the backup node
meta-disk internal;
}
}
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user