Re: [DRBD-user] Response to Mr. Ellenberg's answer to: "Warning: If using drbd; data-loss / corruption is possible; [...]"

2017-08-17 Thread Lars Ellenberg
On Wed, Aug 16, 2017 at 07:05:41PM +0200, pub...@thomas-r-bruecker.ch wrote:
> Dear Mr. Ellenberg,
> 
> I thank you for the profound explanation of the issues of my settings.
> 
> Ok, at the moment the scope in which I am using this configuration is merely
> experimental, and I use it as a replacement for transferring data by "rsync"
> (with "rsync", I had to wait about 6 h until my data was synchronized;
> with drbd, about 20 min to 1 h; so 'drbd is much better' anyway.).


> Because you advise me against using the drbd device as I intended, I have
> to discuss with my boss whether we at all ...; so I am taking the liberty
> of sending a copy of this mail to my boss (blind carbon copy), and attaching
> "your response to my former letter" to this mail.

You may want to forget about "ahead",
and just do a scheduled "connect ; wait sync; disconnect",
if that is really what you want.

Given your context knowledge,
you can even do that during "convenient" times.
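
Roughly, such a scheduled cycle could look like the sketch below (the
resource name is a placeholder, and polling the states is just one way
to wait for the resync to finish):

    #!/bin/sh
    # run from cron at a "convenient" time; r0 is a placeholder resource name
    RES=r0

    drbdadm connect "$RES"

    # wait until the resync is done and both sides are UpToDate again
    until drbdadm cstate "$RES" | grep -q '^Connected$' &&
          drbdadm dstate "$RES" | grep -q '^UpToDate/UpToDate$'; do
        sleep 10
    done

    drbdadm disconnect "$RES"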


Actually, what we (and others) have done in production for a long time:
have a drbd 8.4 "protocol C" production cluster for on-site redundancy,
with stacked "protocol A" replication to another two-node off-site cluster.
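
A minimal configuration sketch of such a stacking (host names, devices
and addresses are placeholders, not our actual setup):

    resource r0 {                   # on-site pair, protocol C
      protocol  C;
      device    /dev/drbd0;
      disk      /dev/sda6;
      meta-disk internal;
      on alice { address 10.0.0.1:7788; }
      on bob   { address 10.0.0.2:7788; }
    }

    resource r0-U {                 # stacked off-site replication, protocol A
      protocol A;
      stacked-on-top-of r0 {
        device  /dev/drbd10;
        address 192.168.42.1:7788;
      }
      on charlie {
        device    /dev/drbd10;
        disk      /dev/sdb6;
        address   192.168.42.2:7788;
        meta-disk internal;
      }
    }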

One of those would be scheduled to disconnect,
take a new snapshot and delete old snapshots as per retention policies
(using lvm thin in this case, but technology does not matter).
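
As a sketch (volume group, LV names and retention count are made up for
illustration):

    #!/bin/sh
    RES=r0-U          # placeholder resource name
    VG=vg0            # volume group holding the thin pool
    LV=backing_lv     # thin LV backing the DRBD device
    KEEP=7            # number of snapshots to keep

    drbdadm disconnect "$RES"

    # new thin snapshot, named by date
    lvcreate -s -n "${LV}_snap_$(date +%Y%m%d)" "${VG}/${LV}"

    # drop snapshots beyond the retention window
    # (date-based names sort chronologically)
    for OLD in $(lvs --noheadings -o lv_name "$VG" \
                 | grep -o "${LV}_snap_[0-9]*" | sort | head -n -"$KEEP"); do
        lvremove -y "${VG}/${OLD}"
    done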

We'd then use these snapshots as base images for point-in-time recovery
of databases, or to (test) drive reports, or to test changes,
with a data set that is as close to production as possible.

Perfectly good use case for DRBD.

It's just that the "ahead/behind" stuff is only really useful with a "huge"
buffer, and even then it should be considered an emergency brake.
It is NOT a way to "beam my TiB/s change rate over a GSM link without
production impact".
It is not meant to be "normal processing for every request", either.
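
For reference, the ahead/behind mode is switched on by the congestion
policy in the net section, roughly like this (the thresholds are purely
illustrative, and protocol A is required):

    resource r0 {
      net {
        protocol           A;
        on-congestion      pull-ahead;   # go "Ahead" instead of blocking
        congestion-fill    1G;           # once this much data is in flight
        congestion-extents 1000;         # or this many AL extents are hot
        sndbuf-size        1G;           # the "huge" buffer this relies on
      }
    }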


And yes, we obviously have bugs in that ahead/behind code,
and you hit (some of?) them. But when it is used "in spec",
they don't trigger (or we'd have a lot of angry customers).


That being said,
again,

> > If you want snapshot shipping,
> > use a system designed for snapshot shipping.
> > DRBD is not.

Cheers,

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] DRBD over ZFS - or the other way around?

2017-08-17 Thread Gionatan Danti

Hi list,
I am discussing how to have a replicated ZFS setup on the ZoL mailing 
list, and DRBD is obviously on the radar ;)


It seems that three possibilities exist:

a) DRBD over ZVOLs (with one DRBD resource per ZVOL);
b) ZFS over DRBD over the raw disks (with one DRBD resource per disk);
c) ZFS over DRBD over a single huge and sparse ZVOL (see for an example: 
http://v-optimal.nl/index.php/2016/02/04/ha-zfs/)


Which option do you feel is the better one? On the ZoL list there seems to 
be a preference for option b: create a DRBD resource for each disk 
and let ZFS manage the DRBD devices.
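
To make option b concrete, a rough sketch (resource, device and pool names
are made up; it assumes one DRBD resource per physical disk is already
defined in the configuration):

    #!/bin/sh
    # bring up one DRBD resource per raw disk,
    # exposing /dev/drbd0 .. /dev/drbd3
    for R in disk0 disk1 disk2 disk3; do
        drbdadm create-md "$R"
        drbdadm up "$R"
        drbdadm primary --force "$R"   # initial sync source: this node
    done

    # on the primary, let ZFS handle redundancy/striping
    # across the replicated devices
    zpool create tank mirror /dev/drbd0 /dev/drbd1 mirror /dev/drbd2 /dev/drbd3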


Any thoughts on that?
Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user