Hi Ondrej,
yes, this is perfectly normal in single-primary environments. DRBD simply does
not permit access to the resource's block device until the node is promoted to
primary. What you describe would only work in a dual-primary environment, but
running such an environment also requires a lot more precautions.
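To make this concrete, here is a minimal sketch of the single-primary workflow
from the shell. It assumes a resource named r0 (adapt to your setup) together
with the device and mount point from the original mail; note that
"drbdadm status" is DRBD 9 syntax (on 8.4, check /proc/drbd instead):

```shell
# On the secondary, the device cannot be opened until the node is promoted:
drbdadm status r0                 # shows role:Secondary
mount -o ro /dev/drbd0 /brick1    # fails with "Wrong medium type"

# Promote this node (in a single-primary setup, make sure the peer
# is Secondary first), then the mount works:
drbdadm primary r0
mount /dev/drbd0 /brick1

# When done, unmount and demote again:
umount /brick1
drbdadm secondary r0
```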
On Thu, Mar 15, 2018 at 10:55:58AM +, Ondrej Valousek wrote:
>
> > That is perfectly expected behavior. Imagine a resync from the primary
> > that, for some time, makes the secondary inconsistent (see what Lars
> > already told you). Would not make sense to mount that
> > one... Error codes are limited, "Wrong medium type" is the one that makes
> > most sense.
On Thu, Mar 15, 2018 at 10:21:49AM +, Ondrej Valousek wrote:
Hi list,
When trying to mount the filesystem on the slave node (read-only, I do not want
to crash the filesystem), I am receiving:
mount: mount /dev/drbd0 on /brick1 failed: Wrong medium type
Is it normal? AFAIK it should be OK to mount the filesystem read-only on the
slave node.
Thanks,
Ondrej
Hi,
Thanks for the explanation.
So for example, let's have 2 nodes in different geo locations (say, for
disaster recovery), and let's use protocol A so things go fast for the 1st node
(the primary).
But we have a lot of data to resync, say 10 TB, and the link is slow, so it
might take a few days for
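For a scenario like this, the resource definition would select protocol A. A
sketch in DRBD 8.4-style configuration syntax; the resource name, host names,
devices, and addresses below are made up for illustration:

```
# /etc/drbd.d/r0.res -- illustrative sketch only
resource r0 {
    net {
        protocol A;     # asynchronous: the primary considers a write complete
                        # once it is on the local disk and in the TCP send
                        # buffer, so local latency is not bound to the link
    }
    device    /dev/drbd0;
    disk      /dev/sdb1;        # backing device (assumption)
    meta-disk internal;

    on site-a {
        address 192.0.2.10:7789;
    }
    on site-b {
        address 198.51.100.20:7789;
    }
}
```

Note that protocol A only affects steady-state replication latency; the
initial full sync of ~10 TB is still bounded by the link bandwidth.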