On 20/09/2022 08:42, Roland Kammerer wrote:
Dear DRBD users,
The second big change is that the rpm part now generates a
drbd-selinux sub package containing an updated SELinux policy. Depending
on the host distribution, that package might even become a runtime
dependency for the drbd-utils
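On an RPM-based host, one quick way to check whether that sub package and its
policy module are in place (the sub package name is from the announcement, the
policy module name is only my assumption):

    rpm -qi drbd-selinux          # is the sub package installed?
    semodule -l | grep -i drbd    # is a drbd SELinux policy module loaded?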
On 29/09/2021 01:10, Chris Pacejo wrote:
Hi, I have a three-node active/passive DRBD cluster, operating with default
configuration. I had to replace disks on one of the nodes (call it node A) and
resync the cluster.
Somehow, after doing this, A was not in sync with the primary (node C); I onl
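For context, a minimal sketch of the usual recovery sequence after replacing the
backing disk on one node (r0 is a placeholder resource name, not taken from the
post):

    # on node A, once the new backing device is in place
    drbdadm create-md r0     # write fresh metadata on the new disk
    drbdadm attach r0        # re-attach the backing device to the resource
    drbdadm invalidate r0    # discard local data and force a full resync from the peer
    drbdadm status r0        # watch the resync progress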
On 10/08/2021 00:01, Digimer wrote:
On 2021-08-05 5:53 p.m., Janusz Jaskiewicz wrote:
Hello.
I'm experimenting a bit with DRBD in a cluster managed by Pacemaker.
It's a two-node, active-passive cluster, and the service that I'm
trying to put in the cluster writes to the file system.
The service
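A commonly used shape for that kind of setup, sketched here only as an assumption
about what the poster may have (resource, device, and mount-point names are
placeholders, and the exact pcs syntax varies by version), is a promotable DRBD
clone plus a Filesystem resource colocated with and ordered after the Primary:

    pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
        op monitor interval=30s \
        promotable promoted-max=1 promoted-node-max=1 notify=true
    pcs resource create fs_r0 ocf:heartbeat:Filesystem \
        device=/dev/drbd0 directory=/srv/app fstype=ext4
    pcs constraint colocation add fs_r0 with drbd_r0-clone INFINITY with-rsc-role=Master
    pcs constraint order promote drbd_r0-clone then start fs_r0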
On 03/06/2021 13:50, Eric Robinson wrote:
-----Original Message-----
From: Digimer
Sent: Wednesday, June 2, 2021 7:23 PM
To: Eric Robinson ; drbd-user@lists.linbit.com
Subject: Re: [DRBD-user] The Problem of File System Corruption w/DRBD
On 2021-06-02 5:17 p.m., Eric Robinson wrote:
Since DRBD
On 10/12/2020 12:07, Christoph Böhmwalder wrote:
Hi Pierre,
As much as we may want it, DRBD's coccinelle-based compat system is not
a general purpose solution. We can't guarantee that DRBD will build for
any given kernel – there is simply too much going on in the block layer
and other parts o
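One rough way to check whether a particular kernel is covered, assuming the usual
out-of-tree build of the drbd-9.0 sources (the KDIR path below is the standard
kbuild headers location and is only an assumption about the build environment):

    cd drbd-9.0
    make clean
    make KDIR=/lib/modules/$(uname -r)/build   # compat detection runs as part of this build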
On 13/08/2019 16:22, Jamie wrote:
On Mon, Aug 13, 2019 at 17:17:31, Jamie wrote:
Sorry, I tried posting this message this morning and something
apparently didn't work out correctly... :(
Okay, consider me stupid...? Somehow, I couldn't see my post from this
morning until just a moment ago. I
On 13/08/2019 13:16, Roland Kammerer wrote:
On Tue, Aug 13, 2019 at 12:40:56PM +0100, Eddie Chapman wrote:
I interpreted the package "drbd.x86_64 - version 9.5.0-2.fc30" as being the
kernel module version 9.5, since what else is left after you've packaged the
utilities and udev s
On 13/08/2019 07:12, Roland Kammerer wrote:
On Mon, Aug 12, 2019 at 12:00:40PM +0200, Jamie wrote:
Hi all,
I've encountered quite a problem after updating my Fedora 30 and I hope
someone might've come across this problem as well because I couldn't really
find a lot of information online.
I'm u
Hello Jamie,
On 12/08/2019 11:00, Jamie wrote:
Aug 12 11:28:34 serverjm drbdadm[8142]: Command 'drbdsetup-84
new-resource VMNAME' terminated with exit code 20
I'm not familiar in the slightest with how Fedora packages drbd in your
installation. However, the above line strongly suggests
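If anyone hits the same exit code, a quick way to see whether the userspace tools
and the loaded kernel module agree on a major version (a generic sketch, nothing
here is specific to Fedora's packaging):

    cat /proc/drbd       # version of the currently loaded kernel module, if any
    drbdadm --version    # version of the utils and the kernel module version they detected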
On 29/07/2019 10:34, Eddie Chapman wrote:
Hello,
I've managed to corrupt one of the drbd9 resources on one of my
production servers, now I'm trying to figure out exactly what happened
so I can try and recover the data. I wonder if anyone might understand
what went wrong here ( apar
Hello,
I've managed to corrupt one of the drbd9 resources on one of my
production servers, now I'm trying to figure out exactly what happened
so I can try and recover the data. I wonder if anyone might understand
what went wrong here ( apart from the fact that PEBCAK :-) )?
This is a simple
On 02/11/18 08:45, Jarno Elonen wrote:
More clues:
Just witnessed a resync (after invalidate) to steadily go from 100%
out-of-sync to 0% (after several automatic disconnects and reconnects).
Immediately after reaching 0%, it went to negative -%! After that,
drbdtop started showing 8.0ZiB out
Just wanted to say thanks guys for fixing the missing set-out-of-sync
bit issue and others in the commits that have just appeared in the drbd-9.0
git repo. I was relieved to see them.
I believe I have been bitten by these bugs more than once in recent
months, as whenever I have had a Primary resou
On 20/07/18 15:17, Philipp Reisner wrote:
> Hi,
>
> Another DRBD release is coming up. Please help in testing the release
> candidate.
>
> This release contains one important fix for users that use DRBD9
> in combination with LINSTOR or DRBDmanage and intend to do live
> migrations of VMs.
>
> What
:43PM +0100, Eddie Chapman wrote:
I keep an eye on commits to the drbd 9 repository
https://github.com/LINBIT/drbd-9.0/
and I see quite a few have gone in since 9.0.14-1 was released end of April
( thanks all at Linbit for the hard work :-) )
I'm planning on rebooting a couple of drbd servers us
I keep an eye on commits to the drbd 9 repository
https://github.com/LINBIT/drbd-9.0/
and I see quite a few have gone in since 9.0.14-1 was released end of
April ( thanks all at Linbit for the hard work :-) )
I'm planning on rebooting a couple of drbd servers using 9.0.14-1 this
weekend to up
On 04/05/18 09:10, Christiaan den Besten wrote:
Hi !
Question. Using DRBD 9.0.14 (latest from git) we can't get a resync after
verify working. Having a simple 2-node resource created/configured 8.x style.
A "drbdadm verify" now succesfully ends at 100% ( thank you some much Lars for
fixing th
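For anyone searching the archives later, the long-standing way to get the blocks
that verify marked out-of-sync actually resynchronised has been to bounce the
connection (a minimal sketch, with r0 as a placeholder resource name):

    drbdadm verify r0       # marks differing blocks as out-of-sync
    drbdadm disconnect r0
    drbdadm connect r0      # the reconnect triggers resync of the marked blocks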
Have been looking through the list archives and release notes but cannot
find any info on this, apologies if this has been stated before somewhere.
I was wondering if there are plans to eventually submit drbd9 codebase
to be integrated into the mainline kernel? AFAICT from looking through
drbd
Hello,
I was wondering if someone could eyeball the logs further below from a
resource that has completely failed over yesterday and today and tell me
if it looks like a "normal" failure from underlying storage, or if there
is anything strange?
I ask because there are 2 things that are odd:
On 25/09/17 22:02, Eric Robinson wrote:
Problem:
Under high write load, DRBD exhibits data corruption. In repeated tests
over a month-long period, file corruption occurred after 700-900 GB of
data had been written to the DRBD volume.
Testing Platform:
2 x Dell PowerEdge R610 servers
32GB
Hello,
Am I right in my understanding that disk-timeout is safe to use as long
as the device that fails *never* comes back? Because it is the delayed
"recovery", as the drbd.conf man page puts it, which is what is
dangerous and could lead to kernel panic, corruption of original request
pages
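For reference, disk-timeout lives in the disk section of a resource; a minimal
sketch follows (the value is arbitrary, the unit is documented in drbd.conf(5),
and 0, the default, disables the timeout):

    resource r0 {
        disk {
            disk-timeout 60;
        }
    }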
Hello,
This happens to me often lately on an otherwise well-functioning two-node
cluster.
If I have a Primary/Secondary resource, with the Primary then becoming
diskless through me having run drbdadm detach on it, and I then
create-md and attach a *new* block device to the Primary, the subsequen
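Written out as a sketch, with r0 as a placeholder resource name (not copied from
the original mail), the sequence being described is roughly:

    drbdadm detach r0       # the Primary keeps running diskless, I/O is served by the peer
    # replace or recreate the backing block device, then:
    drbdadm create-md r0
    drbdadm attach r0
    drbdadm status r0       # the new local disk should resync from the Secondary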