On 2017-10-10 13:11, Jan Bakuwel wrote:
Hi Veit,
On 10/10/17 08:47, Veit Wahlich wrote:
Hi Jan,
On Tuesday, 10.10.2017, 06:56 +1300, Jan Bakuwel wrote:
I've seen OOS blocks in the past where the storage stack appeared to be
fine (hardware-wise). What possible causes could there be?
Hi Jan,
On Tuesday, 10.10.2017, 06:56 +1300, Jan Bakuwel wrote:
> I've seen OOS blocks in the past where the storage stack appeared to be fine
> (hardware-wise). What possible causes could there be? Hardware issues, bugs
> in the storage stack including DRBD itself, network issues. In most (all?)
Hi Veit,
> On 9/10/2017, at 11:07 PM, Veit Wahlich wrote:
>
> Hi Jan,
>
> On Sunday, 08.10.2017, 13:07 +1300, Jan Bakuwel wrote:
>> I'd like to include an automatic disconnect/connect on the secondary if
>> out-of-sync blocks were found but so far I haven't found out
On 09/10/2017 16:48, Roberto Resoli wrote:
> On 09/10/2017 15:35, Robert Altnoeder wrote:
>> On 10/09/2017 11:04 AM, Roberto Resoli wrote:
>>> I currently remove snap_percent from used space in the "update_pool"
>>> function inside
On 09/10/2017 15:35, Robert Altnoeder wrote:
> On 10/09/2017 11:04 AM, Roberto Resoli wrote:
>> I currently remove snap_percent from used space in "update_pool"
>> function inside
>>
>> /usr/lib/python2.7/dist-packages/drbdmanage/storage/lvm_thinlv.py
> It is quite interesting that you even
Oh,
I skipped it; I had not seen it.
I will try it!
Thank you!
On 10/09/2017 05:19 PM, Roland Kammerer wrote:
Section 5.4.1
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
On Mon, Oct 09, 2017 at 04:24:56PM +0300, Tsirkas Georgios wrote:
> Hello,
>
> I noticed that sometimes, when I execute the join command on a drbd node, the
> control volume is not removed. So I checked the source code on git.
> In particular, at line 3358, drbdmanage does wipefs -a "device". Maybe it needs to
>
On 10/09/2017 11:04 AM, Roberto Resoli wrote:
> I currently remove snap_percent from used space in "update_pool"
> function inside
>
> /usr/lib/python2.7/dist-packages/drbdmanage/storage/lvm_thinlv.py
It is quite interesting that you even get any values in this column that
could then mess up your
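For readers following the free-space discussion: the adjustment Roberto describes (not counting snapshot usage when computing the thin pool's used space) can be sketched roughly as follows. This is an illustrative sketch, not drbdmanage's actual update_pool() code; the function name and parameters are assumptions, with data_percent/snap_percent modeled after the lvs(8) reporting columns of the same names.

```python
def thin_pool_free_kib(pool_size_kib, data_percent, snap_percent,
                       exclude_snapshots=True):
    """Estimate free space in an LVM thin pool.

    data_percent and snap_percent mirror the lvs(8) columns of the same
    names; subtracting snap_percent from the used share reflects the
    change Roberto describes. Illustrative sketch only, not the real
    drbdmanage/storage/lvm_thinlv.py logic.
    """
    used_percent = data_percent
    if exclude_snapshots:
        used_percent -= snap_percent  # do not count snapshot usage as "used"
    used_kib = pool_size_kib * used_percent / 100.0
    return pool_size_kib - used_kib
```

With a pool of 1000 KiB reporting data_percent=50 and snap_percent=20, this reports 700 KiB free instead of 500 KiB.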
Hello,
I noticed that sometimes, when I execute the join command on a drbd node,
the control volume is not removed. So I checked the source code on git.
In particular, at line 3358, drbdmanage does wipefs -a "device". Maybe it
needs to call the same function before creating drbdctrl_lv_0, because of
an existing,
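The fix Georgios suggests (wiping stale signatures from a leftover control volume before re-creating it) might look something like the sketch below. The LV name ".drbdctrl_0", the volume group "drbdpool", and the size are illustrative assumptions, not drbdmanage's actual join code path.

```python
import subprocess

def recreate_ctrlvol(device, dry_run=False):
    """Wipe stale signatures before re-creating the control volume,
    per the suggestion in the thread.

    All names (.drbdctrl_0, drbdpool, the 4m size) are illustrative
    assumptions; drbdmanage's real join path differs.
    """
    cmds = [
        ["wipefs", "-a", device],  # clear leftover fs/RAID signatures
        ["lvcreate", "-n", ".drbdctrl_0", "-L", "4m", "drbdpool"],
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

Running with dry_run=True only returns the command list, which is handy for inspecting what would be executed.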
Hi Jan,
On Sunday, 08.10.2017, 13:07 +1300, Jan Bakuwel wrote:
> I'd like to include an automatic disconnect/connect on the secondary if
> out-of-sync blocks were found but so far I haven't found out how I can
> query drbd to find out (apart from parsing the log somehow). I hope
>
Hi Robert,
> On 9/10/2017, at 9:17 PM, robert.koe...@knapp.com wrote:
>
> Just parse /proc/drbd for lines with oos: where the number behind oos is
> not 0
Thanks, that's easy :-)
Jan
> From: Jan Bakuwel
> To: drbd-user@lists.linbit.com
> Date: 08.10.2017 02:28
I am closely following recent developments of drbd9; I am using the
drbdmanage.storage.lvm_thinlv.LvmThinLv
plugin on a pve5 three-node cluster installation (drbdmanage is from the
recent pve5 repo: drbdmanage 0.99.12).
As I reported in detail some time ago on this list[1], I think that free
space
Just parse /proc/drbd for lines with oos: where the number behind oos is
not 0
...
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-
ns:103417564 nr:338520704 dw:428066036 dr:1845017517 al:129307 bm:2175
lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
1: cs:Connected
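Robert's advice above can be automated. Below is a minimal sketch (my own, not from the thread) that parses /proc/drbd text and reports devices whose oos counter is nonzero; the disconnect/connect step Jan asked about appears only as a comment, since whether to automate it is a site-specific decision.

```python
import re

MINOR_RE = re.compile(r"^\s*(\d+):\s+cs:")
OOS_RE = re.compile(r"\boos:(\d+)\b")

def find_out_of_sync(proc_drbd_text):
    """Return {minor: oos} for all devices with a nonzero oos counter.

    Expects the text of /proc/drbd. The oos: field usually appears on a
    continuation line below the "N: cs:..." header, so the current minor
    number is tracked across lines.
    """
    oos_by_minor = {}
    minor = None
    for line in proc_drbd_text.splitlines():
        header = MINOR_RE.match(line)
        if header:
            minor = int(header.group(1))
        oos = OOS_RE.search(line)
        if oos and minor is not None and int(oos.group(1)) != 0:
            oos_by_minor[minor] = int(oos.group(1))
            # a handler could then e.g. run:
            #   drbdadm disconnect <res> && drbdadm connect <res>
    return oos_by_minor
```

Feed it open('/proc/drbd').read(); an empty result means no out-of-sync blocks were reported.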