Hello,
Sorry, I won't answer any of your questions (as I'm not using DRBD
for Kubernetes), but to answer your first question : yes, DRBD/Linstor
is completely free and a really nice piece of software.
But as it's free, you'll have to learn most caveats (and there are some) by
yourself, debug by your
t 3MB/s.
What did I miss ?
Best regards,
Julien Escario
Hello,
We're using ZFSThin as backend for our Linstor cluster. Nothing fancy.
I'm trying to set up a backup with ZFS snapshots and zfs send/receive.
First : is this a bad idea ? I know DRBD integrates its own snapshot
system (LVM only ?) but it can't be exported to a non-DRBD system as-is,
AFAIK.
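Roughly what I have in mind, as a sketch (pool, dataset, and host names are made
up; I'm assuming the zvol name generated by Linstor is known):

zfs snapshot drbdpool/vm-100-disk-1_00000@backup-20181204
zfs send drbdpool/vm-100-disk-1_00000@backup-20181204 | \
    ssh backup-host zfs receive backuppool/vm-100-disk-1_00000
# next run: incremental send based on the previous snapshot
zfs snapshot drbdpool/vm-100-disk-1_00000@backup-20181205
zfs send -i @backup-20181204 drbdpool/vm-100-disk-1_00000@backup-20181205 | \
    ssh backup-host zfs receive backuppool/vm-100-disk-1_00000

If I understand correctly, the received zvol would still carry the DRBD internal
metadata at its end, so it's a copy of the DRBD device rather than a clean guest
disk.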
tch ?
I can't even find any changelog of Proxmox's Storage API changes from V1
to V2.
Best regards,
Julien Escario
P.S. : we're getting a ton of mail for each backup task.
Hello,
Yesterday and today, I experienced a strange crash when live migrating a
VM inside a Proxmox cluster from a diskless node to another node (with
disk attached).
I'm using ZFSThin as backend.
You'll find below the kernel error message I was able to catch
before everything went wrong.
I'
On 27/11/2018 at 18:08, Yannis Milios wrote:
> Upgraded to linstor-proxmox (3.0.2-3) and seems to be working well with
> libpve-storage-perl (5.0-32).
> There's a warning notification during live migrations about the upgraded
> storage API, but in the end the process completes successfully.
>
On 03/12/2018 at 09:47, Roland Kammerer wrote:
> On Fri, Nov 30, 2018 at 12:46:08PM +0300, Max O.Kipytkov wrote:
>>
>> The external command sent the following error information:
>> drbdadm: unrecognized option '--config-to-exclude'
>> try 'drbdadm help'
>
> Did you load a DRBD9 kernel
Hello,
I can't really find useful information about this in the docs : what's the
difference between a SATELLITE node and a COMBINED node ?
In a recent lab, I deployed 2 nodes that are both of SATELLITE type,
and one node is also running the controller.
Everything runs perfectly (I have to specify --cont
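For reference, this is roughly how I registered them (names and addresses are
examples; as far as I understand, Combined only declares that the node runs both
a controller and a satellite):

linstor node create nodeA 192.168.0.10 --node-type Satellite
linstor node create nodeB 192.168.0.11 --node-type Combined
linstor node list    # shows the node type next to each entry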
On 30/11/2018 at 10:43, Roland Kammerer wrote:
> On Fri, Nov 30, 2018 at 10:30:05AM +0100, Julien Escario wrote:
>> Ok, never mind :
>> # cat /proc/drbd
>> version: 8.4.10 (api:1/proto:86-101)
>> srcversion: 17A0C3A0AF9492ED4B9A418
>
> There will be a check
On 30/11/2018 at 10:35, Roland Kammerer wrote:
> On Fri, Nov 30, 2018 at 10:27:10AM +0100, Julien Escario wrote:
>>> Any chances you are using your distributions DRBD8.4 module instead of
>>> the DRBD9 one you should use?
>>
>> I know it's a frequent an
Ok, never mind :
# cat /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 17A0C3A0AF9492ED4B9A418
Sorry,
Julien
On 30/11/2018 at 10:27, Julien Escario wrote:
>> Any chances you are using your distributions DRBD8.4 module instead of
>> the DRBD9 one you should use?
>
ages) :
ii drbd-dkms 9.0.16-1
ii drbd-utils 9.6.0-1
ii drbdtop 0.2.1-1
Double-checked the versions and everything seems pretty up to date (installed yesterday).
Thanks for your help
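In case it helps someone hitting the same thing, this is roughly how I compare the
loaded module with what drbd-dkms built (a sketch; only reload with no resource up):

cat /proc/drbd                  # the *loaded* module: "version: 8.4.10" means in-kernel DRBD 8
modinfo drbd | grep -w version  # the module modprobe would load (should be the dkms build)
dpkg -l 'drbd*'                 # installed package versions
rmmod drbd && modprobe drbd     # reload, only safe while no resource is up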
--config-to-exclude'
try 'drbdadm help'
It seems drbdadm does not have a '--config-to-exclude' option. Did I miss
anything ?
I can confirm, the same error is thrown when using the command line :
# linstor resource create vm1
l should be in secondary state.
It only becomes primary when drbdmanage changes 'something' (moves a disk,
reconfigures, etc.).
> b) forget everything in a), remove drbdmanage, and install LINSTOR.
You'll save yourself some time.
Best regards,
Julien Escario
On 05/10/2018 at 11:06, Roland Kammerer wrote:
> Dear Proxmox users,
>
> There will be a new release soon. Actually this would have been it, but
> hey, why rush a new release on a Friday when you don't have to :-). So
> let's call this an rc1.
>
> Notable changes:
> - Multi-Pool support
One wo
On 02/10/2018 at 19:56, Radoslaw Garbacz wrote:
> Hi,
>
>
> I have a problem, which (from what I found) has been discussed, however not in
> the particular case, which I experienced, so I would be grateful for any
> suggestions of how to deal with it.
Your problem sounds pretty similar to a re
On 02/10/2018 at 11:31, Rene Peinthor wrote:
> Hi Everyone!
>
> This is mostly a bugfix release; one change that needs mentioning is that
> all delete commands will now wait until the resource is actually deleted on
> the satellites.
Great, thank you !
> linstor-server 0.6.5
>
On 24/09/2018 at 16:36, Brice CHAPPE wrote:
> Hi mailing !
>
>
>
> I have a three-node drbdmanage cluster.
>
> Two nodes work as storage backends (S1/S2).
>
> One node is a pure client satellite (for future nova usage)
>
> I work on 20GB/s LACP network between storage backends and satellite pu
On 24/09/2018 at 13:19, Robert Altnoeder wrote:
> On 09/24/2018 01:03 PM, Julien Escario wrote:
>> Hello,
>> When trying to resize disk (aka grow only) on Proxmox interface for a
>> linstor-backed device, this error is thrown :
>> VM 2000 qmp command 'block_res
00-disk-2_0
= 26GB.
But perhaps it is more related to the ZFS configuration (ashift for example).
Best regards,
Julien Escario
and/or confirm bug ? (force thin
mark somewhere ?)
Best regards,
Julien Escario
On 24/09/2018 at 10:12, Robert Altnoeder wrote:
> On 09/24/2018 09:43 AM, Julien Escario wrote:
>> Did I miss something, or is the ZFSthin storage plugin only thin
>> for the creation node ?
>
> Is the resource using thin provisioning on all of the nodes?
>
> Mixi
On 24/09/2018 at 07:42, Rene Peinthor wrote:
> It is not possible to delete a storage pool that is still in use by some
> resources/volumes.
Thanks !
I removed all resources, then deleted the storage pool and recreated it as zfsthin.
linstor storage-pool create zfsthin nodeA drbdpool drbdpool
Everyth
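The full sequence, roughly (resource and node names are just examples; the argument
order matches the client version installed here):

linstor resource delete nodeA vm-100-disk-1
linstor resource-definition delete vm-100-disk-1
linstor storage-pool delete nodeA drbdpool
linstor storage-pool create zfsthin nodeA drbdpool drbdpool
linstor storage-pool list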
On 11/09/2018 at 15:55, Rene Peinthor wrote:
> On Tue, Sep 11, 2018 at 3:04 PM Julien Escario wrote:
>
> On 10/09/2018 at 15:52, Rene Peinthor wrote:
> And one question : is there a way to 'convert' an existing st
an existing storage-pool from zfs
to zfsthin ?
Best regards,
Julien Escario
On 03/09/2018 at 11:57, abdollah karimnia wrote:
> Dear all,
>
> Is there any way to replicate different VGs using drbdmanage? Currently we
> can add only one VG name (drbdpool) into /etc/drbdmanaged.cfg file. Seems
> that it is not possible to h
On 29/08/2018 at 12:00, Lars Ellenberg wrote:
> Something that was fixed in April I think. You want to upgrade to 9.0.15
> (or whatever is "latest" at the time someone else finds this in the
> archives...)
>
> Unfortunately you will have to rebo
Many many thanks for the detailed procedure.
I'll try in a few days with drbd9 and will let you know if something has to be
changed (mainly because resources are created on the fly on each side).
Julien
On 29/08/2018 at 14:32, David Bruzos wrote
Hello,
Just wanted to know : is there a way to get rid of the initial sync with linstor and
a zfs backend ?
Right now, I have a 1TB volume to create and the initial sync is very long.
I think it's mostly due to the unavailability of thinly provisioned ZFS resources,
but perhaps there is a way to suspend resy
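The only DRBD-level trick I know of is to clear the sync bitmap on a brand-new,
empty resource once all peers are connected and Inconsistent (a sketch with an
example resource name; I don't know whether linstor exposes this directly):

drbdadm new-current-uuid --clear-bitmap vm-100-disk-1/0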
On 24/08/2018 at 10:54, Roland Kammerer wrote:
> Dear Proxmox VE users,
>
> we released version 2.9.0 of the linstor-proxmox plugin.
Hello,
Just to let you know : today, I tried to deploy linstor-proxmox on an
ipv6-only server. Sadly, packages.li
Ok, just forget it : it was just a matter of NOT running mtu 9000 against switch
interfaces still set to mtu 1500.
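For the record, this kind of mismatch is easy to catch with a do-not-fragment ping
(peer address and interface name are examples; 8972 = 9000 minus 28 bytes of
IP/ICMP headers):

ping -M do -s 8972 -c 3 192.168.10.2   # must pass end-to-end if jumbo frames really work
ping -M do -s 1472 -c 3 192.168.10.2   # baseline for a standard 1500-byte MTU
ip link show eth1 | grep -o 'mtu [0-9]*'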
*ashamed*
Julien
On 27/08/2018 at 20:04, Julien Escario wrote:
> Hello,
> I'm continuing side lab with linstor.
>
> I recently moved connection between two nodes to a VLA
Hello,
I'm continuing my side lab with linstor.
I recently moved the connection between two nodes to a VLAN interface. The same
addresses were moved to the new VLAN for each node.
Both nodes were restarted (I don't remember the order).
One node is Controller + satellite (dedie83) and the other is satellite only
(dedie
On 27/08/2018 at 18:15, Julien Escario wrote:
> On 27/08/2018 at 17:44, Lars Ellenberg wrote:
>> On Mon, Aug 27, 2018 at 05:01:52PM +0200, Julien Escario wrote:
>>> Hello,
>>> We're stuck in a strange situation. One of our resources is marked as :
>>>
On 27/08/2018 at 17:44, Lars Ellenberg wrote:
> On Mon, Aug 27, 2018 at 05:01:52PM +0200, Julien Escario wrote:
>> Hello,
>> We're stuck in a strange situation. One of our resources is marked as :
>> volume 0 (/dev/drbd155): UpToDate(normal disk state) Blocked: upper
Hello,
We're stuck in a strange situation. One of our resources is marked as :
volume 0 (/dev/drbd155): UpToDate(normal disk state) Blocked: upper
I used drbdtop to get this info because drbdadm hangs.
I can also see a drbdsetup process blocked :
drbdsetup disk-options 155 --set-defaults --read-
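What I try before resigning myself to a reboot, just to see where things are stuck
(a sketch; the sysrq trigger has to be enabled for the second command):

dmesg | grep -A 20 'blocked for more than'   # hung-task reports with kernel stack traces
echo w > /proc/sysrq-trigger                 # dump all blocked tasks to the kernel log
cat /proc/$(pgrep -o drbdsetup)/stack        # kernel stack of the oldest drbdsetup process
drbdsetup status --verbose --statistics      # does status still answer while drbdadm hangs?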
On 21/08/2018 at 18:39, Robert Altnoeder wrote:
> On 08/21/2018 06:23 PM, Julien Escario wrote:
>> Hello,
>> Just hit a bug after multiple creation/deletion of resources on my two nodes
>> cluster.
>>
>> Syslog reports :
>>
>> Aug 21 17
nvme speed (so
delete/create a few times to compare with connection and without).
All other resources are fine.
drbdadm adjust puts the resource in Connecting state.
Best regards,
Julien Escario
On 20/08/2018 at 18:21, Robert Altnoeder wrote:
> On 08/20/2018 05:03 PM, Julien Escario wrote:
>> My question was essentially about thinprov with ZFS. In recent versions
>> (2018), drbdmanage is able to skip full resync at volum
On 20/08/2018 at 16:43, Roland Kammerer wrote:
>> I'm still missing a few things like ZFS thin provisioning (when I create
>> a new disk, a full resync is initiated). Did I miss something ? Is it
>> planned ?
>
> You used a LVM pool, so yes, you
with linstor ? (I
didn't manage to find it).
Best regards,
Julien Escario
On 26/07/2018 at 15:28, Roland Kammerer wrote:
> Dear Proxmox VE users,
>
> we released the first version of the linstor-proxmox plugin. This
> integrates LINSTOR (the successor of DRBDManage) into Proxmox.
>
> It contains all the features the d
he support for all platforms be ready by the end of the year ?
Best regards and thanks for clarification,
Julien Escario
Hello Roland,
First, thanks for taking the time to look into my question.
On 28/06/2018 at 08:41, Roland Kammerer wrote:
> On Wed, Jun 27, 2018 at 12:37:20PM +0200, Julien Escario wrote:
>> Hello,
>> We're experiencing a really strange situation.
>> We often play with :
>>
Wow, my mails finally made it to the list ... forget it, it's redundant with
today's thread.
Julien
On 22/06/2018 at 14:39, Julien Escario wrote:
> Hello, DRBD9 is really a great piece of software but from time to time, we
> end stuck in a situation without other solu
Hello,
We're experiencing a really strange situation.
We often play with :
drbdmanage peer-device-options --resource --c-max-rate
especially when a node crashes and needs a (full) resync.
When doing this, sometimes (after 10 or 20 such commands), we end up with
drbdmanage completely stuck and a dr
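For context, a typical invocation looks more or less like this (resource name and
rate are made-up examples, not the values we actually used):

drbdmanage peer-device-options --resource vm-100-disk-1 --c-max-rate 100M
drbdsetup show vm-100-disk-1 | grep c-max-rate   # check what actually got applied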
d164): UpToDate(normal disk state) Blocked: upper
and :
Connection to node2(Unknown): NetworkFailure(lost connection to node2)
How can I debug such a situation without rebooting node1 ?
This is not the first time we've encountered such a situation, and rebooting each
time is really a pain,
Yup, this is ABSOLUTELY boring ;-)
Thanks for your work !
Best regards,
Julien Escario
On 21/03/2018 at 11:10, Roland Kammerer wrote:
> Hi,
>
> This drbd-utils release should be rather boring for most users. It
> accumulates the fixes in the 9.2.x branches and adds some additi
On 21/02/2018 at 04:07, Igor Cicimov wrote:
>
>
> On Tue, Feb 20, 2018 at 9:55 PM, Julien Escario wrote:
>
> On 10/02/2018 at 04:39, Igor Cicimov wrote:
> > Did you tell it
> > to?
> https://docs.linbit.com/
Hello,
I'm trying to benchmark my lab setup *correctly*.
Pretty simple : a 2-node Proxmox setup, protocol C, ZFS RAID1 HDD backend with
mirrored log and cache on SSDs.
DRBD9, 10Gbps Ethernet network, latency tuned after reading a lot of papers on
this.
What I'm trying : run fio with the parameters below
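(The exact parameters got cut off here; the run was along these lines, with
illustrative values and an example DRBD device path:)

fio --name=drbd-randwrite --filename=/dev/drbd100 \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --direct=1 --sync=1 --runtime=60 --time_based --group_reporting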
As far as I understand, the 'detach' behavior should be the default, no ?
My thought is that DRBD wasn't notified of, or didn't detect, the blocked IOs on the
backend. Perhaps a specific behavior of ZFS.
More tests to come.
Best regards,
Julien Escario
in this case ? I probably missed a
detection mechanism.
Best regards,
Julien Escario
ebalancing) but without
drbdmanage.
Best regards,
Julien Escario
On 12/12/2017 at 11:54, Robert Altnoeder wrote:
> On 12/12/2017 11:10 AM, Julien Escario wrote:
>
>> Hello,
>> May we have a pointer to linstor information ? I can't find any info on this
>> software by googling 5 min.
>>
>> Best regards,
>> Julie
> support that.
Hello,
May we have a pointer to linstor information ? I can't find any info on this
software by googling 5 min.
Best regards,
Julien Escario
> when I'm calculating the VM volumes I have only 2.4t
> in
> use.
Hello,
I can see you're using thinlv. AFAIK the usage report is based on the percentage
returned by the lvdisplay command on each host.
Did you try to run /usr/bin/drbdmanage update-pool ?
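Something like this on each host shows the numbers drbdmanage works from (the VG
name is an example):

lvs -o lv_name,lv_size,data_percent,metadata_percent drbdpool
/usr/bin/drbdmanage update-pool   # re-read free space from the storage backend
drbdmanage list-nodes             # the pool free value should be refreshed now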
Best regards,
Julien Escario
derstand, drbdmanaged opened the device exclusively and
doesn't give it back.
Am I right ?
Is there a way to unblock this without rebooting the whole node ? I tried
drbdmanage shutdown -q.
Kill the process directly ? Is it safe ?
Best regards,
Julien Escario
On 03/10/2017 at 14:50, Robert Altnoeder wrote:
> On 10/02/2017 06:20 PM, Julien Escario wrote:
>> Hello,
>> In the doc, I can read : "In this case drbdmanage chooses 3 nodes that fit
>> all
>> requirements best, which is by default the set of nodes with t
ng
to have *at least* a copy on each site ?
Best regards,
Julien Escario
On 13/09/2017 at 10:14, Yannis Milios wrote:
> Usually when I need that, I live migrate the vm from the client node to the
> other node and then I use drbdmanage unassign/assign to convert client to a
> 'normal' satellite node with local storage. Then wait for sync to complete and
> move the vm b
On 12/09/2017 at 11:39, Roland Kammerer wrote:
> On Tue, Sep 12, 2017 at 09:49:26AM +0200, Julien Escario wrote:
>> Hello,
>> I'm trying to 'promote' a client node to have a local copy of the data but can't
>> find any reference to such a command in
On 12/09/2017 at 00:11, Lars Ellenberg wrote:
> On Mon, Sep 11, 2017 at 11:21:35AM +0200, Julien Escario wrote:
>> Hello,
>> This moring, when creating a ressource from Proxmox, I got a nice
>> "Authentication of peer failed".
>>
>> [33685507.246574]
Date
This one is in secondary state (VM not running) so I could unassign/assign this
resource to this node without a problem.
But if the resource is already in primary state, is there any way to ask for a local
copy of the data with drbdmanage ?
Best regards,
Julien Escario
.97-1
drbd-utils 8.9.7-1
but upgrading is ... complex ;-)
Any way to correct this without shutting down all resources ? (and rebooting).
I was thinking of some nice hidden command to force a kind of re-auth between
2 hosts.
Best regards,
Julien Escario
On 07/09/2017 at 18:43, Roland Kammerer wrote:
> On Thu, Sep 07, 2017 at 01:48:02PM +0300, Tsirkas Georgios wrote:
>> Hello,
>> What are the changes on drbdmanage command;
>
> w00t?
I was thinking almost the same thing. Even started writing a flame answer ;-)
Can you unsubscribe such lame user
Hello,
Just to let you know that the link
https://docs.linbit.com/doc/users-guide-90/s-proxmox-configuration is dead in
the documentation.
This link is present on
https://docs.linbit.com/doc/users-guide-90/s-proxmox-install/ and on
https://docs.linbit.com/doc/users-guide-90/ch-proxmox/
Julien
On 17/08/2017 at 16:48, Gionatan Danti wrote:
> Hi list,
> I am discussing how to have a replicated ZFS setup on the ZoL mailing list,
> and
> DRBD is obviously on the radar ;)
>
> It seems that three possibilities exist:
>
> a) DRBD over ZVOLs (with one DRBD resource per ZVOL);
> b) ZFS over
On 12/06/2017 at 10:09, Robert Altnoeder wrote:
> On 06/12/2017 09:39 AM, Julien Escario wrote:
>
>> Finally, I've been able to fully restore vm4 and vm5 (drbdsetup and
>> drbdmanage
>> working) but not vm7.
>>
>> I've done that by firewalli
t nodes ?
Thanks a lot !
Julien Escario
On 09/06/2017 at 14:24, Julien Escario wrote:
> On 09/06/2017 at 09:59, Robert Altnoeder wrote:
>> On 06/08/2017 04:14 PM, Julien Escario wrote:
>>> Hello,
>>> A drbdmanage cluster is actually stuck in this state :
>>> .drbdctrl role:Secondary
>>
On 09/06/2017 at 09:59, Robert Altnoeder wrote:
> On 06/08/2017 04:14 PM, Julien Escario wrote:
>> Hello,
>> A drbdmanage cluster is actually stuck in this state :
>> .drbdctrl role:Secondary
>> volume:0 disk:UpToDate
>> volume:1 disk:UpToDate
>>
-hash: d0032c6a22c29812263ab34a6e856a5b36fd7da0
Of course, same versions on three nodes.
Thanks for your help,
Julien Escario
On 18/08/2016 at 13:33, Julien Escario wrote:
> Hello,
> After rebooting a node, I can see something strange :
>
> # drbdmanage list-assignments
>> | vm4 | vm-206-disk-1 | * | |
>> ok |
Hello,
After rebooting a node, I can see something strange :
# drbdmanage list-assignments
> | vm4 | vm-206-disk-1 | * | |
> ok |
> | vm5 | vm-206-disk-1 | * | |
>
On 11/08/2016 09:10, Ml Ml wrote:
> Hello List,
>
> i wonder if DRBD9 is ready for production?
>
> I posted my Problem here:
> http://lists.linbit.com/pipermail/drbd-user/2016-April/022893.html
>
> And i ran into this problem a few times now. So i switched to a 2 Node
> Setup (which works f
On 17/08/2016 12:19, Roland Kammerer wrote:
> On Wed, Aug 17, 2016 at 11:34:22AM +0200, Julien Escario wrote:
>> So my question now : is there a way to restart drbdmanage 'server' without
>> having to restart the whole server ? As it's dbus, I don't want to
anage server-version
server_version=0.97
So my question now : is there a way to restart the drbdmanage 'server' without
having to restart the whole server ? As it's dbus, I don't want to create a
mess. I would like to test the procedure and the second node.
Best regards,
Julien Esca
Hello,
Using /proc/drbd doesn't sound like a good idea : with drbd9, there's
nothing but the version in there.
Perhaps you should use drbdadm and the other command line tools ?
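For example, these give machine-readable state with drbd9 (just a sketch of the
tools I had in mind):

drbdadm status all        # human-readable resource/connection/disk states
drbdsetup status --json   # the same information as JSON, easy to parse
drbdsetup events2 --now   # one-shot, line-oriented dump of the current state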
Regards,
Julien
On 18/02/2016 19:59, Digimer wrote:
Hi all,
I'm working on a program that (among other things) man
On 04/02/2016 10:07, Robert Altnoeder wrote:
> On 01/29/2016 11:08 AM, Julien Escario wrote:
>> On 25/01/2016 15:19, Julien Escario wrote:
>>> So I'm wondering how and when 'pool free' value is calculated. Is it
>>> recalculated only when a ne
43.24
> 87.26
>
I'm really surprised by your high percentage of metadata (87.26%).
Did you take some snapshots ? They should be displayed by the lvs command, but
perhaps some are still pending deletion or something like that ?
Cou
On 25/01/2016 15:19, Julien Escario wrote:
> So I'm wondering how and when the 'pool free' value is calculated. Is it
> recalculated only when a new resource is created ? deleted ?
>
> Is there a way to force it to rescan free space ? (with a dbus command
> perhaps
On 27/01/2016 12:10, Matthew Vernon wrote:
> resource mws-priv-7 {
> device /dev/drbd87;
> disk /dev/guests/mwsig-mws-priv-7;
> meta-disk internal;
> on agogue {
> address ipv6 [fd19:1b70:f7a6:1ae5::8d:6]:7875;
> }
> on odochium {
> address ipv6 [fd19:1b70:f7a6:1ae5::8d:7]:
Hello,
So, to continue with my experiments with drbdmanage and the thinlv plugin :
drbdmanage 0.91 (same thing seems to happen with 0.50).
storage-plugin = drbdmanage.storage.lvm_thinlv.LvmThinLv
I create a VM with proxmox, 10 GB disk.
Right after, I got this free pool space with drbdmanage list-node
On 23/01/2016 09:25, Roland Kammerer wrote:
> On Fri, Jan 22, 2016 at 07:48:25PM +0100, Julien Escario wrote:
>> Seems I found an answer : I was using drbdmanage 0.91 with a thin lv but this
>> version is using drbdmanage.storage.lvm.Lvm as default plugin.
>>
>>
orage.lvm.Lvm",
> +KEY_STOR_NAME : "drbdmanage.storage.lvm_thinlv.LvmThinLv",
> KEY_DEPLOYER_NAME : "drbdmanage.deployers.BalancedDeployer",
> KEY_MAX_NODE_ID: str(DEFAULT_MAX_NODE_ID),
> KEY_MAX_PEERS : str(DEFAULT_MAX_PEERS),
Seems I found an answer : I was using drbdmanage 0.91 with a thin LV, but this
version uses drbdmanage.storage.lvm.Lvm as the default plugin.
I'm now wondering how I can change the default plugin BEFORE initializing the
nodes.
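Presumably something like this would do it (a sketch only; I'm assuming the key
name from the snippet above and that /etc/drbdmanaged.cfg is read on every node
before 'drbdmanage init' creates the control volume):

# /etc/drbdmanaged.cfg
storage-plugin = drbdmanage.storage.lvm_thinlv.LvmThinLv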
Regards,
Julien Escario
On 22/01/2016 11:53, Julien Escario wrote
On 21/01/2016 11:59, Rudolf Kasper wrote:
> Hi,
>
> i got a question. We've got a setup with three nics. One of them is cross-over
> and only for drbd use. So i expect that we can replicate 120M/s over this nic
> constantly. But when i transfer some files to the drbd device i see sometimes
> tr
Hello,
Today, I've been stuck creating a new volume (with proxmox) on a 2-node cluster
because it reports "Not enough free space".
I checked a few settings on the machines :
> # drbdmanage list-nodes
> +--
On 19/01/2016 10:57, Roland Kammerer wrote:
> On Tue, Jan 19, 2016 at 10:43:11AM +0100, Julien Escario wrote:
>> If not possible now, is it a planned feature ?
>
> Not possible now, but on the roadmap (for > 1.0.0). 1.0.0 should be out
> soon.
>
> Regards, rck
Hello,
We're extensively trying drbdmanage. We're still using v0.5 but considering
trying 0.9 soon.
Just a quick question about multi-tiering : is it possible to manage 2 different
pools (one with HDDs and one with SSDs) with drbdmanage ? It's mainly about having 2
different LVM VGs as backend devices and be
Hello,
I'm currently trying some setups with DRBD9.
Today, I simulated a network failure between two nodes in protocol A with two
resources created using DRBDmanage 0.5.
For this, I disabled and re-enabled a switch port on NodeB.
Both resources were in sync and only NodeA was primary. NodeB was s
tests by moving the VM back to node1 and with more
logging activated.
Thanks for reading and for your help,
Julien Escario
And finally the call trace :
Feb 13 06:38:27 dedie58 kernel: INFO: task kvm:820630 blocked for more than 120
seconds.
Feb 13 06:38:27 dedie58 kernel: "echo 0 >
/proc/sys/k
I asked for the same thing (without SSD) a few weeks ago.
Someone answered me that this performance is perfectly normal in a dual-master
configuration.
Seems to be due to the network latency (first server I/O + network latency +
second server I/O + network latency (ACK))
I finally decided th
8.0.16 (api:86/proto:86)
Do you have any clue about what is giving me this factor of 3.5 ?
What other information can I give you ?
Thanks for your help,
Julien Escario