Hi again,

In fact there are other problems: when I migrate a VM to another node, it fails 
with this error message:

sept. 05 16:26:32 starting migration of VM 100 to node 'dmz-pve2' (192.168.0.20)
sept. 05 16:26:32 copying disk images
sept. 05 16:26:32 starting VM 100 on remote node 'dmz-pve2'
sept. 05 16:26:34 start failed: command '/usr/bin/kvm -id 100 -chardev 
'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 
'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/100.pid -daemonize 
-smbios 'type=1,uuid=644673aa-fb7e-4b11-9fa3-2f3ec9098235' -name vm100 -smp 
'4,sockets=2,cores=2,maxcpus=4' -nodefaults -boot 
'menu=on,strict=on,reboot-timeout=1000' -vga cirrus -vnc 
unix:/var/run/qemu-server/100.vnc,x509,password -cpu 
kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 4096 -k fr -device 
'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 
'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 
'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device
'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 
'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 
'initiator-name=iqn.1993-08.org.debian:01:96495c6512a9' -drive 
'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 
'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 
'file=/dev/drbd/by-res/vm-100-disk-2/0,if=none,id=drive-virtio0,cache=writethrough,format=raw,aio=threads,detect-zeroes=on'
 -device 
'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100'
 -netdev 
'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on'
 -device
'virtio-net-pci,mac=B6:35:60:34:BD:DD,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'
 -machine 'type=pc-i440fx-2.6' -incoming unix:/run/qemu-server/100.migrate -S' 
failed: exit code 1
sept. 05 16:26:34 ERROR: online migrate failure - command '/usr/bin/ssh -o 
'BatchMode=yes' root@192.168.0.20 qm start 100 --skiplock --migratedfrom 
dmz-pve3 --stateuri unix --machine pc-i440fx-2.6' failed: exit code 255
sept. 05 16:26:34 aborting phase 2 - cleanup resources
sept. 05 16:26:34 migrate_cancel
sept. 05 16:26:34 ERROR: migration finished with problems (duration 00:00:02)
TASK ERROR: migration problems

Proxmox looks for /dev/drbd/by-res/vm-100-disk-2/0, which is not present 
(/dev/drbdpool/vm-100-disk-2_00 does exist).
If I do:

mkdir /dev/drbd/by-res/vm-100-disk-2
cd /dev/drbd/by-res/vm-100-disk-2
ln -s ../../../drbd101 0

my virtual machine can start.
How can I have /dev/drbd/by-res populated automatically with the resources of all VMs?
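For reference, the manual workaround above can be generalized into a small helper. This is only a stopgap sketch: the by-res links are normally maintained by the udev rules shipped with drbd-utils, and `BYRES_DIR` and `make_by_res_link` are names I introduce here for illustration (the /dev/drbd101 minor is the one from the workaround above; adjust per node).

```shell
#!/bin/sh
# Stopgap sketch: (re)create a /dev/drbd/by-res/<resource>/<volume>
# symlink by hand, generalizing the mkdir/ln workaround above.
# These links are normally maintained by drbd-utils' udev rules,
# so treat this as a temporary fix, not the proper solution.
BYRES_DIR="${BYRES_DIR:-/dev/drbd/by-res}"

make_by_res_link() {
    # $1 = resource name, $2 = volume number, $3 = backing /dev/drbdN device
    mkdir -p "$BYRES_DIR/$1"
    ln -sfn "$3" "$BYRES_DIR/$1/$2"
}

# Example matching the workaround above (drbd101 is the minor this
# resource mapped to on my node; check yours with 'drbdsetup status'):
# make_by_res_link vm-100-disk-2 0 /dev/drbd101
```

On a live node one could feed this from `drbdadm sh-resources` (which lists the configured resources) in a loop; the exact resource-to-minor mapping helpers vary between drbd-utils versions, so this is a sketch rather than a drop-in fix.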

PS: I run apt-get purge `deborphan --guess-all` (repeating until nothing is left 
to purge) on all Proxmox nodes, as I do on all my Debian servers.
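On that note: since the by-res links are normally created by udev rules shipped with drbd-utils, an aggressive deborphan purge could in principle have removed them. A quick sanity check, assuming the stock Debian udev rule locations (this is a hedge, not a diagnosis):

```shell
# Hedged check, assuming the usual Debian udev paths: drbd-utils ships
# udev rules that create the /dev/drbd/by-res links. If a purge removed
# them, the links will never (re)appear after a reboot.
check_drbd_udev_rules() {
    if ls /lib/udev/rules.d/*drbd* /etc/udev/rules.d/*drbd* >/dev/null 2>&1; then
        echo "drbd udev rules present"
    else
        echo "no drbd udev rules found - consider reinstalling drbd-utils"
    fi
}
check_drbd_udev_rules
```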

Thanks in advance.


On 05/09/2016 at 12:53, Jean-Daniel TISSOT wrote:
>
> Hi Robert,
>
> Thanks a lot. This export solved my problem. A drbdadm adjust all followed by a 
> drbdadm status shows good results.
>
>
> On 05/09/2016 at 11:54, Robert Altnoeder wrote:
>> On 09/02/2016 06:05 PM, Jean-Daniel TISSOT wrote:
>>> Hi list,
>>>
>>> I noticed I have a file /var/lib/drbd.d/drbdmanage_vm-100-disk-2.res [...]
>>> If I reboot a node, this file disappears on that node [...]
>>> [...] if I do drbdadm connect vm-100-disk-2 I receive:
>>> 'vm-100-disk-2' not defined in your config (for this host).
>>>
>> The DRBD resources on Proxmox with DRBD9 are managed by drbdmanage instead 
>> of just by drbdadm.
>> You can run:
>> drbdmanage export vm-100-disk-2
>> which will recreate the configuration file in /var/lib/drbd.d for use with 
>> drbdadm
>>
>> This command will also implicitly start the drbdmanage server if it is not 
>> yet started (otherwise, drbdmanage startup can be used to start the 
>> drbdmanage server, which also starts all resources managed by drbdmanage).
>>
>> Apart from that, since the resource appears to be running, but is marked as 
>> Outdated and StandAlone, there might also be something wrong with the 
>> resource itself. If the resource does not reconnect/resync after a 'drbdadm 
>> adjust' command, the system log should be checked for messages regarding the 
>> state of that resource (such as a split-brain alert).
>>
>> -- 
>> Robert Altnoeder
>> DRBD - Corosync - Pacemaker
>> +43 (1) 817 82 92 - 0 <tel:43181782920>
>> robert.altnoe...@linbit.com <mailto:robert.altnoe...@linbit.com>
>>
>> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
>>
>>
>>
>> _______________________________________________
>> drbd-user mailing list
>> drbd-user@lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user
>
> -- 
> Best regards, Jean-Daniel TISSOT 
> <http://chrono-environnement.univ-fcomte.fr/spip.php?article457>
> Systems and Network Administrator
> Tel: +33 3 81 666 440 Fax: +33 3 81 666 568
>
> Laboratoire Chrono-environnement <http://chrono-environnement.univ-fcomte.fr/>
> 16, Route de Gray
> 25030 BESANÇON Cédex
>
> Map and directions 
> <https://mapsengine.google.com/map/viewer?mid=zjsxW4ZzZPLY.kp2qPHUBD45c>

-- 
Best regards, Jean-Daniel TISSOT 
<http://chrono-environnement.univ-fcomte.fr/spip.php?article457>
Systems and Network Administrator
Tel: +33 3 81 666 440 Fax: +33 3 81 666 568

Laboratoire Chrono-environnement <http://chrono-environnement.univ-fcomte.fr/>
16, Route de Gray
25030 BESANÇON Cédex

Map and directions 
<https://mapsengine.google.com/map/viewer?mid=zjsxW4ZzZPLY.kp2qPHUBD45c>
