Very cool ;)
I can wait a couple of days ;)
Is live migration implemented as well?
> On 16.06.2016 at 15:06, Wolfgang Link wrote:
>
> This will come in a few days.
> The code is available in git.
>
>
> On 06/16/2016 03:04 PM, Daniel Eschner wrote:
>> Hi to all,
>
Hi all,
it seems that it is not possible to migrate an offline LXC container to another
cluster member when it is located on lvm-thin:
Jun 16 15:02:32 starting migration of CT 105 to node 'host07' (10.0.2.116)
Jun 16 15:02:32 copy mountpoint 'rootfs' (local-lvm:vm-105-disk-1) to node 'host07'
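For reference, such an offline migration is started with pct on the source node; the CT ID and target node below are taken from the log above, the prompt is illustrative:
root@sourcenode:~# pct migrate 105 host07
Per Wolfgang's reply above, copying the container's lvm-thin volume to the target node in this step was still being worked on at the time.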
Hi all,
I am playing with Ceph and there are some things I don't understand.
The Proxmox docs explain how to set it up easily - everything is working.
But I want to understand how the replication and redundancy work.
I set up 3 SSD OSDs and created a pool with size 3 and min_size 2.
Proxmox tells me the available
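Regarding the replication question: size and min_size of a pool can be inspected and changed from any node with the ceph tooling; the pool name 'rbd' and the prompt below are only examples:
root@host01:~# ceph osd pool get rbd size       # replicas kept per object
root@host01:~# ceph osd pool get rbd min_size   # replicas needed before I/O is blocked
root@host01:~# ceph osd pool set rbd size 3
root@host01:~# ceph osd pool set rbd min_size 2
root@host01:~# ceph df                          # raw vs. per-pool usable space
With size 3 every object is written to 3 OSDs, so the usable space of the pool is roughly one third of the raw capacity reported by ceph df; with min_size 2 the pool keeps serving I/O as long as at least 2 of the 3 copies are reachable.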
seems to be :-(
but should not be a big deal to copy all files ;)
> On 27.05.2016 at 13:34, David Lawley wrote:
>
> Seen that too, it would appear that it is not implemented yet in LXC?
>
>
>
> On 5/26/2016 5:39 PM, Daniel Eschner wrote:
>> Mhh,
>>
ab of the VM.
> Press move disk and choose Ceph storage.
> Do not click delete...
>
> After the migration you get an unused disk, which you can remove once the VM is running OK
>
> Sent from my iPhone
>
>> On 26 May 2016 at 23:30, Daniel Eschner wrote the following:
>>
>> Do you know if there is an easy way to migrate a VM to another
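The GUI steps quoted above (move the disk to the Ceph storage without ticking delete) correspond roughly to the following CLI call; VM ID, disk name and storage name are only examples:
root@host01:~# qm move_disk 100 virtio0 ceph-storage
root@host01:~# qm config 100   # the old volume is kept and now listed as unused0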
Hi,
thx. That was really easy ;) Many thanks.
> On 26.05.2016 at 23:25, Bart Lageweg | Bizway wrote:
>
> Hi Daniel,
>
> You can move the config files through /etc/pve/node etc in the right dir
>
> Sent from my iPhone
>
>> On 26 May 2016 at 23:20, Daniel Eschner wrote the following:
Configuration files and LVM volumes still exist on the host system.
ACTIVE      '/dev/pve/vm-100-disk-1' [20.00 GiB] inherit
ACTIVE      '/dev/pve/vm-101-disk-1' [50.00 GiB] inherit
Proxmox just needs to know that the VM is located on that server ;)
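Assuming the volumes above really belong to VMs 100 and 101, telling Proxmox that they live on this server is just a matter of placing the config files in the right spot of the cluster filesystem; node names below are illustrative:
root@host01:~# lvscan | grep vm-                      # confirm the LVs are still there
root@host01:~# mv /etc/pve/nodes/otherhost/qemu-server/100.conf \
                  /etc/pve/nodes/host01/qemu-server/
root@host01:~# qm list                                # VM 100 should now appear on this node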
> On 26.05.2016
Hi all,
I played a bit with the HA feature. It seems I have now damaged 2 VMs which are
located on the local LVM-thin storage.
After I placed the nodes in HA and rebooted the host systems, all VMs were
migrated to another host, which is OK, but now I am not able to start the VMs
or to migrate them.
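A plausible explanation is that HA relocated only the VM configs while the disks stayed behind on the original node's local lvm-thin storage. A rough way to check and undo this; VM ID, service ID and node names are illustrative:
root@host01:~# ha-manager status                 # where HA has placed the services
root@host01:~# ha-manager remove vm:101          # take the VM out of HA first
root@host01:~# mv /etc/pve/nodes/host02/qemu-server/101.conf \
                  /etc/pve/nodes/host01/qemu-server/   # move the config back to the node holding the disk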
The problem is fixed. It was a configuration problem with my switches.
It seems that they have multicast groups and several ports.
> On 22.05.2016 at 18:01, Daniel Eschner wrote:
>
> Hi all,
>
> Anyone know what happened with that issue:
>
> root@host01:~# omping -c 60 -i 1 -q
Hi all,
Anyone know what happened with that issue:
root@host01:~# omping -c 60 -i 1 -q host01 host02 host03 host04 host05 host06 host07 | grep multicast
host02 : multicast, xmt/rcv/%loss = 60/60/0%, min/avg/max/std-dev = 0.076/0.155/0.274/0.048
host03 : multicast, xmt/rcv/%loss = 60/60/0%, min/av
Yep, it was my fault ;)
You need to start that command on all nodes ;)
Then it works without problems.
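In other words, omping only reports multicast as working if the same command is running on all listed nodes at the same time, e.g.:
# start this simultaneously on host01 ... host07 (one SSH session per node)
root@hostXX:~# omping -c 60 -i 1 -q host01 host02 host03 host04 host05 host06 host07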
> On 22.05.2016 at 16:18, Michael Rasmussen wrote:
>
> On Sun, 22 May 2016 16:16:29 +0200
> Daniel Eschner wrote:
>
>> Is that correct?
>>
>> root@host01:~# om
, seq=1, size=69 bytes, dist=0, time=0.010ms
host01 : unicast, seq=2, size=69 bytes, dist=0, time=0.017ms
host01 : multicast, seq=2, size=69 bytes, dist=0, time=0.020ms
> On 22.05.2016 at 16:10, Daniel Eschner wrote:
>
> Mhh, maybe it could be a multicast problem :-(
>
> host0
what I can do.
> On 22.05.2016 at 15:55, Michael Rasmussen wrote:
>
> On Sun, 22 May 2016 15:47:59 +0200
> Daniel Eschner wrote:
>
>> Mhh
>>
>> Do I have a corosync problem with bonding maybe?
>>
> Looks more like a multicast problem to me.
>
> --
It's a typical network de
Mhh
Do I have a corosync problem with bonding maybe?
May 22 15:45:23 host01 pmxcfs[2046]: [status] notice: node lost quorum
May 22 15:45:23 host01 pmxcfs[2046]: [dcdb] crit: received write while not quorate - trigger resync
May 22 15:45:23 host01 pmxcfs[2046]: [dcdb] crit: leaving CPG group
May 22 15
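When pmxcfs reports "node lost quorum" like this, the cluster and corosync state can be checked directly on the affected node, for example:
root@host01:~# pvecm status                 # membership and quorum information
root@host01:~# corosync-quorumtool -s       # corosync's own view of the quorum
root@host01:~# systemctl status corosync pve-cluster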
I hope so ;)
Just one node out of 10 is making trouble :-(
Is there any way to test it easily?
> On 22.05.2016 at 14:12, Michael Rasmussen wrote:
>
> Have you verified that multicast is working properly?
>
> On May 22, 2016 1:50:16 PM GMT+02:00, Daniel Eschner
> wrote:
That's what is not working :-(
I am reinstalling the whole cluster now, again :-(
Third try :-(
> On 22.05.2016 at 07:05, Dietmar Maurer wrote:
>
>> Is it possible to force delnode? Don't know why, but it seems I have a lot of
>> trouble with the Proxmox cluster.
>>
>> Nothing has happened for the last 30 minutes
Hi there,
Is it possible to force delnode? Don't know why, but it seems I have a lot of
trouble with the Proxmox cluster.
Nothing has happened for the last 30 minutes:
root@host01:~# pvecm delnode host09
Cheers
Daniel
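For what it's worth, pvecm delnode usually only hangs like this when the remaining cluster has no quorum; a common (but risky) workaround is to temporarily lower the expected votes on one remaining node, roughly:
root@host01:~# pvecm status        # check whether the cluster is quorate
root@host01:~# pvecm expected 1    # force quorum on this node - use with care
root@host01:~# pvecm delnode host09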
Hi there,
I have set up a small Proxmox cluster with 2 bonded interfaces. Here is my
config:
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup
auto
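The config is cut off here; on a Proxmox node the missing part would typically be a bridge on top of the bond, roughly like this (addresses are placeholders):
auto vmbr0
iface vmbr0 inet static
        address 10.0.2.101
        netmask 255.255.255.0
        gateway 10.0.2.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0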