Happy to say that I just passed this "Get local VM IP" step
There were a lot of leftovers from previous failed attempts (cf. the log I sent
earlier: "internal error: Failed to autostart storage pool...").
Those were not cleaned up by ovirt-hosted-engine-cleanup.
I had to do the following so libvirt w
journalctl -u libvirtd.service :
Feb. 25 18:47:24 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopping
Virtualization daemon...
Feb. 25 18:47:24 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopped
Virtualization daemon.
Feb. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]:
On Mon, Feb 25, 2019 at 7:15 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
No, as indicated previously, still :
[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
Expiry Time          MAC address          Protocol   IP address          Hostname   Client ID or DUID
-----------------------------------------------------------------------------------------------------
On Mon, Feb 25, 2019 at 7:04 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
I still can't connect with VNC remotely, but locally with X forwarding it
works.
However, my connection's latency is too high for that to be usable (I'm in
Japan, my hosts are in France, ~250 ms ping).
But I could see that the VM is booted!
And in the host's logs there is:
Feb. 25 18:51:12 vs-inf-int-kvm-f
On Mon, Feb 25, 2019 at 5:50 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
I did that but no success yet.
I see that the "Get local VM IP" task tries the following:

virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{
print $5 }' | cut -f1 -d'/'

However, while the task is running and the VM is running in qemu, "virsh -r
net-dhcp-leases default" never returns.
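The extraction that task does can be checked offline against a fabricated lease line (the MAC and IP below are made-up placeholders; the column layout mirrors virsh's net-dhcp-leases output, where the expiry timestamp spans two fields):

```shell
# Made-up lease line in the shape "virsh -r net-dhcp-leases default" prints;
# fields: date, time, MAC, protocol, IP/prefix, hostname, client ID.
lease="2019-02-25 13:46:50 00:16:3e:0a:0b:0c ipv4 192.168.122.57/24 HostedEngineLocal -"

# Same pipeline as the Ansible task: match the MAC, take field 5
# (the IP/prefix pair), strip the /prefix part.
echo "$lease" | grep -i 00:16:3e:0a:0b:0c | awk '{ print $5 }' | cut -f1 -d'/'
# → 192.168.122.57
```

So the task can only succeed once the lease table actually contains a row for the VM's MAC; an empty table yields an empty string no matter how many retries run.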
OK, try this: temporarily edit
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
around line 120, and in the "Get local VM IP" task change "retries: 50" to
"retries: 500" so that you have more time to debug it.
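If you prefer not to edit the file by hand, the same change can be scripted; this is a sketch assuming the task file literally contains a line ending in "retries: 50":

```shell
# Path from the message above; back up first, then bump the retry count.
f=/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
cp "$f" "$f.bak"
sed -i 's/retries: 50$/retries: 500/' "$f"
```

Remember to restore the .bak afterwards, since the file belongs to the ovirt-hosted-engine-setup package and a later update may overwrite it anyway.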
On Mon, Feb 25, 2019 at 4:20 PM
I retried after killing the remaining qemu process and
running ovirt-hosted-engine-cleanup.
The new attempt failed again at the same step. Then, after it fails, it
cleans up the temporary files (and the VM disk) but *qemu still runs!*:
[ INFO ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
[ ERROR ]
Something was definitely wrong; as indicated, the qemu process
for guest=HostedEngineLocal was running but the disk file did not exist
anymore...
No surprise I could not connect.
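A quick way to confirm whether such a leftover process survives cleanup is to filter the process list for the guest name seen in the ps output quoted in this thread (a sketch; the exact ps columns vary by system):

```shell
# List any qemu process still carrying the local bootstrap guest name;
# "grep -v grep" drops the grep process itself from the matches.
ps -ef | grep -F 'guest=HostedEngineLocal' | grep -v grep
# If a stale pid shows up after ovirt-hosted-engine-cleanup, kill it
# manually before retrying the deployment.
```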
I am retrying
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
On Mon, Feb 25, 2019 at 11:15 PM Guillaume
It fails too:
I made sure PermitTunnel=yes is set in the sshd config, but when I try to
connect to the forwarded port I get the following error in the ssh session
opened on the host:
[gpavese@sheepora-X230 ~]$ ssh -v -L 5900:
vs-inf-int-kvm-fr-301-210.hostics.fr:5900
r...@vs-inf-int-kvm-fr-301-210.hostics.fr
...
I made sure of everything and even stopped firewalld, but I still can't
connect:
[root@vs-inf-int-kvm-fr-301-210 ~]# cat
/var/run/libvirt/qemu/HostedEngineLocal.xml
[root@vs-inf-int-kvm-fr-301-210 ~]# netstat -pan | grep 59
tcp        0      0 127.0.0.1:5900          0.0.0.0:*
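That netstat line is the key detail: qemu is listening only on the loopback address, so a remote VNC client can never reach it directly; the ssh tunnel has to terminate on the host's own 127.0.0.1 (e.g. "ssh -L 5900:127.0.0.1:5900 root@<host>", then point the viewer at localhost:5900). Pulling the listen address out of such a line is a one-field awk job; the sample mirrors the output above:

```shell
# Sample netstat row shaped like the output above; field 4 is the
# local (listen) address.
line="tcp        0      0 127.0.0.1:5900          0.0.0.0:*               LISTEN"
echo "$line" | awk '{ print $4 }'
# → 127.0.0.1:5900
```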
Hey!
You can check under /var/run/libvirt/qemu/HostedEngine.xml
Search for 'vnc'.
From there you can look up the port on which the HE VM is available and
connect to it.
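A sketch of that lookup (the XML fragment here is a made-up example of the <graphics type='vnc' .../> element libvirt writes; the real file to grep is /var/run/libvirt/qemu/HostedEngine.xml on the host):

```shell
# Made-up <graphics> element in the shape libvirt writes for a VNC display.
xml="<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'/>"

# Pull out the port number between the quotes of port='...'.
echo "$xml" | grep -o "port='[0-9]*'" | cut -d"'" -f2
# → 5900
```

If the domain is still defined in libvirt, "virsh vncdisplay <domain>" should report the same display without grepping the XML.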
On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
On Mon, Feb 25, 2019 at 2:14 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
1) I am running in a nested env, but under libvirt/kvm on remote CentOS 7.4
hosts.
Please advise how to connect with VNC to the local HE VM. I see it's
running, but this is on a remote host, not my local machine:
qemu 13376 100 3.7 17679424 845216 ? Sl 12:46 85:08
/usr/libexec/qemu-kv
On Mon, Feb 25, 2019 at 1:14 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> HE deployment with "hosted-engine --deploy" fails at TASK
> [ovirt.hosted_engine_setup : Get local VM IP]
>
> See following Error :
>
> 2019-02-25 12:46:50,154+0100 INFO
> otopi.ovirt_hosted_engine_s
I had a somewhat related issue. An NFS domain I was using for an ISO
domain failed. It set off a sequence of constantly rebooting hosts
(when fencing was enabled) and constantly deactivating/reactivating
hosts (when I disabled fencing). For about a day, oVirt was completely
unusable. All be
On Wed, Feb 20, 2019 at 4:07 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> I am on the Trello board, but I can not create cards
>
Sorry about that; try using
https://trello.com/invite/b/5ZNJgPC3/f1b1826ee4902f348c44607765a15099/ovirt-431-test-day-1
to join.
I am on the Trello board, but I can not create cards
On Wed, Feb 20, 2019 at 10:05 PM Sandro Bonazzola
wrote:
On Tue, Feb 12, 2019 at 10:49 AM Sandro Bonazzola <
sbona...@redhat.com> wrote:
> Hi,
> We are planning to release the first candidate of 4.3.1 on February
> 20th[1] and the final release on February 26th.
> Please join us in testing this release candidate right after it is
> announ
Hi,
Well, there is a severe bug that I complained about on 4.2 (or 4.1? I
don't remember) regarding "yanking the power cable".
Basically I'm performing a simple test: kill all hosts at once to
simulate a power loss without a UPS.
For this test I have 2 nodes, and 4 storage domains: