[ovirt-users] Re: template permissions not inherited (4.3.4)

2019-07-23 Thread Lucie Leistnerova

Hi Christoph,

On 7/22/19 2:28 PM, Timmi wrote:

Hi oVirt List,

I have just a quick question: should I open a ticket for this, or am I
doing something wrong?


I created a new VM template with specific permissions in addition to the
system-wide permissions. When I create a new VM from the template, I notice
that only the system permissions are copied to the permissions of the new
VM.


Is this the intended behavior? I was somehow under the impression that the
permissions from the template would be copied to the newly created VM.


Did you check, by creating a VM, whether the permissions were copied?

I've tested oVirt 4.3.5: I added UserRole and a custom role on the template
for a test user, and the newly created VM contained both roles for that
user. Is this the case you mean?
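
If it helps to compare, the effective permissions can also be inspected over
the REST API - just a sketch, with placeholder engine FQDN, credentials and
IDs:

  # permissions defined on the template
  curl -s -k -u admin@internal:PASSWORD -H 'Accept: application/xml' \
    https://engine.example.com/ovirt-engine/api/templates/TEMPLATE_ID/permissions
  # permissions on the VM created from it
  curl -s -k -u admin@internal:PASSWORD -H 'Accept: application/xml' \
    https://engine.example.com/ovirt-engine/api/vms/VM_ID/permissions

Comparing the two listings shows which role assignments were copied at
creation time.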

Tested with Version 4.3.4.3-1.el7

Best regards
Christoph

Best regards,

--
Lucie Leistnerova
Senior Quality Engineer, QE Cloud, RHVM
Red Hat EMEA

IRC: lleistne @ #rhev-qe


[ovirt-users] Re: Import VM from ovirt-exported ova

2019-07-23 Thread xilazz
Hello, I have encountered the same problem. May I ask whether you ever
solved it in the end?


[ovirt-users] Re: major network changes

2019-07-23 Thread carl langlois
If I try to access http://ovengine/ovirt-engine/services/health
I always get "Service Unavailable" in the browser, and each time I reload it
I get this in the error_log:

 [proxy_ajp:error] [pid 1868] [client 10.8.1.76:63512] AH00896: failed to
make connection to backend: 127.0.0.1
[Tue Jul 23 14:04:10.074023 2019] [proxy:error] [pid 1416] (111)Connection
refused: AH00957: AJP: attempt to connect to 127.0.0.1:8702 (127.0.0.1)
failed
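
A couple of things worth checking on the engine VM itself - a sketch; the
service name is the standard oVirt one and 8702 is the AJP port from the
log above:

  # is anything listening on the AJP port Apache proxies to?
  ss -tlnp | grep 8702
  # is the engine service actually up?
  systemctl status ovirt-engine
  # query the health servlet locally, bypassing external name resolution
  curl -v http://localhost/ovirt-engine/services/health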

Thanks & Regards

Carl


On Tue, Jul 23, 2019 at 12:59 PM carl langlois 
wrote:

> Hi,
> At one point we did have an issue with DNS resolution (mainly the reverse
> lookup), but that was fixed. Yes, the two networks can ping each other.
>
> Not sure how to multi-home the engine. I will do some research on that.
>
> I did find something in the error_log on the engine.
>
> In /etc/httpd/logs/error_log I always get these messages:
>
> [Tue Jul 23 11:21:52.430555 2019] [proxy:error] [pid 3189] AH00959: 
> ap_proxy_connect_backend disabling worker for (127.0.0.1) for 5s
> [Tue Jul 23 11:21:52.430562 2019] [proxy_ajp:error] [pid 3189] [client 
> 10.16.248.65:35154] AH00896: failed to make connection to backend: 127.0.0.1
>
> 10.16.248.65 is the new address of the host that was moved to the new
> network.
>
>
> Thanks & Regards
> Carl
>
>
>
>
> On Tue, Jul 23, 2019 at 11:52 AM Strahil  wrote:
>
>> According to another post on the mailing list, the Engine hosts (those
>> that have ovirt-ha-agent/ovirt-ha-broker running) check
>> http://{fqdn}/ovirt-engine/services/health
>>
>> As the IP has changed, I think you need to check the URL before and after
>> the migration.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Jul 23, 2019 16:41, Derek Atkins wrote:
>> >
>> > Hi,
>> >
>> > If I understand it correctly, the HE Hosts try to ping (or SSH, or
>> > otherwise reach) the Engine host.  If it reaches it, then it passes the
>> > liveness check. If it cannot reach it, then it fails.  So to me this
>> error
>> > means that there is some configuration, somewhere, that is trying to
>> reach
>> > the engine on the old address (which fails when the engine has the new
>> > address).
>> >
>> > I do not know where in the *host* configuration this data lives, so I
>> > cannot suggest where you need to change it.
>> >
>> > Can 10.16.248.x reach 10.8.236.x and vice-versa?
>> >
>> > Maybe multi-home the engine on both networks for now until you figure
>> it out?
>> >
>> > -derek
>> >
>> > On Tue, July 23, 2019 9:13 am, carl langlois wrote:
>> > > Hi,
>> > >
>> > > We have managed to stabilize the DNS updates in our network. The
>> > > current situation is:
>> > > I have 3 hosts that can run the engine (hosted-engine).
>> > > They were all in the 10.8.236.x network. Now I have moved one of them
>> > > to the 10.16.248.x network.
>> > >
>> > > If I boot the engine on one of the hosts in the 10.8.236.x network,
>> > > the engine comes up with status "good". I can access the engine UI,
>> > > and I can see all my hosts, even the one in the 10.16.248.x network.
>> > >
>> > > But if I boot the engine on the hosted-engine host that was switched
>> > > to the 10.16.248.x network, the engine boots and I can ssh to it, but
>> > > the status is always "fail for liveliness check".
>> > > The main difference is that when I boot on the host in the 10.16.248.x
>> > > network, the engine gets an address in the 248.x network.
>> > >
>> > > On the engine I have this in
>> > > /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
>> > > 2019-07-23 09:05:30|MFzehi|YYTDiS|jTq2w8|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
>> > > The engine.log seems okay.
>> > >
>> > > So I need to understand what this "liveliness check" does (or tries to
>> > > do), so I can investigate why the engine status is not becoming good.
>> > >
>> > > The initial deployment was done in the 10.8.236.x network. Maybe it
>> > > has something to do with that.
>> > >
>> > > Thanks & Regards
>> > >
>> > > Carl
>> > >
>> > > On Thu, Jul 18, 2019 at 8:53 AM Miguel Duarte de Mora Barroso <
>> > > mdbarr...@redhat.com> wrote:
>> > >
>> > >> On Thu, Jul 18, 2019 at 2:50 PM Miguel Duarte de Mora Barroso
>> > >>  wrote:
>> > >> >
>> > >> > On Thu, Jul 18, 2019 at 1:57 PM carl langlois <
>> crl.langl...@gmail.com>
>> > >> wrote:
>> > >> > >
>> > >> > > Hi Miguel,
>> > >> > >
>> > >> > > I have managed to change the config for the ovn-controller
>> > >> > > with these commands:
>> > >> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=ssl:10.16.248.74:6642
>> > >> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.16.248.65
>> > >> > > and restarting the services.
>> > >> >
>> > >> > Yes, that's what the script is supposed to do, check [0].
>> > >> >
>>

[ovirt-users] Re: major network changes

2019-07-23 Thread carl langlois
Hi,
At one point we did have an issue with DNS resolution (mainly the reverse
lookup), but that was fixed. Yes, the two networks can ping each other.

Not sure how to multi-home the engine. I will do some research on that.

I did find something in the error_log on the engine.

In /etc/httpd/logs/error_log I always get these messages:

[Tue Jul 23 11:21:52.430555 2019] [proxy:error] [pid 3189] AH00959:
ap_proxy_connect_backend disabling worker for (127.0.0.1) for 5s
[Tue Jul 23 11:21:52.430562 2019] [proxy_ajp:error] [pid 3189] [client
10.16.248.65:35154] AH00896: failed to make connection to backend:
127.0.0.1

10.16.248.65 is the new address of the host that was moved to the new
network.


Thanks & Regards
Carl




On Tue, Jul 23, 2019 at 11:52 AM Strahil  wrote:

> According to another post on the mailing list, the Engine hosts (those
> that have ovirt-ha-agent/ovirt-ha-broker running) check
> http://{fqdn}/ovirt-engine/services/health
>
> As the IP has changed, I think you need to check the URL before and after
> the migration.
>
> Best Regards,
> Strahil Nikolov
>
> On Jul 23, 2019 16:41, Derek Atkins wrote:
> >
> > Hi,
> >
> > If I understand it correctly, the HE Hosts try to ping (or SSH, or
> > otherwise reach) the Engine host.  If it reaches it, then it passes the
> > liveness check. If it cannot reach it, then it fails.  So to me this
> error
> > means that there is some configuration, somewhere, that is trying to
> reach
> > the engine on the old address (which fails when the engine has the new
> > address).
> >
> > I do not know where in the *host* configuration this data lives, so I
> > cannot suggest where you need to change it.
> >
> > Can 10.16.248.x reach 10.8.236.x and vice-versa?
> >
> > Maybe multi-home the engine on both networks for now until you figure it
> out?
> >
> > -derek
> >
> > On Tue, July 23, 2019 9:13 am, carl langlois wrote:
> > > Hi,
> > >
> > > We have managed to stabilize the DNS updates in our network. The
> > > current situation is:
> > > I have 3 hosts that can run the engine (hosted-engine).
> > > They were all in the 10.8.236.x network. Now I have moved one of them
> > > to the 10.16.248.x network.
> > >
> > > If I boot the engine on one of the hosts in the 10.8.236.x network,
> > > the engine comes up with status "good". I can access the engine UI,
> > > and I can see all my hosts, even the one in the 10.16.248.x network.
> > >
> > > But if I boot the engine on the hosted-engine host that was switched
> > > to the 10.16.248.x network, the engine boots and I can ssh to it, but
> > > the status is always "fail for liveliness check".
> > > The main difference is that when I boot on the host in the 10.16.248.x
> > > network, the engine gets an address in the 248.x network.
> > >
> > > On the engine I have this in
> > > /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
> > > 2019-07-23 09:05:30|MFzehi|YYTDiS|jTq2w8|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
> > > The engine.log seems okay.
> > >
> > > So I need to understand what this "liveliness check" does (or tries to
> > > do), so I can investigate why the engine status is not becoming good.
> > >
> > > The initial deployment was done in the 10.8.236.x network. Maybe it
> > > has something to do with that.
> > >
> > > Thanks & Regards
> > >
> > > Carl
> > >
> > > On Thu, Jul 18, 2019 at 8:53 AM Miguel Duarte de Mora Barroso <
> > > mdbarr...@redhat.com> wrote:
> > >
> > >> On Thu, Jul 18, 2019 at 2:50 PM Miguel Duarte de Mora Barroso
> > >>  wrote:
> > >> >
> > >> > On Thu, Jul 18, 2019 at 1:57 PM carl langlois <
> crl.langl...@gmail.com>
> > >> wrote:
> > >> > >
> > >> > > Hi Miguel,
> > >> > >
> > >> > > I have managed to change the config for the ovn-controller
> > >> > > with these commands:
> > >> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=ssl:10.16.248.74:6642
> > >> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.16.248.65
> > >> > > and restarting the services.
> > >> >
> > >> > Yes, that's what the script is supposed to do, check [0].
> > >> >
> > >> > Not sure why running vdsm-tool didn't work for you.
> > >> >
> > >> > >
> > >> > > But even with this I still have the "fail for liveliness check"
> > >> > > when starting the oVirt engine. But one thing I noticed with our
> > >> > > new network is that the reverse DNS does not work (IP -> hostname).
> > >> > > The forward is working fine. I am trying to see with our IT why it
> > >> > > is not working.
> > >> >
> > >> > Do you guys use OVN? If not, you could disable the provider,
> install
> > >> > the hosted-engine VM, then, if needed, re-add / re-activate it .
> > >>
> > >> I'm assuming it fails for the same reason you've stated initially  -
> > >> i.e. ovn-controller is involved; if it is not, disregard this msg :)

[ovirt-users] Re: major network changes

2019-07-23 Thread Strahil
According to another post on the mailing list, the Engine hosts (those that
have ovirt-ha-agent/ovirt-ha-broker running) check
http://{fqdn}/ovirt-engine/services/health

As the IP has changed, I think you need to check the URL before and after
the migration.

Best Regards,
Strahil Nikolov

On Jul 23, 2019 16:41, Derek Atkins wrote:
>
> Hi, 
>
> If I understand it correctly, the HE Hosts try to ping (or SSH, or 
> otherwise reach) the Engine host.  If it reaches it, then it passes the 
> liveness check. If it cannot reach it, then it fails.  So to me this error 
> means that there is some configuration, somewhere, that is trying to reach 
> the engine on the old address (which fails when the engine has the new 
> address). 
>
> I do not know where in the *host* configuration this data lives, so I 
> cannot suggest where you need to change it. 
>
> Can 10.16.248.x reach 10.8.236.x and vice-versa? 
>
> Maybe multi-home the engine on both networks for now until you figure it out? 
>
> -derek 
>
> On Tue, July 23, 2019 9:13 am, carl langlois wrote: 
> > Hi, 
> > 
> > We have managed to stabilize the DNS updates in our network. The
> > current situation is:
> > I have 3 hosts that can run the engine (hosted-engine).
> > They were all in the 10.8.236.x network. Now I have moved one of them
> > to the 10.16.248.x network.
> >
> > If I boot the engine on one of the hosts in the 10.8.236.x network,
> > the engine comes up with status "good". I can access the engine UI,
> > and I can see all my hosts, even the one in the 10.16.248.x network.
> >
> > But if I boot the engine on the hosted-engine host that was switched
> > to the 10.16.248.x network, the engine boots and I can ssh to it, but
> > the status is always "fail for liveliness check".
> > The main difference is that when I boot on the host in the 10.16.248.x
> > network, the engine gets an address in the 248.x network.
> >
> > On the engine I have this in
> > /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
> > 2019-07-23 09:05:30|MFzehi|YYTDiS|jTq2w8|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
> > The engine.log seems okay.
> >
> > So I need to understand what this "liveliness check" does (or tries to
> > do), so I can investigate why the engine status is not becoming good.
> >
> > The initial deployment was done in the 10.8.236.x network. Maybe it
> > has something to do with that.
> >
> > Thanks & Regards
> >
> > Carl
> >
> > On Thu, Jul 18, 2019 at 8:53 AM Miguel Duarte de Mora Barroso < 
> > mdbarr...@redhat.com> wrote: 
> > 
> >> On Thu, Jul 18, 2019 at 2:50 PM Miguel Duarte de Mora Barroso 
> >>  wrote: 
> >> > 
> >> > On Thu, Jul 18, 2019 at 1:57 PM carl langlois  
> >> wrote: 
> >> > > 
> >> > > Hi Miguel, 
> >> > > 
> >> > > I have managed to change the config for the ovn-controller
> >> > > with these commands:
> >> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=ssl:10.16.248.74:6642
> >> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.16.248.65
> >> > > and restarting the services.
> >> > 
> >> > Yes, that's what the script is supposed to do, check [0]. 
> >> > 
> >> > Not sure why running vdsm-tool didn't work for you. 
> >> > 
> >> > > 
> >> > > But even with this I still have the "fail for liveliness check"
> >> > > when starting the oVirt engine. But one thing I noticed with our
> >> > > new network is that the reverse DNS does not work (IP -> hostname).
> >> > > The forward is working fine. I am trying to see with our IT why it
> >> > > is not working.
> >> > 
> >> > Do you guys use OVN? If not, you could disable the provider, install 
> >> > the hosted-engine VM, then, if needed, re-add / re-activate it . 
> >> 
> >> I'm assuming it fails for the same reason you've stated initially  - 
> >> i.e. ovn-controller is involved; if it is not, disregard this msg :) 
> >> > 
> >> > [0] - 
> >> https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/setup_ovn_controller.sh#L24
> >>  
> >> > 
> >> > > 
> >> > > Regards. 
> >> > > Carl 
> >>


[ovirt-users] USB turns off and wont come alive

2019-07-23 Thread Darin Schmidt
I have 2 USB controllers installed via a riser card plugged into an M.2 slot
(like those used for mining with GPUs). Everything works great for a long
time, then my Windows 10 VM freezes and I lose all USB activity. lspci still
sees the devices:

0a:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
0b:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
43:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
44:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller

I assume 0a and 0b are the same physical card, as each card is supposed to
have 2 channels; same with 43 and 44.

I tried, for both 43 and 44:

echo "1" > /sys/bus/pci/devices/\:43\:00.0/remove
echo "1" > /sys/bus/pci/rescan
echo "1" > /sys/bus/pci/devices/\:43\:00.0/reset

This did not work. I cannot determine if it's just the VM that's no longer
seeing/using the hardware, or if it's the hardware itself. I wonder if it's
a power-state thing as well? Nothing I plug into the card seems to be
recognized. Any suggestions? Rebooting the VM doesn't help either. It
appears the hardware is functioning, but anything you plug into it isn't
being detected.
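
Since you suspect a power-state issue: one thing worth ruling out is PCI
runtime power management - a sketch using the standard sysfs knobs, assuming
the usual 0000 PCI domain; adjust the addresses to your controllers:

  # "auto" means the kernel may runtime-suspend the controller
  cat /sys/bus/pci/devices/0000:43:00.0/power/control
  # pin the controller to full power, disabling runtime suspend
  echo on > /sys/bus/pci/devices/0000:43:00.0/power/control

If devices are detected again after that, runtime PM was suspending the
controllers.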


[ovirt-users] Re: major network changes

2019-07-23 Thread Strahil
Hi Carl,

I think there is another thread here related to the migration to another 
network.

As far as I know, the liveliness check tries to access oVirt's health page.
Does the new engine's IP have A/PTR records set up?

Also, check the engine logs, once the HostedEngine VM is up and running.
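
For example, something like this from one of the HE hosts - the FQDN and IP
are placeholders for the values in this thread:

  dig +short engine.example.com     # A record: should return the new IP
  dig +short -x 10.16.248.NN        # PTR record: should return the engine FQDN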

Best Regards,
Strahil Nikolov

On Jul 23, 2019 16:13, carl langlois wrote:
>
> Hi,
>
> We have managed to stabilize the DNS updates in our network. The current
> situation is:
> I have 3 hosts that can run the engine (hosted-engine).
> They were all in the 10.8.236.x network. Now I have moved one of them to
> the 10.16.248.x network.
>
> If I boot the engine on one of the hosts in the 10.8.236.x network, the
> engine comes up with status "good". I can access the engine UI, and I can
> see all my hosts, even the one in the 10.16.248.x network.
>
> But if I boot the engine on the hosted-engine host that was switched to
> the 10.16.248.x network, the engine boots and I can ssh to it, but the
> status is always "fail for liveliness check".
> The main difference is that when I boot on the host in the 10.16.248.x
> network, the engine gets an address in the 248.x network.
>
> On the engine I have this in
> /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
> 2019-07-23 09:05:30|MFzehi|YYTDiS|jTq2w8|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
> The engine.log seems okay.
>
> So I need to understand what this "liveliness check" does (or tries to
> do), so I can investigate why the engine status is not becoming good.
>
> The initial deployment was done in the 10.8.236.x network. Maybe it has
> something to do with that.
>
> Thanks & Regards
>
> Carl
>
> On Thu, Jul 18, 2019 at 8:53 AM Miguel Duarte de Mora Barroso 
>  wrote:
>>
>> On Thu, Jul 18, 2019 at 2:50 PM Miguel Duarte de Mora Barroso
>>  wrote:
>> >
>> > On Thu, Jul 18, 2019 at 1:57 PM carl langlois  
>> > wrote:
>> > >
>> > > Hi Miguel,
>> > >
>> > > I have managed to change the config for the ovn-controller
>> > > with these commands:
>> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=ssl:10.16.248.74:6642
>> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.16.248.65
>> > > and restarting the services.
>> >
>> > Yes, that's what the script is supposed to do, check [0].
>> >
>> > Not sure why running vdsm-tool didn't work for you.
>> >
>> > >
>> > > But even with this I still have the "fail for liveliness check" when
>> > > starting the oVirt engine. But one thing I noticed with our new
>> > > network is that the reverse DNS does not work (IP -> hostname). The
>> > > forward is working fine. I am trying to see with our IT why it is not
>> > > working.
>> >
>> > Do you guys use OVN? If not, you could disable the provider, install
>> > the hosted-engine VM, then, if needed, re-add / re-activate it .
>>
>> I'm assuming it fails for the same reason you've stated initially  -
>> i.e. ovn-controller is involved; if it is not, disregard this msg :)
>> >
>> > [0] - 
>> > https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/setup_ovn_controller.sh#L24
>> >
>> > >
>> > > Regards.
>> > > Carl
>> > >
>> > > On Thu, Jul 18, 2019 at 4:03 AM Miguel Duarte de Mora Barroso 
>> > >  wrote:
>> > >>
>> > >> On Wed, Jul 17, 2019 at 7:07 PM carl langlois  
>> > >> wrote:
>> > >> >
>> > >> > Hi
>> > >> > Here is the output of the command
>> > >> >
>> > >> > [root@ovhost1 ~]# vdsm-tool --vvverbose ovn-config 10.16.248.74 
>> > >> > ovirtmgmt
> >> > >> > MainThread::DEBUG::2019-07-17 13:02:52,…


[ovirt-users] Re: Active-Passive DR: mutual for different storage domains possible?

2019-07-23 Thread Gianluca Cecchi
On Mon, Jul 8, 2019 at 10:39 AM Eyal Shenitzky  wrote:

> I don't see any reason not to do it, as long as the SD replicas are
> separate storage domains.
> Just note that for the DR you should prepare a separate DC with a
> cluster.
>
> P.S. - I must admit that I didn't try this configuration - please share
> your results.
>
Thanks for your insights, Eyal.
I'm going ahead with the tests.
One question arose after creating disaster_recovery_maps.yml and the need
to populate all the "secondary_xxx" variable mappings.
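
For context, this is the shape of the mapping I mean - an illustrative
fragment only; the variable names follow the ovirt-ansible-disaster-recovery
examples and the values are placeholders from my scenario:

  dr_sites_primary_url: https://engine-site-a.example.com/ovirt-engine/api
  dr_sites_secondary_url: https://engine-site-b.example.com/ovirt-engine/api
  dr_import_storages:
  - dr_domain_type: nfs
    dr_primary_name: SD1
    dr_primary_dc_name: DC1
    dr_secondary_name: SD1
    dr_secondary_dc_name: DC2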

In my scenario the primary DC DC1 in Site A has the same network
configuration as the primary DC DC2 in Site B.
In fact the main goal is to get better utilization of the available
resources, so VMs in DC1 potentially communicate with VMs in DC2 under
normal conditions.
Now, to configure DR, I have to create a mapping of DC1 in Site B: if I
want to leverage the hosts' resources in Site B, I'm forced to map it to
DC2, correct? That is the current primary for its storage domain SD2;
otherwise I will have no hosts to assign to the cluster inside it. What is
the risk of overlapping objects in this case (supposing I personally take
care not to have VMs in DC1 with the same names as VMs in DC2, and the same
for storage domains' names)? Could I have an object, such as a disk ID,
that during import would collide with existing objects in the database? Or
will the engine re-create new IDs (for vNICs, disks, etc.) while importing
them?

Another scenario could be to create, inside the Site B environment, another
datacenter named DC1-DR. I think I would also have to create the same
logical networks as DC1 (and DC2, incidentally), and in case of DR take one
of the hosts out of DC2 and assign it to DC1-DR.

Opinions?

Thanks in advance,
Gianluca


[ovirt-users] Re: major network changes

2019-07-23 Thread Derek Atkins
Hi,

If I understand it correctly, the HE Hosts try to ping (or SSH, or
otherwise reach) the Engine host.  If it reaches it, then it passes the
liveness check. If it cannot reach it, then it fails.  So to me this error
means that there is some configuration, somewhere, that is trying to reach
the engine on the old address (which fails when the engine has the new
address).

I do not know where in the *host* configuration this data lives, so I
cannot suggest where you need to change it.

Can 10.16.248.x reach 10.8.236.x and vice-versa?

Maybe multi-home the engine on both networks for now until you figure it out?

-derek

On Tue, July 23, 2019 9:13 am, carl langlois wrote:
> Hi,
>
> We have managed to stabilize the DNS updates in our network. The current
> situation is:
> I have 3 hosts that can run the engine (hosted-engine).
> They were all in the 10.8.236.x network. Now I have moved one of them to
> the 10.16.248.x network.
>
> If I boot the engine on one of the hosts in the 10.8.236.x network, the
> engine comes up with status "good". I can access the engine UI, and I can
> see all my hosts, even the one in the 10.16.248.x network.
>
> But if I boot the engine on the hosted-engine host that was switched to
> the 10.16.248.x network, the engine boots and I can ssh to it, but the
> status is always "fail for liveliness check".
> The main difference is that when I boot on the host in the 10.16.248.x
> network, the engine gets an address in the 248.x network.
>
> On the engine I have this in
> /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
> 2019-07-23 09:05:30|MFzehi|YYTDiS|jTq2w8|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
> The engine.log seems okay.
>
> So I need to understand what this "liveliness check" does (or tries to
> do), so I can investigate why the engine status is not becoming good.
>
> The initial deployment was done in the 10.8.236.x network. Maybe it has
> something to do with that.
>
> Thanks & Regards
>
> Carl
>
> On Thu, Jul 18, 2019 at 8:53 AM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
>
>> On Thu, Jul 18, 2019 at 2:50 PM Miguel Duarte de Mora Barroso
>>  wrote:
>> >
>> > On Thu, Jul 18, 2019 at 1:57 PM carl langlois 
>> wrote:
>> > >
>> > > Hi Miguel,
>> > >
>> > > I have managed to change the config for the ovn-controller
>> > > with these commands:
>> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=ssl:10.16.248.74:6642
>> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.16.248.65
>> > > and restarting the services.
>> >
>> > Yes, that's what the script is supposed to do, check [0].
>> >
>> > Not sure why running vdsm-tool didn't work for you.
>> >
>> > >
>> > > But even with this I still have the "fail for liveliness check" when
>> > > starting the oVirt engine. But one thing I noticed with our new
>> > > network is that the reverse DNS does not work (IP -> hostname). The
>> > > forward is working fine. I am trying to see with our IT why it is not
>> > > working.
>> >
>> > Do you guys use OVN? If not, you could disable the provider, install
>> > the hosted-engine VM, then, if needed, re-add / re-activate it .
>>
>> I'm assuming it fails for the same reason you've stated initially  -
>> i.e. ovn-controller is involved; if it is not, disregard this msg :)
>> >
>> > [0] -
>> https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/setup_ovn_controller.sh#L24
>> >
>> > >
>> > > Regards.
>> > > Carl
>> > >
>> > > On Thu, Jul 18, 2019 at 4:03 AM Miguel Duarte de Mora Barroso <
>> mdbarr...@redhat.com> wrote:
>> > >>
>> > >> On Wed, Jul 17, 2019 at 7:07 PM carl langlois
>> 
>> wrote:
>> > >> >
>> > >> > Hi
>> > >> > Here is the output of the command
>> > >> >
>> > >> > [root@ovhost1 ~]# vdsm-tool --vvverbose ovn-config 10.16.248.74
>> ovirtmgmt
>> > >> > MainThread::DEBUG::2019-07-17
>> 13:02:52,581::cmdutils::150::root::(exec_cmd) lshw -json -disable usb
>> -disable pcmcia -disable isapnp -disable ide -disable scsi -disable dmi
>> -disable memory -disable cpuinfo (cwd None)
>> > >> > MainThread::DEBUG::2019-07-17
>> 13:02:52,738::cmdutils::158::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
>> > >> > MainThread::DEBUG::2019-07-17
>> 13:02:52,741::routes::109::root::(get_gateway) The gateway 10.16.248.1
>> is
>> duplicated for the device ovirtmgmt
>> > >> > MainThread::DEBUG::2019-07-17
>> 13:02:52,742::routes::109::root::(get_gateway) The gateway 10.16.248.1
>> is
>> duplicated for the device ovirtmgmt
>> > >> > MainThread::DEBUG::2019-07-17
>> 13:02:52,742::cmdutils::150::root::(exec_cmd) /sbin/tc qdisc show (cwd
>> None)
>> > >> > MainThread::DEBUG::2019-07-17
>> 13:02:52,744::cmdutils::158::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
>> > >> > MainThread::DEBUG::2019-07-17
>> 13:02:52,745::cmdutils::150::root::(exec_cmd) /sbin/tc class show dev
>> enp2s0f1 classid 0:1388 (cwd None)
>> > >> > MainThrea

[ovirt-users] Re: major network changes

2019-07-23 Thread carl langlois
Hi,

We have managed to stabilize the DNS updates in our network. The current
situation is:
I have 3 hosts that can run the engine (hosted-engine).
They were all in the 10.8.236.x network. Now I have moved one of them to the
10.16.248.x network.

If I boot the engine on one of the hosts in the 10.8.236.x network, the
engine comes up with status "good". I can access the engine UI, and I can
see all my hosts, even the one in the 10.16.248.x network.

But if I boot the engine on the hosted-engine host that was switched to the
10.16.248.x network, the engine boots and I can ssh to it, but the status is
always "fail for liveliness check".
The main difference is that when I boot on the host in the 10.16.248.x
network, the engine gets an address in the 248.x network.

On the engine I have this in /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
2019-07-23 09:05:30|MFzehi|YYTDiS|jTq2w8|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can not sample data, oVirt Engine is not updating the statistics. Please check your oVirt Engine status.|9704
The engine.log seems okay.

So I need to understand what this "liveliness check" does (or tries to do),
so I can investigate why the engine status is not becoming good.

The initial deployment was done in the 10.8.236.x network. Maybe it has
something to do with that.

Thanks & Regards

Carl

On Thu, Jul 18, 2019 at 8:53 AM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> On Thu, Jul 18, 2019 at 2:50 PM Miguel Duarte de Mora Barroso
>  wrote:
> >
> > On Thu, Jul 18, 2019 at 1:57 PM carl langlois 
> wrote:
> > >
> > > Hi Miguel,
> > >
> > > I have managed to change the config for the ovn-controller
> > > with these commands:
> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=ssl:10.16.248.74:6642
> > >  ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=10.16.248.65
> > > and restarting the services.
> >
> > Yes, that's what the script is supposed to do, check [0].
> >
> > Not sure why running vdsm-tool didn't work for you.
> >
> > >
> > > But even with this I still have the "fail for liveliness check" when
> > > starting the oVirt engine. But one thing I noticed with our new network
> > > is that the reverse DNS does not work (IP -> hostname). The forward is
> > > working fine. I am trying to see with our IT why it is not working.
> >
> > Do you guys use OVN? If not, you could disable the provider, install
> > the hosted-engine VM, then, if needed, re-add / re-activate it .
>
> I'm assuming it fails for the same reason you've stated initially  -
> i.e. ovn-controller is involved; if it is not, disregard this msg :)
> >
> > [0] -
> https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/setup_ovn_controller.sh#L24
> >
> > >
> > > Regards.
> > > Carl
> > >
> > > On Thu, Jul 18, 2019 at 4:03 AM Miguel Duarte de Mora Barroso <
> mdbarr...@redhat.com> wrote:
> > >>
> > >> On Wed, Jul 17, 2019 at 7:07 PM carl langlois 
> wrote:
> > >> >
> > >> > Hi
> > >> > Here is the output of the command
> > >> >
> > >> > [root@ovhost1 ~]# vdsm-tool --vvverbose ovn-config 10.16.248.74
> ovirtmgmt
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,581::cmdutils::150::root::(exec_cmd) lshw -json -disable usb
> -disable pcmcia -disable isapnp -disable ide -disable scsi -disable dmi
> -disable memory -disable cpuinfo (cwd None)
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,738::cmdutils::158::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,741::routes::109::root::(get_gateway) The gateway 10.16.248.1 is
> duplicated for the device ovirtmgmt
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,742::routes::109::root::(get_gateway) The gateway 10.16.248.1 is
> duplicated for the device ovirtmgmt
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,742::cmdutils::150::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,744::cmdutils::158::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,745::cmdutils::150::root::(exec_cmd) /sbin/tc class show dev
> enp2s0f1 classid 0:1388 (cwd None)
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,747::cmdutils::158::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,766::cmdutils::150::root::(exec_cmd)
> /usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,777::cmdutils::158::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,778::vsctl::67::root::(commit) Executing commands:
> /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge --
> list Port -- list Interface
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,778::cmdutils::150::root::(exec_cmd) /usr/bin/ovs-vsctl
> --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list
> Interface (cwd None)
> > >> > MainThread::DEBUG::2019-07-17
> 13:02:52,799::cmdutils::158::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
> >

[ovirt-users] Attach the snapshot to the backup virtual machine and activate the disk

2019-07-23 Thread smidhunraj
This is a doubt regarding the REST API.

Can you please tell me what the response to the API request in step 4 would
be (will it be a success or an error)?
=

Attach the snapshot to the backup virtual machine and activate the disk:

 POST /api/vms/----/disks/ HTTP/1.1
 Accept: application/xml
 Content-type: application/xml

 <disk id="{disk-id}">
  <snapshot id="{snapshot-id}"/>
  <active>true</active>
 </disk>

==
 

What happens if we set up a bare VM without any backup mechanism on it and
try to attach the snapshot disk to it?
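
For completeness, step 4 expressed as a curl call - a sketch with
placeholder engine FQDN, credentials and IDs, matching the request above:

  curl -s -k -u admin@internal:PASSWORD \
    -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
    -X POST \
    -d '<disk id="DISK_ID"><snapshot id="SNAPSHOT_ID"/><active>true</active></disk>' \
    https://engine.example.com/api/vms/BACKUP_VM_ID/disks/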


[ovirt-users] Re: Error exporting into ova

2019-07-23 Thread Gianluca Cecchi
On Fri, Jul 19, 2019 at 5:59 PM Gianluca Cecchi 
wrote:

> On Fri, Jul 19, 2019 at 4:14 PM Gianluca Cecchi 
> wrote:
>
>> On Fri, Jul 19, 2019 at 3:15 PM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>>
>>>
>>> In engine.log the first error I see comes 30 minutes after the start:
>>>
>>> 2019-07-19 12:25:31,563+02 ERROR
>>> [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Ansible
>>> playbook execution failed: Timeout occurred while executing Ansible
>>> playbook.
>>>
>>
>> In the meantime: the playbook seems to be this one (I run the job from
>> the engine): /usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml
>>
>>
> Based on what is described in bugzilla
> https://bugzilla.redhat.com/show_bug.cgi?id=1697301
> I created, for the moment, the file
> /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
> with
> ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80
> then restarted the engine and re-ran the python script to verify.
>
> Just to see if it completes; in my case, with a 30 GB preallocated disk,
> the underlying problem is that the qemu-img convert command is very slow
> at I/O: it reads from iSCSI multipath (2 paths) at 2x3 MB/s and writes to
> NFS. If I run a dd command from the iSCSI device-mapper device to an NFS
> file I get a 140 MB/s rate, which is what I expect based on my storage
> array performance and my network.
>
> I have not understood why the qemu-img command is so slow.
> The question still applies in case I have to build an appliance from a VM
> with a very big disk, where the copy could potentially take more than
> 30 minutes...
> Gianluca
>
>
I confirm that setting ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT was the
solution. The OVA export completed:

Starting to export Vm enginecopy1 as a Virtual Appliance 7/19/19 5:53:05 PM
Vm enginecopy1 was exported successfully as a Virtual Appliance to path
/save_ova/base/dump/myvm2.ova on Host ov301 7/19/19 6:58:07 PM
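
For reference, the override that did the trick, exactly as described above
(per the bugzilla the value is in minutes, which matches the 30-minute
failures seen earlier):

  # /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
  ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80

followed by a restart of ovirt-engine.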

I still have to understand why the conversion of the preallocated disk is so
slow, because simulating I/O from the iSCSI LUN where the VM disks live to
the NFS share gives me about 110 MB/s.
I'm going to update to 4.3.4, just to see whether any relevant bug has been
fixed. The same operation on vSphere takes about 5 minutes.
What is the ETA for 4.3.5?

One note:
if I manually create a snapshot of the same VM and then clone the snapshot,
the process is this one:
vdsm  5713 20116  6 10:50 ?00:00:04 /usr/bin/qemu-img convert
-p -t none -T none -f raw
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b
-O raw -W
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/d13a5c43-0138-4cbb-b663-f3ad5f9f5983/fd4e1b08-15fe-45ee-ab12-87dea2d29bc4

and its speed is considerably better (up to 100 MB/s read and 100 MB/s
write), with a total elapsed time of 6 minutes and 30 seconds.

During the OVA generation the process was instead:
 vdsm 13505 13504  3 14:24 ?00:01:26 qemu-img convert -T none
-O qcow2
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b
/dev/loop0

Could the "-O qcow2" be the reason? And why qcow2, if the origin is
preallocated (raw)?

Gianluca