[ovirt-users] Re: Cluster upgrade
Well, never did that before ... Anyway, thanks for pointing that out. Best regards, Misak Khachatryan On Thu, Feb 7, 2019 at 3:32 PM Martin Perina wrote: > > > On Thu, Feb 7, 2019 at 12:24 PM Misak Khachatryan > wrote: > >> Thanks Martin, >> >> you are right, on all hosts which i did upgrade ovirt-4.3 repo is not >> present. Seems like a bug. >> > > This is not a bug, you need to update repos on hosts manually prior to the > upgrade. > >> >> Best regards, >> Misak Khachatryan >> >> >> On Tue, Feb 5, 2019 at 10:03 PM Martin Perina wrote: >> >>> >>> >>> On Tue, 5 Feb 2019, 14:54 Misak Khachatryan >> >>>> Hi, >>>> >>>> I've successfully upgraded to 4.3, but when I'm trying to upgrade >>>> Cluster version I'm getting this: >>>> >>>> "Error while executing action: Cannot change Cluster Compatibility >>>> Version to higher version when there are active Hosts with lower version. >>>> -Please move Host virt2 with lower version to maintenance first." >>>> >>> >>> It seems that on host virt2 you have installed VDSM which doesn't >>> support higher cluster version. Please try to upgrade the host before >>> upgrading the cluster. >>> >>> >>>> Any clues? >>>> >>>> Best regards, >>>> Misak Khachatryan >>>> ___ >>>> Users mailing list -- users@ovirt.org >>>> To unsubscribe send an email to users-le...@ovirt.org >>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >>>> oVirt Code of Conduct: >>>> https://www.ovirt.org/community/about/community-guidelines/ >>>> List Archives: >>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6R64SOEIYXFGTDTWOWZDHREJDUKL6IEP/ >>>> >>> > > -- > Martin Perina > Associate Manager, Software Engineering > Red Hat Czech s.r.o. 
> ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3KHEOJKEMCGVSDMLIH3EUMYFTR43OZSO/
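For anyone hitting the same wall: the manual repo update Martin refers to can be sketched roughly as follows (a sketch only, assuming CentOS 7 hosts and the standard oVirt release RPM location; check the 4.3 release notes for the exact package name):

```shell
# Run on each host, one at a time, before raising the cluster
# compatibility version in the engine:

# 1. Move the host to maintenance from the engine UI (or via the API).

# 2. Install the oVirt 4.3 release package, which adds the ovirt-4.3 repos:
yum install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm

# 3. Upgrade the host packages and reboot:
yum update -y
reboot

# 4. Activate the host again. Once every host runs the newer VDSM,
#    the cluster compatibility version can be raised.
```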
[ovirt-users] Re: Cluster upgrade
Thanks Martin, you are right, on all hosts which i did upgrade ovirt-4.3 repo is not present. Seems like a bug. Best regards, Misak Khachatryan On Tue, Feb 5, 2019 at 10:03 PM Martin Perina wrote: > > > On Tue, 5 Feb 2019, 14:54 Misak Khachatryan >> Hi, >> >> I've successfully upgraded to 4.3, but when I'm trying to upgrade Cluster >> version I'm getting this: >> >> "Error while executing action: Cannot change Cluster Compatibility >> Version to higher version when there are active Hosts with lower version. >> -Please move Host virt2 with lower version to maintenance first." >> > > It seems that on host virt2 you have installed VDSM which doesn't support > higher cluster version. Please try to upgrade the host before upgrading the > cluster. > > >> Any clues? >> >> Best regards, >> Misak Khachatryan >> ___ >> Users mailing list -- users@ovirt.org >> To unsubscribe send an email to users-le...@ovirt.org >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >> oVirt Code of Conduct: >> https://www.ovirt.org/community/about/community-guidelines/ >> List Archives: >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6R64SOEIYXFGTDTWOWZDHREJDUKL6IEP/ >> > ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KWLDFPZOFGMYBFDZMBDQ3OVRRLX27NLC/
[ovirt-users] Cluster upgrade
Hi, I've successfully upgraded to 4.3, but when I'm trying to upgrade Cluster version I'm getting this: "Error while executing action: Cannot change Cluster Compatibility Version to higher version when there are active Hosts with lower version. -Please move Host virt2 with lower version to maintenance first." Any clues? Best regards, Misak Khachatryan ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/6R64SOEIYXFGTDTWOWZDHREJDUKL6IEP/
Re: [ovirt-users] VM paused due unknown storage error
Bump. Best regards, Misak Khachatryan On Wed, Jan 31, 2018 at 2:28 PM, Misak Khachatryan wrote: > And sorry - yes, all hosts are active. > > Best regards, > Misak Khachatryan > > > On Wed, Jan 31, 2018 at 9:17 AM, Sahina Bose wrote: >> Could you provide the output of "gluster volume status" and the gluster >> mount logs to check further? >> Are all the host shown as active in the engine (that is, is the monitoring >> working?) >> >> On Wed, Jan 31, 2018 at 1:07 AM, Misak Khachatryan wrote: >>> >>> Hi, >>> >>> After upgrade to 4.2 i'm getting "VM paused due unknown storage >>> error". When i was upgrading i had some gluster problem with one of >>> the hosts, which i was fixed readding it to gluster peers. Now i see >>> something weir in bricks configuration, see attachment - one of the >>> bricks uses 0% of space. >>> >>> How I can diagnose this? Nothing wrong in logs as I can see. >>> >>> >>> >>> >>> Best regards, >>> Misak Khachatryan >>> >>> ___ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >> ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
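The checks Sahina asks for can be gathered in one go; a rough sketch (volume and log names are placeholders, adjust to the actual setup):

```shell
# Run on any gluster peer in the hyperconverged cluster:
gluster peer status               # every peer should show "Peer in Cluster (Connected)"
gluster volume status             # every brick should be Online with a valid PID
gluster volume heal engine info   # replace "engine" with the affected volume;
                                  # a growing entry count points at an out-of-sync brick

# The gluster mount logs requested by Sahina live on the hypervisors, e.g.:
ls /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log
```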
Re: [ovirt-users] VM paused due unknown storage error
And sorry - yes, all hosts are active. Best regards, Misak Khachatryan On Wed, Jan 31, 2018 at 9:17 AM, Sahina Bose wrote: > Could you provide the output of "gluster volume status" and the gluster > mount logs to check further? > Are all the host shown as active in the engine (that is, is the monitoring > working?) > > On Wed, Jan 31, 2018 at 1:07 AM, Misak Khachatryan wrote: >> >> Hi, >> >> After upgrade to 4.2 i'm getting "VM paused due unknown storage >> error". When i was upgrading i had some gluster problem with one of >> the hosts, which i was fixed readding it to gluster peers. Now i see >> something weir in bricks configuration, see attachment - one of the >> bricks uses 0% of space. >> >> How I can diagnose this? Nothing wrong in logs as I can see. >> >> >> >> >> Best regards, >> Misak Khachatryan >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] VM paused due unknown storage error
Hi,

After the upgrade to 4.2 I'm getting "VM paused due unknown storage error". While upgrading I had a gluster problem with one of the hosts, which I fixed by re-adding it to the gluster peers. Now I see something weird in the bricks configuration, see attachment - one of the bricks uses 0% of space.

How can I diagnose this? Nothing looks wrong in the logs as far as I can see.

Best regards,
Misak Khachatryan
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] OVS error logs after upgrade to 4.2
Hello Marcin, Thank you for the info! Best regards, Misak Khachatryan On Thu, Dec 28, 2017 at 4:26 PM, Marcin Mirecki wrote: > Hello Misak, > > The openvswitch team tells me this is a known ovs problem. > It is fixed by patch: > https://github.com/openvswitch/ovs/commit/bbf219ef584a102fde5150defab3a769a6a44981 > merged in master/branch 2.8/branch 2.7. > Looking at git history this is not yet released. It should be included in > 2.7.4 when it's out. > > Thanks, > Marcin > > > > On Thu, Dec 28, 2017 at 10:53 AM, Misak Khachatryan > wrote: >> >> Hi Mor, >> >> submitted: https://bugzilla.redhat.com/show_bug.cgi?id=1529481 >> >> I've collected logs but they are 1662.7M in size. >> >> Best regards, >> Misak Khachatryan >> >> >> On Wed, Dec 27, 2017 at 6:44 PM, Mor Kalfon wrote: >> > Hello Misak, >> > >> > Could you please file a bug about those error messages that you receive >> > from >> > OVS? >> > You can use the log collector tool >> > >> > (https://www.ovirt.org/documentation/admin-guide/chap-Utilities/#the-log-collector-tool) >> > which gathers all the required logs for us to investigate this issue. >> > >> > Thanks for reporting this issue! >> > >> > On Wed, Dec 27, 2017 at 11:03 AM, Misak Khachatryan >> > wrote: >> >> >> >> On Wed, Dec 27, 2017 at 12:42 PM, Dan Kenigsberg >> >> wrote: >> >> > On Wed, Dec 27, 2017 at 8:49 AM, Misak Khachatryan >> >> > wrote: >> >> >> Hi, >> >> >> >> >> >> It's not on log file, it's from automatic email sent by cron daemon. >> >> >> This one from logrotate. >> >> > >> >> > Would you file a bug about this daily logrotate spam? >> >> > >> >> >> >> Sure, will do. >> >> >> >> >> >> >> >> I'd like to migrate my network to OVS, but as i can't find any guide >> >> >> for that, it's a bit scary. >> >> > >> >> > Why would you like to do that? OVN is useful for big deployments, >> >> > that >> >> > have many isolated networks. It is not universally recommended, as it >> >> > uses more CPU. 
>> >> > >> >> >> >> No particular reason, thought that will be future in oVIRT networking, >> >> also i work in relatively big ISP with many PoP and DCs in many >> >> cities. And I'm interested to try it some time. >> >> >> >> >> >> >> >> Best regards, >> >> >> Misak Khachatryan >> >> >> >> >> >> >> >> >> On Tue, Dec 26, 2017 at 3:29 PM, Dan Kenigsberg >> >> >> wrote: >> >> >>> On Tue, Dec 26, 2017 at 8:35 AM, Misak Khachatryan >> >> >>> >> >> >>> wrote: >> >> >>>> Hi, >> >> >>>> >> >> >>>> After upgrade to 4.2 I started getting this error from engine: >> >> >>>> >> >> >>>> /etc/cron.daily/logrotate: >> >> >>>> >> >> >>>> 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to >> >> >>>> /var/run/openvswitch/ovnnb_db.19883.ctl >> >> >>>> ovs-appctl: cannot connect to >> >> >>>> "/var/run/openvswitch/ovnnb_db.19883.ctl" (No such file or >> >> >>>> directory) >> >> >>>> 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to >> >> >>>> /var/run/openvswitch/ovnsb_db.19891.ctl >> >> >>>> ovs-appctl: cannot connect to >> >> >>>> "/var/run/openvswitch/ovnsb_db.19891.ctl" (No such file or >> >> >>>> directory) >> >> >>>> >> >> >>>> >> >> >>>> Seems harmless as i don't use OVS, but how to fix it? >> >> >>> >> >> >>> By default, ovirt-4.2 installs and configure OVN (which uses OVS). >> >> >>> You >> >> >>> can turn it off on Engine host by running >> >> >>> systemctl stop ovirt-provider-ovn ovn-northd openvswitch >> >> > >> >> > did you try that? >> >> > >> >> >> >> No, but is correct way to disable it completely? >> >> >> >> >>> >> >> >>> but I'd appreciate your help to understand in which log file do you >> >> >>> see these warnings? >> >> > >> >> >>> Have you already disabled openvswitch? >> >> > >> >> > have you ^^ ? >> >> >> >> No, what is a correct way to do it? 
>> > >> > >> > >> > >> > -- >> > Mor Kalfon >> > RHV Networking Team >> > Red Hat IL-Raanana >> > Tel: +972-54-6514148 >> > > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
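Putting Dan's and Marcin's suggestions from this thread together, the two paths look roughly like this (a sketch; service names as shipped with oVirt 4.2):

```shell
# Path A: OVN is not needed - stop and disable it on the engine host:
systemctl stop ovirt-provider-ovn ovn-northd openvswitch
systemctl disable ovirt-provider-ovn ovn-northd openvswitch

# Path B: OVN should keep running - verify its databases are up:
ps -ef | grep -E 'ovn[ns]b_db'
# ...and start them if they are not:
/usr/share/openvswitch/scripts/ovn-ctl start_ovsdb
```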
Re: [ovirt-users] OVS error logs after upgrade to 4.2
Hi Mor, submitted: https://bugzilla.redhat.com/show_bug.cgi?id=1529481 I've collected logs but they are 1662.7M in size. Best regards, Misak Khachatryan On Wed, Dec 27, 2017 at 6:44 PM, Mor Kalfon wrote: > Hello Misak, > > Could you please file a bug about those error messages that you receive from > OVS? > You can use the log collector tool > (https://www.ovirt.org/documentation/admin-guide/chap-Utilities/#the-log-collector-tool) > which gathers all the required logs for us to investigate this issue. > > Thanks for reporting this issue! > > On Wed, Dec 27, 2017 at 11:03 AM, Misak Khachatryan > wrote: >> >> On Wed, Dec 27, 2017 at 12:42 PM, Dan Kenigsberg >> wrote: >> > On Wed, Dec 27, 2017 at 8:49 AM, Misak Khachatryan >> > wrote: >> >> Hi, >> >> >> >> It's not on log file, it's from automatic email sent by cron daemon. >> >> This one from logrotate. >> > >> > Would you file a bug about this daily logrotate spam? >> > >> >> Sure, will do. >> >> >> >> >> I'd like to migrate my network to OVS, but as i can't find any guide >> >> for that, it's a bit scary. >> > >> > Why would you like to do that? OVN is useful for big deployments, that >> > have many isolated networks. It is not universally recommended, as it >> > uses more CPU. >> > >> >> No particular reason, thought that will be future in oVIRT networking, >> also i work in relatively big ISP with many PoP and DCs in many >> cities. And I'm interested to try it some time. 
>> >> >> >> >> Best regards, >> >> Misak Khachatryan >> >> >> >> >> >> On Tue, Dec 26, 2017 at 3:29 PM, Dan Kenigsberg >> >> wrote: >> >>> On Tue, Dec 26, 2017 at 8:35 AM, Misak Khachatryan >> >>> wrote: >> >>>> Hi, >> >>>> >> >>>> After upgrade to 4.2 I started getting this error from engine: >> >>>> >> >>>> /etc/cron.daily/logrotate: >> >>>> >> >>>> 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to >> >>>> /var/run/openvswitch/ovnnb_db.19883.ctl >> >>>> ovs-appctl: cannot connect to >> >>>> "/var/run/openvswitch/ovnnb_db.19883.ctl" (No such file or directory) >> >>>> 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to >> >>>> /var/run/openvswitch/ovnsb_db.19891.ctl >> >>>> ovs-appctl: cannot connect to >> >>>> "/var/run/openvswitch/ovnsb_db.19891.ctl" (No such file or directory) >> >>>> >> >>>> >> >>>> Seems harmless as i don't use OVS, but how to fix it? >> >>> >> >>> By default, ovirt-4.2 installs and configure OVN (which uses OVS). You >> >>> can turn it off on Engine host by running >> >>> systemctl stop ovirt-provider-ovn ovn-northd openvswitch >> > >> > did you try that? >> > >> >> No, but is correct way to disable it completely? >> >> >>> >> >>> but I'd appreciate your help to understand in which log file do you >> >>> see these warnings? >> > >> >>> Have you already disabled openvswitch? >> > >> > have you ^^ ? >> >> No, what is a correct way to do it? > > > > > -- > Mor Kalfon > RHV Networking Team > Red Hat IL-Raanana > Tel: +972-54-6514148 > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] OVS error logs after upgrade to 4.2
Hi Marcin,

Here is the output:

[root@ovirt-engine ~]# ps -ef | grep 'ovnsb_db\|ovnnb_db'
root 19883 19882 0 Dec20 ? 00:01:22 ovsdb-server --detach --monitor -vconsole:off --log-file=/var/log/openvswitch/ovsdb-server-nb.log --remote=punix:/run/openvswitch/ovnnb_db.sock --pidfile=/run/openvswitch/ovnnb_db.pid --remote=db:OVN_Northbound,NB_Global,connections --unixctl=ovnnb_db.ctl --private-key=db:OVN_Northbound,SSL,private_key --certificate=db:OVN_Northbound,SSL,certificate --ca-cert=db:OVN_Northbound,SSL,ca_cert /var/lib/openvswitch/ovnnb_db.db
root 19891 19890 0 Dec20 ? 00:01:23 ovsdb-server --detach --monitor -vconsole:off --log-file=/var/log/openvswitch/ovsdb-server-sb.log --remote=punix:/run/openvswitch/ovnsb_db.sock --pidfile=/run/openvswitch/ovnsb_db.pid --remote=db:OVN_Southbound,SB_Global,connections --unixctl=ovnsb_db.ctl --private-key=db:OVN_Southbound,SSL,private_key --certificate=db:OVN_Southbound,SSL,certificate --ca-cert=db:OVN_Southbound,SSL,ca_cert /var/lib/openvswitch/ovnsb_db.db
root 19897 19896 0 Dec20 ? 00:00:00 ovn-northd -vconsole:emer -vsyslog:err -vfile:info --ovnnb-db=unix:/run/openvswitch/ovnnb_db.sock --ovnsb-db=unix:/run/openvswitch/ovnsb_db.sock --no-chdir --log-file=/var/log/openvswitch/ovn-northd.log --pidfile=/run/openvswitch/ovn-northd.pid --detach --monitor
root 30800 30786 0 11:29 pts/0 00:00:00 grep --color=auto ovnsb_db\|ovnnb_db
[root@ovirt-engine ~]#

And here is the list of files in /var/run/openvswitch/:

[root@ovirt-engine ~]# ll /var/run/openvswitch/
total 20
srwxr-x---. 1 root root 0 Dec 20 20:32 db.sock
srwxr-x---. 1 root root 0 Dec 20 20:32 ovnnb_db.ctl
-rw-r--r--. 1 root root 6 Dec 20 20:32 ovnnb_db.pid
srwxr-x---. 1 root root 0 Dec 20 20:32 ovnnb_db.sock
srwxr-x---. 1 root root 0 Dec 20 20:32 ovn-northd.19897.ctl
-rw-r--r--. 1 root root 6 Dec 20 20:32 ovn-northd.pid
srwxr-x---. 1 root root 0 Dec 20 20:32 ovnsb_db.ctl
-rw-r--r--. 1 root root 6 Dec 20 20:32 ovnsb_db.pid
srwxr-x---. 1 root root 0 Dec 20 20:32 ovnsb_db.sock
srwxr-x---. 1 root root 0 Dec 20 20:32 ovsdb-server.19806.ctl
-rw-r--r--. 1 root root 6 Dec 20 20:32 ovsdb-server.pid
srwxr-x---. 1 root root 0 Dec 20 20:32 ovs-vswitchd.19845.ctl
-rw-r--r--. 1 root root 6 Dec 20 20:32 ovs-vswitchd.pid

Best regards,
Misak Khachatryan

On Wed, Dec 27, 2017 at 8:11 PM, Marcin Mirecki wrote:
> Hello Misak,
>
> This error hints that your ovn databases are not up (or at least can not be
> connected to).
> Could you please check if the following command gives any output:
>
> ps -ef|grep 'ovnsb_db\|ovnnb_db'
>
> The databases can be started (if not active) using:
> /usr/share/openvswitch/scripts/ovn-ctl start_ovsdb
>
> Could you please also check the content of:
> /var/run/openvswitch/
>
> The filenames don't look normal (usually they do not contain any numbers as
> part of them), please give me some time to check this.
>
> Thanks,
> Marcin
>
> On Wed, Dec 27, 2017 at 3:44 PM, Mor Kalfon wrote:
>> Hello Misak,
>>
>> Could you please file a bug about those error messages that you receive
>> from OVS?
>> You can use the log collector tool
>> (https://www.ovirt.org/documentation/admin-guide/chap-Utilities/#the-log-collector-tool)
>> which gathers all the required logs for us to investigate this issue.
>>
>> Thanks for reporting this issue!
>>
>> On Wed, Dec 27, 2017 at 11:03 AM, Misak Khachatryan wrote:
>>> On Wed, Dec 27, 2017 at 12:42 PM, Dan Kenigsberg wrote:
>>> > On Wed, Dec 27, 2017 at 8:49 AM, Misak Khachatryan wrote:
>>> >> Hi,
>>> >>
>>> >> It's not on log file, it's from automatic email sent by cron daemon.
>>> >> This one from logrotate.
>>> >
>>> > Would you file a bug about this daily logrotate spam?
>>> >
>>> Sure, will do.
>>>
>>> >> I'd like to migrate my network to OVS, but as i can't find any guide
>>> >> for that, it's a bit scary.
>>> >
>>> > Why would you like to do that?
>>> > OVN is useful for big deployments, that
>>> > have many isolated networks. It is not universally recommended, as it
>>> > uses more CPU.
>>> >
>>> No particular reason, thought that will be future in oVIRT networking,
>>> also i work in relatively big ISP with many PoP and DCs in many
>>> cities. And I'm interested to try it some time.
>>>
>>> >> Best regards,
>>> >> Misak Khachatryan
Re: [ovirt-users] OVS error logs after upgrade to 4.2
On Wed, Dec 27, 2017 at 12:42 PM, Dan Kenigsberg wrote: > On Wed, Dec 27, 2017 at 8:49 AM, Misak Khachatryan wrote: >> Hi, >> >> It's not on log file, it's from automatic email sent by cron daemon. >> This one from logrotate. > > Would you file a bug about this daily logrotate spam? > Sure, will do. >> >> I'd like to migrate my network to OVS, but as i can't find any guide >> for that, it's a bit scary. > > Why would you like to do that? OVN is useful for big deployments, that > have many isolated networks. It is not universally recommended, as it > uses more CPU. > No particular reason, thought that will be future in oVIRT networking, also i work in relatively big ISP with many PoP and DCs in many cities. And I'm interested to try it some time. >> >> Best regards, >> Misak Khachatryan >> >> >> On Tue, Dec 26, 2017 at 3:29 PM, Dan Kenigsberg wrote: >>> On Tue, Dec 26, 2017 at 8:35 AM, Misak Khachatryan wrote: >>>> Hi, >>>> >>>> After upgrade to 4.2 I started getting this error from engine: >>>> >>>> /etc/cron.daily/logrotate: >>>> >>>> 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to >>>> /var/run/openvswitch/ovnnb_db.19883.ctl >>>> ovs-appctl: cannot connect to >>>> "/var/run/openvswitch/ovnnb_db.19883.ctl" (No such file or directory) >>>> 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to >>>> /var/run/openvswitch/ovnsb_db.19891.ctl >>>> ovs-appctl: cannot connect to >>>> "/var/run/openvswitch/ovnsb_db.19891.ctl" (No such file or directory) >>>> >>>> >>>> Seems harmless as i don't use OVS, but how to fix it? >>> >>> By default, ovirt-4.2 installs and configure OVN (which uses OVS). You >>> can turn it off on Engine host by running >>> systemctl stop ovirt-provider-ovn ovn-northd openvswitch > > did you try that? > No, but is correct way to disable it completely? >>> >>> but I'd appreciate your help to understand in which log file do you >>> see these warnings? > >>> Have you already disabled openvswitch? > > have you ^^ ? 
No, what is a correct way to do it? ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] OVS error logs after upgrade to 4.2
Hi, It's not on log file, it's from automatic email sent by cron daemon. This one from logrotate. I'd like to migrate my network to OVS, but as i can't find any guide for that, it's a bit scary. Best regards, Misak Khachatryan On Tue, Dec 26, 2017 at 3:29 PM, Dan Kenigsberg wrote: > On Tue, Dec 26, 2017 at 8:35 AM, Misak Khachatryan wrote: >> Hi, >> >> After upgrade to 4.2 I started getting this error from engine: >> >> /etc/cron.daily/logrotate: >> >> 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to >> /var/run/openvswitch/ovnnb_db.19883.ctl >> ovs-appctl: cannot connect to >> "/var/run/openvswitch/ovnnb_db.19883.ctl" (No such file or directory) >> 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to >> /var/run/openvswitch/ovnsb_db.19891.ctl >> ovs-appctl: cannot connect to >> "/var/run/openvswitch/ovnsb_db.19891.ctl" (No such file or directory) >> >> >> Seems harmless as i don't use OVS, but how to fix it? > > By default, ovirt-4.2 installs and configure OVN (which uses OVS). You > can turn it off on Engine host by running > systemctl stop ovirt-provider-ovn ovn-northd openvswitch > > but I'd appreciate your help to understand in which log file do you > see these warnings? Have you already disabled openvswitch? ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] OVS error logs after upgrade to 4.2
Hi, After upgrade to 4.2 I started getting this error from engine: /etc/cron.daily/logrotate: 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to /var/run/openvswitch/ovnnb_db.19883.ctl ovs-appctl: cannot connect to "/var/run/openvswitch/ovnnb_db.19883.ctl" (No such file or directory) 2017-12-25T23:12:02Z|1|unixctl|WARN|failed to connect to /var/run/openvswitch/ovnsb_db.19891.ctl ovs-appctl: cannot connect to "/var/run/openvswitch/ovnsb_db.19891.ctl" (No such file or directory) Seems harmless as i don't use OVS, but how to fix it? Best regards, Misak Khachatryan ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Minor issue upgrading to 4.2
Hi,

I'm not completely sure, but I think I had firewalld before. Anyway, I changed the firewall type to firewalld in the cluster and reinstalled all my hosts from the engine, as I don't have Host Console either.

Best regards,
Misak Khachatryan

On Mon, Dec 25, 2017 at 6:16 PM, Chris Adams wrote:
> Once upon a time, Misak Khachatryan said:
>> It seems me too in the same situation, my cluster shows firewall type
>> as iptables, and my firewalld status is on hosts:
>
> Do you know if you had firewalld installed before upgrading? You should
> be able to tell by checking your /var/log/yum.log.
>
> I suspect that the issue is that oVirt pulls in firewalld, and the
> firewalld RPM sets itself to run by default, plus it happens to be
> started after iptables (and so blows away iptables rules).
>
> See if this fixes it for you:
>
> # systemctl stop firewalld.service
> # systemctl disable firewalld.service
> # systemctl restart iptables.service
>
> --
> Chris Adams
> ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
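A quick way to verify which firewall backend actually ended up in control on a host (plain systemd and iptables commands, nothing oVirt-specific):

```shell
systemctl is-enabled firewalld iptables   # shows which one starts on boot
systemctl is-active firewalld iptables    # exactly one should be active
iptables -S | head -n 20                  # the rules currently loaded;
                                          # a near-empty set after an upgrade
                                          # suggests firewalld wiped them
```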
Re: [ovirt-users] Minor issue upgrading to 4.2
Hi,

It seems I'm in the same situation: my cluster shows the firewall type as iptables, and this is the firewalld status on the hosts:

systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

The problem I hit is that one of my VMs got paused a second time due to a storage error. It's a 3-host hyperconverged cluster with glusterfs, oVirt 4.2.

Best regards,
Misak Khachatryan

On Sun, Dec 24, 2017 at 3:26 PM, Yaniv Kaul wrote:
> Sounds like https://bugzilla.redhat.com/show_bug.cgi?id=1511013 - can you
> confirm?
> Y.
>
> On Sat, Dec 23, 2017 at 1:56 AM, Chris Adams wrote:
>> I upgraded a CentOS 7 oVirt 4.1.7 (initially installed as 3.5 if it
>> matters) test oVirt cluster to 4.2.0, and ran into one minor issue. The
>> update installed firewalld on the host, which was set to start on boot.
>> This replaced the iptables rules with a blank firewalld setup that only
>> allowed SSH, which kept the host from working.
>>
>> Stopping and disabling firewalld, then reloading iptables, got the host
>> back working.
>>
>> In a quick search, I didn't see anything noting that firewalld was now
>> required, and it didn't seem to be configured correctly if oVirt was
>> trying to use it.
>>
>> --
>> Chris Adams
>> ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
>
> ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] [ovirt-announce] [Call for feedback] share also your successful upgrades experience
Did the upgrade to 4.2 yesterday. Everything went smoothly except for a few glitches. I have a 4-host install - 3 gluster hosts and one node with local storage. One of the gluster servers failed to start its brick with a "Peer Rejected" status, solved very fast by googling. On the node I hit an old bug - I haven't been able to upgrade it since version 4.1.5 - and it finally seems I just need to reinstall it from scratch.

Best regards,
Misak Khachatryan

On Thu, Dec 21, 2017 at 5:35 PM, Sandro Bonazzola wrote:
> Hi,
> now that oVirt 4.2.0 has been released, we're starting to see some reports
> about issues that for now are related to not so common deployments.
> We'd also like to get some feedback from those who upgraded to this
> amazing release without any issue and add these positive feedback under our
> developers (digital) Christmas tree as a gift for the effort put in this
> release.
> Looking forward to your positive reports!
>
> Not having positive feedback? Let us know too!
> We are putting an effort in the next weeks to promptly assist whoever hit
> troubles during or after the upgrade. Let us know in this users@ovirt.org
> mailing list (preferred) or on IRC using irc.oftc.net server and #ovirt
> channel.
>
> We are also closely monitoring bugzilla.redhat.com for new bugs on oVirt
> project, so you can report issues there as well.
>
> Thanks,
> --
> SANDRO BONAZZOLA
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
> ___ Announce mailing list annou...@ovirt.org http://lists.ovirt.org/mailman/listinfo/announce
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
Hi,

I don't mean an on-the-fly upgrade; I was just confused because I understood the procedure as requiring me to stop all VMs at once. If it's possible to do it per VM, that's perfectly OK for me. Thank you, Nir, for the clarification.

Best regards,
Misak Khachatryan

On Thu, Nov 16, 2017 at 2:05 AM, Nir Soffer wrote:
> On Wed, Nov 15, 2017 at 8:58 AM Misak Khachatryan wrote:
>> Hi,
>>
>> will it be a more clean approach? I can't tolerate full stop of all
>> VMs just to enable it, seems too disastrous for real production
>> environment. Will it be some migration mechanisms in future?
>
> You can enable it per vm, you don't need to stop all of them. But I think
> we do not support upgrading a machine with running vms, so upgrading
> requires:
>
> 1. migrating vms from the host you want to upgrade
> 2. upgrading the host
> 3. stopping the vm you want to upgrade to libgfapi
> 4. starting this vm on the upgraded host
>
> Theoretically qemu could switch from one disk to another, but I'm not
> sure this is supported when switching to the same disk using different
> transports. I know it is not supported now to mirror a network drive to
> another network drive.
>
> The old disk is using:
>
> <disk type="file" device="disk">
>   <source file="/rhev/data-center/mnt/server:_volname/sd_id/images/img_id/vol_id"/>
>   <driver name="qemu" type="raw"/>
> </disk>
>
> The new disk should use:
>
> <disk type="network" device="disk">
>   <source name="volname/sd_id/images/img_id/vol_id" protocol="gluster">
>     <host name="server"/>
>   </source>
>   <driver name="qemu" type="raw"/>
> </disk>
>
> Adding qemu-block mailing list.
>
> Nir
>
>> Best regards,
>> Misak Khachatryan
>>
>> On Fri, Nov 10, 2017 at 12:35 AM, Darrell Budic wrote:
>> > You do need to stop the VMs and restart them, not just issue a reboot. I
>> > haven't tried under 4.2 yet, but it works in 4.1.6 that way for me.
>> >
>> > From: Alessandro De Salvo
>> > Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
>> > Date: November 9, 2017 at 2:35:01 AM CST
>> > To: users@ovirt.org
>> >
>> > Hi again,
>> >
>> > OK, tried to stop all the vms, except the engine, set engine-config -s
>> > LibgfApiSupported=true (for 4.2 only) and restarted the engine.
>> >
>> > When I tried restarting the VMs they are still not using gfapi, so it
>> > does not seem to help.
>> >
>> > Cheers,
>> >
>> > Alessandro
>> >
>> > On 09/11/17 09:12, Alessandro De Salvo wrote:
>> >
>> > Hi,
>> > where should I enable gfapi via the UI?
>> > The only command I tried was engine-config -s LibgfApiSupported=true but
>> > the result is what is shown in my output below, so it's set to true for
>> > v4.2. Is it enough?
>> > I'll try restarting the engine. Is it really needed to stop all the VMs
>> > and restart them all? Of course this is a test setup and I can do it, but
>> > for production clusters in the future it may be a problem.
>> > Thanks,
>> >
>> >    Alessandro
>> >
>> > On 9 Nov 2017, at 07:23, Kasturi Narra wrote:
>> >
>> > Hi,
>> >
>> > The procedure to enable gfapi is below.
>> >
>> > 1) stop all the vms running
>> > 2) Enable gfapi via UI or using engine-config command
>> > 3) Restart ovirt-engine service
>> > 4) start the vms.
>> >
>> > Hope you have not missed any !!
>> >
>> > Thanks
>> > kasturi
>> >
>> > On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo wrote:
>> >> Hi,
>> >>
>> >> I'm using the latest 4.2 beta release and want to try the gfapi access,
>> >> but I'm currently failing to use it.
>> >>
>> >> My test setup has an external glusterfs cluster v3.12, not managed by
>> >> oVirt.
>> >>
>> >> The compatibility flag is correctly showing gfapi should be enabled
>> >> with 4.2:
>> >>
>> >> # engine-config -g LibgfApiSupported
>> >> LibgfApiSupported: false version: 3.6
Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
Hi, will it be a more clean approach? I can't tolerate full stop of all VMs just to enable it, seems too disastrous for real production environment. Will it be some migration mechanisms in future? Best regards, Misak Khachatryan On Fri, Nov 10, 2017 at 12:35 AM, Darrell Budic wrote: > You do need to stop the VMs and restart them, not just issue a reboot. I > havn’t tried under 4.2 yet, but it works in 4.1.6 that way for me. > > > From: Alessandro De Salvo > Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2 > Date: November 9, 2017 at 2:35:01 AM CST > To: users@ovirt.org > > > Hi again, > > OK, tried to stop all the vms, except the engine, set engine-config -s > LibgfApiSupported=true (for 4.2 only) and restarted the engine. > > When I tried restarting the VMs they are still not using gfapi, so it does > not seem to help. > > Cheers, > > > Alessandro > > > > Il 09/11/17 09:12, Alessandro De Salvo ha scritto: > > Hi, > where should I enable gfapi via the UI? > The only command I tried was engine-config -s LibgfApiSupported=true but the > result is what is shown in my output below, so it’s set to true for v4.2. Is > it enough? > I’ll try restarting the engine. Is it really needed to stop all the VMs and > restart them all? Of course this is a test setup and I can do it, but for > production clusters in the future it may be a problem. > Thanks, > >Alessandro > > Il giorno 09 nov 2017, alle ore 07:23, Kasturi Narra ha > scritto: > > Hi , > > The procedure to enable gfapi is below. > > 1) stop all the vms running > 2) Enable gfapi via UI or using engine-config command > 3) Restart ovirt-engine service > 4) start the vms. > > Hope you have not missed any !! > > Thanks > kasturi > > On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo > wrote: >> >> Hi, >> >> I'm using the latest 4.2 beta release and want to try the gfapi access, >> but I'm currently failing to use it. 
>> >> My test setup has an external glusterfs cluster v3.12, not managed by >> oVirt. >> >> The compatibility flag is correctly showing gfapi should be enabled with >> 4.2: >> >> # engine-config -g LibgfApiSupported >> LibgfApiSupported: false version: 3.6 >> LibgfApiSupported: false version: 4.0 >> LibgfApiSupported: false version: 4.1 >> LibgfApiSupported: true version: 4.2 >> >> The data center and cluster have the 4.2 compatibility flags as well. >> >> However, when starting a VM with a disk on gluster I can still see the >> disk is mounted via fuse. >> >> Any clue of what I'm still missing? >> >> Thanks, >> >> >>Alessandro >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
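For readers finding this thread later, the sequence Kasturi describes can be sketched as the commands below, run on the engine host. This is a sketch rather than a transcript from the thread; the --cver flag scopes the setting to one cluster compatibility level, and 4.2 is assumed here only because that is the version under discussion.

```shell
# Check the current per-version values (as Alessandro did):
engine-config -g LibgfApiSupported

# Enable it for cluster level 4.2 only (assumed level; adjust --cver):
engine-config -s LibgfApiSupported=true --cver=4.2

# Restart the engine so the change takes effect:
systemctl restart ovirt-engine

# Finally, shut down and start each VM (a guest reboot is not enough,
# per Darrell's note) so the new disk access path is picked up.
```

Note that, per the rest of the thread, VMs keep using FUSE until they go through a full power cycle, not a guest-initiated reboot.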
Re: [ovirt-users] Different link speeds in LACP LAG?
Hi, Junos OS supports LAGs over links with different speeds, so if you have MX-series routers in between you may be able to accomplish that. But it's always better to be on the safe side; mixing link speeds in a LAG is very risky to use in production, IMHO. Best regards, Misak Khachatryan On Thu, Sep 14, 2017 at 2:41 PM, Yaniv Kaul wrote: > > > On Thu, Sep 14, 2017 at 1:21 AM, Chris Adams wrote: >> >> I have a small oVirt setup for one customer, with two servers each >> connected to a two-switch stack with 1G links. Now the customer would >> like to upgrade the server links to 10G. My question is this: can I add >> a 10G NIC and do this with minimal "fuss" by just adding the 10G links >> to the same LAG, then removing the 1G links? I would have the host in >> maintenance mode no matter what. > > > I highly doubt that's feasible. LAG members usually need to be the same speed... > Y. > >> >> >> I haven't checked the switch to see if it'll support that yet, figured >> I'd start on the oVirt side. >> >> -- >> Chris Adams >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
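If anyone attempts the migration Chris describes, a rough way to verify the bond state on the oVirt host before and after each step is sketched below. The bond and interface names are hypothetical examples, not taken from the thread.

```shell
# Inspect current LACP members and their state (bond0 is an example name):
cat /proc/net/bonding/bond0

# Confirm the negotiated speed of each member NIC (example NIC names):
ethtool eno1 | grep -i speed
ethtool ens1f0 | grep -i speed

# After adding the 10G NICs to the LAG (via the oVirt network setup UI)
# and before removing the 1G links, check that the new members are active:
grep -A3 'Slave Interface' /proc/net/bonding/bond0
```

Whether the switch stack accepts mixed-speed members at all is the deciding factor, as noted above.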
Re: [ovirt-users] oVirt engine with different VM id
OK, I did a right-click on the storage domain and destroyed it. It got imported again, and the Engine VM too. Now it seems OK. Thank you very much. Best regards, Misak Khachatryan On Thu, Aug 31, 2017 at 5:11 PM, Misak Khachatryan wrote: > Hi, > > it's grayed out on the web interface, is there any other way? Trying to > detach gives an error > > VDSM command DetachStorageDomainVDS failed: Storage domain does not > exist: (u'c44343af-cc4a-4bb7-a548-0c6f609d60d5',) > Failed to detach Storage Domain hosted_storage from Data Center > Default. (User: admin@internal-authz) > > > Best regards, > Misak Khachatryan > > > On Thu, Aug 31, 2017 at 4:22 PM, Martin Sivak wrote: >> Hi, >> >> you can remove the hosted engine storage domain from the engine as >> well. It should also be re-imported. >> >> We had cases where destroying the domain ended up with a locked SD, >> but removing the SD and re-importing is the proper way here. >> >> Best regards >> >> PS: Re-adding the mailing list, we should really set a proper Reply-To >> header.. >> >> Martin Sivak >> >> On Thu, Aug 31, 2017 at 2:07 PM, Misak Khachatryan wrote: >>> Hi, >>> >>> I would love to, but: >>> >>> Error while executing action: >>> >>> HostedEngine: >>> >>> Cannot remove VM. The relevant Storage Domain's status is Inactive. >>> >>> it seems i should somehow fix storage domain first ... >>> >>> engine=# update storage_domain_static set id = >>> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id = >>> 'c44343af-cc4a-4bb7-a548-0c6f609d60d5'; >>> ERROR: update or delete on table "storage_domain_static" violates >>> foreign key constraint "disk_profiles_storage_domain_id_fkey" on table >>> "disk_profiles" >>> DETAIL: Key (id)=(c44343af-cc4a-4bb7-a548-0c6f609d60d5) is still >>> referenced from table "disk_profiles".
>>> >>> engine=# update disk_profiles set storage_domain_id = >>> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id = >>> 'a6d71571-a13a-415b-9f97-635f17cbe67d'; >>> ERROR: insert or update on table "disk_profiles" violates foreign key >>> constraint "disk_profiles_storage_domain_id_fkey" >>> DETAIL: Key (storage_domain_id)=(2e2820f3-8c3d-487d-9a56-1b8cd278ec6c) >>> is not present in table "storage_domain_static". >>> >>> engine=# select * from storage_domain_static; >>> id | storage >>> | storage_name | storage_domain_type | storage_type | >>> storage_domain_format_type | _create_date | >>> _update_date | recoverable | last_time_used_as_maste >>> r | storage_description | storage_comment | wipe_after_delete | >>> warning_low_space_indicator | critical_space_action_blocker | >>> first_metadata_device | vg_metadata_device | discard_after_delete >>> --+--++-+--++---+---+-+ >>> --+-+-+---+-+---+---++-- >>> 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | >>> ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository | >>>4 |8 | 0 | 2016-11-02 >>> 21:27:22.118586+04 | | t | >>> | | | f | >>> | | >>> || f >>> 51c903f6-df83-4510-ac69-c164742ca6e7 | >>> 34b72ce0-6ad7-4180-a8a1-2acfd45824d7 | iso| >>>2 |7 | 0 | 2016-11-02 >>> 23:26:21.296635+04 | | t | >>> 0 | | | f | >>> 10 | 5 | >>> || f >>> ece1f05c-97c9-4482-a1a5-914397cddd35 | >>> dd38f31f-7bdc-463c-9ae4-fcd4dc8c99fd | export | >>>3 |1 | 0 | 2
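For reference, the two UPDATEs above fail individually because each one, taken alone, leaves the foreign key pointing at a missing row. Martin's advice (remove the storage domain and let it be re-imported) is the supported path; purely as an illustration of the SQL-level workaround, the constraint could be dropped and re-created around the pair of updates in one transaction. The column list in the re-created constraint is assumed from the error messages, not read from the actual engine schema, so treat this as an untested sketch only, and only with a fresh DB backup.

```shell
# Hypothetical sketch; run as the postgres user on the engine host.
psql engine <<'SQL'
BEGIN;
-- temporarily drop the constraint that blocks both updates
ALTER TABLE disk_profiles
    DROP CONSTRAINT disk_profiles_storage_domain_id_fkey;
-- swap the storage domain id in both tables
UPDATE storage_domain_static
   SET id = '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c'
 WHERE id = 'c44343af-cc4a-4bb7-a548-0c6f609d60d5';
UPDATE disk_profiles
   SET storage_domain_id = '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c'
 WHERE storage_domain_id = 'c44343af-cc4a-4bb7-a548-0c6f609d60d5';
-- re-create the constraint (definition assumed from the error text)
ALTER TABLE disk_profiles
    ADD CONSTRAINT disk_profiles_storage_domain_id_fkey
    FOREIGN KEY (storage_domain_id) REFERENCES storage_domain_static (id);
COMMIT;
SQL
```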
Re: [ovirt-users] oVirt engine with different VM id
Hi, it's grayed out in the web interface; is there any other way? Trying to detach gives an error: VDSM command DetachStorageDomainVDS failed: Storage domain does not exist: (u'c44343af-cc4a-4bb7-a548-0c6f609d60d5',) Failed to detach Storage Domain hosted_storage from Data Center Default. (User: admin@internal-authz) Best regards, Misak Khachatryan On Thu, Aug 31, 2017 at 4:22 PM, Martin Sivak wrote: > Hi, > > you can remove the hosted engine storage domain from the engine as > well. It should also be re-imported. > > We had cases where destroying the domain ended up with a locked SD, > but removing the SD and re-importing is the proper way here. > > Best regards > > PS: Re-adding the mailing list, we should really set a proper Reply-To > header.. > > Martin Sivak > > On Thu, Aug 31, 2017 at 2:07 PM, Misak Khachatryan wrote: >> Hi, >> >> I would love to, but: >> >> Error while executing action: >> >> HostedEngine: >> >> Cannot remove VM. The relevant Storage Domain's status is Inactive. >> >> it seems i should somehow fix storage domain first ... >> >> engine=# update storage_domain_static set id = >> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id = >> 'c44343af-cc4a-4bb7-a548-0c6f609d60d5'; >> ERROR: update or delete on table "storage_domain_static" violates >> foreign key constraint "disk_profiles_storage_domain_id_fkey" on table >> "disk_profiles" >> DETAIL: Key (id)=(c44343af-cc4a-4bb7-a548-0c6f609d60d5) is still >> referenced from table "disk_profiles". >> >> engine=# update disk_profiles set storage_domain_id = >> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id = >> 'a6d71571-a13a-415b-9f97-635f17cbe67d'; >> ERROR: insert or update on table "disk_profiles" violates foreign key >> constraint "disk_profiles_storage_domain_id_fkey" >> DETAIL: Key (storage_domain_id)=(2e2820f3-8c3d-487d-9a56-1b8cd278ec6c) >> is not present in table "storage_domain_static".
>> >> engine=# select * from storage_domain_static; >> id | storage >> | storage_name | storage_domain_type | storage_type | >> storage_domain_format_type | _create_date | >> _update_date | recoverable | last_time_used_as_maste >> r | storage_description | storage_comment | wipe_after_delete | >> warning_low_space_indicator | critical_space_action_blocker | >> first_metadata_device | vg_metadata_device | discard_after_delete >> --+--++-+--++---+---+-+ >> --+-+-+---+-+---+---++-- >> 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | >> ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository | >>4 |8 | 0 | 2016-11-02 >> 21:27:22.118586+04 | | t | >> | | | f | >> | | >> || f >> 51c903f6-df83-4510-ac69-c164742ca6e7 | >> 34b72ce0-6ad7-4180-a8a1-2acfd45824d7 | iso| >>2 |7 | 0 | 2016-11-02 >> 23:26:21.296635+04 | | t | >> 0 | | | f | >> 10 | 5 | >> || f >> ece1f05c-97c9-4482-a1a5-914397cddd35 | >> dd38f31f-7bdc-463c-9ae4-fcd4dc8c99fd | export | >>3 |1 | 0 | 2016-12-14 >> 11:28:15.736746+04 | 2016-12-14 11:33:12.872562+04 | t | >> 0 | Export | | f | >> 10 | 5 | >> || f >> 07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 | >> d1e9e3c8-aaf3-43de-ae80-101e5bd2574f | data | >>0 |7 | 4 | 2016-11-02 >> 23:24:43.402629+04 | 2017-02-22 17:20:42.721092+04 | t | >> 0 | | | f
[ovirt-users] oVirt engine with different VM id
kActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'spec Params': {}, 'deviceId': 'd348a068-063b-4a40-9119-a3d34f6c7db4', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'al ias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId': 'e738b50b-c200-4429-8489-4519325339c7', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolI D': '----', 'volumeInfo': {'path': 'engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d', 'protocol': 'gluster', 'hosts': [{'port': '0', 'transport': 'tcp', 'name': ' virt1'}, {'port': '0', 'transport': 'tcp', 'name': 'virt2'}, {'port': '0', 'transport': 'tcp', 'name': 'virt3'}]}, 'index': '0', 'iface': 'virtio', 'apparentsize': '62277025792', 'specParams': {}, 'imageID': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'readonly': 'False', 's hared': 'exclusive', 'truesize': '3255476224', 'type': 'disk', 'domainID': '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'reqsize': '0', 'format': 'raw', 'deviceId': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'path': '/var/run/vdsm/storage/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d', 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder': '1', 'v olumeID': '60aa51b7-32eb-41a9-940d-9489b0375a3d', 'alias': 'virtio-disk0', 'volumeChain': [{'domainID': '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'leaseOffset': 0, 'volumeID': '60aa51b7-32eb-41a9-940d-9489b0375a3d', 'leasePath': '/rhev/data-center/mnt/glusterSD/virt1:_engi 
ne/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d.lease', 'imageID': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'path': '/rhev/data-center/mnt/glusterSD/virt1:_engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d'}]}]
guestDiskMapping = {'5deeac2d-18d7-4622-9': {'name': '/dev/vda'}, 'QEMU_DVD-ROM_QM3': {'name': '/dev/sr0'}}
vmType = kvm
display = vnc
memSize = 16384
cpuType = Westmere
spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp = 4
vmName = HostedEngine
clientIp =
maxVCpus = 16
[root@virt3 ~]#
[root@virt3 ~]# hosted-engine --vm-status

!! Cluster is in GLOBAL MAINTENANCE mode !!

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : virt1.management.gnc.am
Host ID: 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped: False
Local maintenance : False
crc32 : ef49e5b4
local_conf_timestamp : 7515
Host timestamp : 7512
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=7512 (Thu Aug 31 15:14:59 2017)
host-id=1
score=3400
vm_conf_refresh_time=7515 (Thu Aug 31 15:15:01 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False

--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date : True
Hostname : virt3
Host ID: 3
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Score : 3400
stopped: False
Local maintenance : False
crc32 : 4a85111c
local_conf_timestamp : 102896
Host timestamp : 102893
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=102893 (Thu Aug 31 15:14:46 2017)
host-id=3
score=3400
vm_conf_refresh_time=102896 (Thu Aug 31 15:14:49 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False

!! Cluster is in GLOBAL MAINTENANCE mode !!

Also, my storage domain for the hosted engine is inactive and I can't activate it. It gives this error in the web console:

VDSM command GetImagesListVDS failed: Storage domain does not exist: (u'c44343af-cc4a-4bb7-a548-0c6f609d60d5',)

It seems I should fiddle with the database a bit more, but that's a scary thing for me. Any help? Best regards, Misak Khachatryan ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] oVirt web interface events console sorting
Hello, my events have started to appear in reverse order in the lower part of the web interface. Is anybody else having the same issue? Best regards, Misak Khachatryan ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Python errors with ovirt 4.1.4
Hi, I see the same error on my node install. Best regards, Misak Khachatryan, Network Administration and Monitoring Department Manager, GNC-ALFA CJSC 1 Khaghaghutyan str., Abovyan, 2201 Armenia Tel: +374 60 46 99 70 (9670), Mob.: +374 55 19 98 40 URL: www.rtarmenia.am On Wed, Aug 2, 2017 at 1:48 PM, david caughey wrote: > Hi Folks, > > I'm testing out the new version with the 4.1.4 ovirt iso and am getting > errors directly after install: > > Last login: Wed Aug 2 10:17:56 2017 > Traceback (most recent call last): > File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main > "__main__", fname, loader, pkg_name) > File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code > exec code in run_globals > File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in > > CliApplication() > File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in > CliApplication > return cmdmap.command(args) > File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in > command > return self.commands[command](**kwargs) > File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 102, in > motd > machine_readable=True).output, self.machine).write() > File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 51, in > __init__ > self._update_info(status) > File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 78, in > _update_info > if "ok" not in status.lower(): > AttributeError: Status instance has no attribute 'lower' > Admin Console: https://192.168.122.61:9090/ > > The admin console seems to work fine. > > Are these issues serious or can they be ignored? > > BR/David > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] node ng upgrade failed
Hi, I've had the same problem with at least two or three releases. If needed, I can provide some details, but my setup is also pretty much standard, without any custom partitioning. Best regards, Misak Khachatryan, Network Administration and Monitoring Department Manager, GNC-ALFA CJSC 1 Khaghaghutyan str., Abovyan, 2201 Armenia Tel: +374 60 46 99 70 (9670), Mob.: +374 93 19 98 40 URL: www.rtarmenia.am
On Mon, Jul 10, 2017 at 4:12 PM, Ryan Barry <rba...@redhat.com> wrote:
Ok, so Python may be confused here. As one final question, what about: lvm lvs ?
On Mon, Jul 10, 2017 at 8:10 AM, Grundmann, Christian <christian.grundm...@fabasoft.com> wrote:
cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Dec 14 15:19:56 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/onn_cs-kvm-001/ovirt-node-ng-4.1.2-0.20170523.0+1 / xfs defaults,discard 0 0
UUID=7899e80f-5066-4eca-a7d6-afa52a512039 /boot ext4 defaults 1 2
UUID=42B1-C1A3 /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/mapper/onn_cs--kvm--001-var /var xfs defaults,discard 0 0
/dev/mapper/onn_cs--kvm--001-swap swap swap defaults 0 0
mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=49382320k,nr_inodes=12345580,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup
(rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/onn_cs--kvm--001-ovirt--node--ng--4.1.2--0.20170523.0+1 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=34,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/mapper/3600605b003af3f301666cedd1935887c2 on /boot type ext4 (rw,relatime,seclabel,data=ordered)
/dev/mapper/onn_cs--kvm--001-var on /var type xfs
(rw,relatime,seclabel,attr2,discard,inode64,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/3600605b003af3f301666cedd1935887c1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=9881988k,mode=700)
From: Ryan Barry <rba...@redhat.com>
Sent: Monday, July 10, 2017 14:09
To: Grundmann, Christian <christian.grundm...@fabasoft.com>
Cc: Yuval Turgeman <yuv...@redhat.com>; users@ovirt.org
Subject: Re: [ovirt-users] node ng upgrade failed
What does `mount` look like? I'm wondering whether /var/log is already a partition/in fstab or whether os.path.ismount is confused here.
On Mon, Jul 10, 2017 at 7:48 AM, Grundmann, Christian <christian.grundm...@fabasoft.com> wrot
Re: [ovirt-users] Nested KVM for oVirt 4.1.2
On Mon, May 29, 2017 at 12:33 PM, Sandro Bonazzola <sbona...@redhat.com> wrote: On Mon, May 29, 2017 at 10:21 AM, <ov...@fateknollogee.com> wrote: I assume people are using oVirt in production? Sure, I was just wondering why you were running in nested virtualization :-) Since your use case is a "playground" environment, I can suggest you have a look at Lago http://lago.readthedocs.io/en/stable/ and at the Lago demo at https://github.com/lago-project/lago-demo to help you prepare an isolated test environment for your learning. Hello, I work for an ISP, and we use EVE-NG for our virtual lab, which is very useful for testing changes we plan to make in the network. EVE-NG recommends having nested virtualization enabled, as it is itself also a kind of hypervisor. Best regards, Misak Khachatryan, Network Administration and Monitoring Department Manager, GNC-ALFA CJSC 1 Khaghaghutyan str., Abovyan, 2201 Armenia Tel: +374 60 46 99 70 (9670), Mob.: +374 93 19 98 40 URL: www.rtarmenia.am On 2017-05-29 04:13, Sandro Bonazzola wrote: On Mon, May 29, 2017 at 12:12 AM, <ov...@fateknollogee.com> wrote: http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/ [1] I have one CentOS7 host (physical) & 3x oVirt nodes 4.1.2 (these are vm's). Hi, can you please share the use case for this setup? I have installed vdsm-hook-nestedvm on the host. Should I install vdsm-hook-macspoof on the 3x node vm's? ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users [2] -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA [3] [4] TRIED. TESTED. TRUSTED.
[5]
Links:
[1] http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/
[2] http://lists.ovirt.org/mailman/listinfo/users
[3] https://www.redhat.com/
[4] https://red.ht/sig
[5] https://redhat.com/trusted
-- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Upgrade hypervisor to 4.1.1.1
Is it a Node setup? Today I tried to upgrade my one-node cluster; after that, VMs failed to start. It turned out that SELinux was preventing virtlogd from starting.

ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
semodule -i my-virtlogd.pp
/sbin/restorecon -v /etc/libvirt/virtlogd.conf

fixed things for me; YMMV. Best regards, Misak Khachatryan On Mon, Apr 10, 2017 at 2:03 PM, Sandro Bonazzola wrote: > Can you please provide a full sos report from that host? > > On Sun, Apr 9, 2017 at 8:38 PM, Sandro Bonazzola > wrote: > >> Adding node team. >> >> On 09/Apr/2017 15:43, "eric stam" wrote: >> >> Yesterday I executed an upgrade on my hypervisor to version 4.1.1.1 >> After the upgrade, it is impossible to start a virtual machine on it. >> The messages I found: Failed to connect socket to >> '/var/run/libvirt/virtlogd-sock': Connection refused >> >> [root@vm-1 log]# hosted-engine --vm-status | grep -i engine >> >> Engine status : {"reason": "bad vm status", >> "health": "bad", "vm": "down", "detail": "down"} >> >> state=EngineUnexpectedlyDown >> >> The Red Hat version: CentOS Linux release 7.3.1611 (Core) >> >> Is this a known problem? >> >> Regards, Eric >> >> >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> > > > -- > > SANDRO BONAZZOLA > > ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D > > Red Hat EMEA <https://www.redhat.com/> > <https://red.ht/sig> > TRIED. TESTED. TRUSTED. <https://redhat.com/trusted> > > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] oVirt node local storage behavior
Bump, please look. Best regards, Misak Khachatryan On Fri, Mar 24, 2017 at 10:47 AM, Misak Khachatryan wrote: > Hello, > > I have one server where i was installe oVirt node 4.1 and used in my > infrastructure. It's used as single server cluster with local storage > configured. > > I created separate volume for it and mounted as separate directory: > > > [root@virt4 ~]# lvs > LV VG Attr LSize Pool > Origin Data% Meta% Move Log Cpy%Sync > Convert > local_storage onn Vwi-aotz-- 350.00g pool00 > 7.27 > ovirt-node-ng-4.1.0-0.20170201.0 onn Vri---tz-k 425.37g pool00 > ovirt-node-ng-4.1.0-0.20170201.0+1 onn Vwi---tz-- 425.37g pool00 > ovirt-node-ng-4.1.0-0.20170201.0 > ovirt-node-ng-4.1.0-0.20170316.0 onn Vwi---tz-k 425.37g pool00 root > ovirt-node-ng-4.1.0-0.20170316.0+1 onn Vwi---tz-- 425.37g pool00 > ovirt-node-ng-4.1.0-0.20170316.0 > ovirt-node-ng-4.1.1-0.20170322.0 onn Vri---tz-k 425.37g pool00 > ovirt-node-ng-4.1.1-0.20170322.0+1 onn Vwi-aotz-- 425.37g pool00 > ovirt-node-ng-4.1.1-0.20170322.0 2.17 > pool00 onn twi-aotz-- 440.37g > 18.44 9.91 > root onn Vwi---tz-- 425.37g pool00 > swap onn -wi-ao 7.88g > varonn Vwi-aotz-- 15.00g pool00 > 16.28 > > mount: > > /dev/mapper/onn-local_storage on /local_storage type xfs > (rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) > > [root@virt4 ~]# ll / > drwxr-xr-x. 3 vdsm kvm 76 Mar 23 09:02 local_storage > > All operations was done through cockpit UI > > With 4.1.1 upgrade i decided to upgrade it. After server reboot i > discovered that my local storage folder was cleared, so i need to > recreate all VMs and disks from scratch. > > Is this a bug or I did something wrong? > > > Best regards, > Misak Khachatryan ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] oVirt node local storage behavior
Hello, I have one server where I installed oVirt Node 4.1 and use it in my infrastructure. It's used as a single-server cluster with local storage configured. I created a separate volume for it and mounted it as a separate directory:

[root@virt4 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
local_storage onn Vwi-aotz-- 350.00g pool00 7.27
ovirt-node-ng-4.1.0-0.20170201.0 onn Vri---tz-k 425.37g pool00
ovirt-node-ng-4.1.0-0.20170201.0+1 onn Vwi---tz-- 425.37g pool00 ovirt-node-ng-4.1.0-0.20170201.0
ovirt-node-ng-4.1.0-0.20170316.0 onn Vwi---tz-k 425.37g pool00 root
ovirt-node-ng-4.1.0-0.20170316.0+1 onn Vwi---tz-- 425.37g pool00 ovirt-node-ng-4.1.0-0.20170316.0
ovirt-node-ng-4.1.1-0.20170322.0 onn Vri---tz-k 425.37g pool00
ovirt-node-ng-4.1.1-0.20170322.0+1 onn Vwi-aotz-- 425.37g pool00 ovirt-node-ng-4.1.1-0.20170322.0 2.17
pool00 onn twi-aotz-- 440.37g 18.44 9.91
root onn Vwi---tz-- 425.37g pool00
swap onn -wi-ao 7.88g
var onn Vwi-aotz-- 15.00g pool00 16.28

mount:
/dev/mapper/onn-local_storage on /local_storage type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)

[root@virt4 ~]# ll /
drwxr-xr-x. 3 vdsm kvm 76 Mar 23 09:02 local_storage

All operations were done through the Cockpit UI. With the 4.1.1 release I decided to upgrade it. After the server rebooted I discovered that my local storage folder had been cleared, so I need to recreate all VMs and disks from scratch. Is this a bug, or did I do something wrong? Best regards, Misak Khachatryan ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
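One thing worth checking in a situation like this (an assumption on my part, not something verified in the thread): an oVirt Node upgrade boots into a new image layer with its own /etc/fstab, so a custom mount added to the old layer's fstab will not be mounted after the upgrade, even though the LV and its data may still exist untouched.

```shell
# Does the data LV still exist after the upgrade?
lvs onn/local_storage

# If so, try mounting it manually and inspecting it:
mount /dev/mapper/onn-local_storage /local_storage
ls -l /local_storage

# If the data is intact, persist the mount in the *new* layer's fstab:
echo '/dev/mapper/onn-local_storage /local_storage xfs defaults 0 0' >> /etc/fstab
```

If the directory really was emptied rather than left unmounted, this sketch will not help and the data is gone.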
Re: [ovirt-users] oVirt backup and VM pauses.
Thank you Gianluca, reading it right now! Best regards, Misak Khachatryan On Thu, Mar 9, 2017 at 1:27 PM, Gianluca Cecchi wrote: > On Thu, Mar 9, 2017 at 7:34 AM, Maton, Brett > wrote: >> >> I noticed similar behavior. I think if the memory state isn't backed up >> then the machines won't be paused. >> I haven't seen an option in the script to 'flip' the flag, but then again I haven't >> looked at the script in enough detail to see what needs changing. >> > > > See my solution in another thread here: > http://lists.ovirt.org/pipermail/users/2017-March/080338.html > > HIH, > Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] oVirt backup and VM pauses.
Hello, I'm using oVirt and recently added backup functionality using this script: https://github.com/wefixit-AT/oVirtBackup This script runs every day from cron, but I noticed very bad behavior with the machines: at some stage the script pauses them, which leads to problems. I have an NTP stratum-2 server as a virtual machine, which I added to pool.ntp.org. When the machine pauses, it loses time sync for a while and the ntp.org monitoring system removes it from the pool. It then takes about 24 hours to regain a good enough score to be included in the pool again; then the next backup run arrives, and so on. So my question is: are there any suggestions to prevent this behavior, or is another backup solution available that does not pause VMs? Best regards, Misak Khachatryan ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
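A possible workaround, untested by me: the pause typically comes from saving the VM's memory state as part of the backup snapshot. The oVirt REST API allows creating a snapshot with persist_memorystate set to false, which skips the memory save and so avoids the pause. The URL, credentials, and VM id below are placeholders, not values from this thread.

```shell
# Create a disk-only snapshot (no memory state) via the REST API:
curl -s -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<snapshot>
        <description>nightly-backup</description>
        <persist_memorystate>false</persist_memorystate>
      </snapshot>' \
  'https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots'
```

Whether the oVirtBackup script exposes this flag would need to be checked in its source; the solution Gianluca links in the follow-up thread is along the same lines.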
Re: [ovirt-users] How to force glusterfs to use RDMA?
Hello, hmm, I see that I'm not using RDMA; how can I safely enable it? I have a 3-server setup with GlusterFS:

[root@virt1 ~]# gluster volume info

Volume Name: data
Type: Replicate
Volume ID: d53c2202-0dba-4973-960e-4642d41bcdd8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt1:/gluster/brick2/data
Brick2: virt2:/gluster/brick2/data
Brick3: virt3:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 1
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on

oVirt 4.1. Thanks in advance! Best regards, Misak Khachatryan On Thu, Mar 2, 2017 at 4:06 PM, Arman Khalatyan wrote: > BTW RDMA is working as expected: > root@clei26 ~]# qperf clei22.vib tcp_bw tcp_lat > tcp_bw: > bw = 475 MB/sec > tcp_lat: > latency = 52.8 us > [root@clei26 ~]# > > thank you beforehand. > Arman.
> > > On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan wrote: >> >> just for reference: >> gluster volume info >> >> Volume Name: GluReplica >> Type: Replicate >> Volume ID: ee686dfe-203a-4caa-a691-26353460cc48 >> Status: Started >> Snapshot Count: 0 >> Number of Bricks: 1 x (2 + 1) = 3 >> Transport-type: tcp,rdma >> Bricks: >> Brick1: 10.10.10.44:/zclei22/01/glu >> Brick2: 10.10.10.42:/zclei21/01/glu >> Brick3: 10.10.10.41:/zclei26/01/glu (arbiter) >> Options Reconfigured: >> network.ping-timeout: 30 >> server.allow-insecure: on >> storage.owner-gid: 36 >> storage.owner-uid: 36 >> cluster.data-self-heal-algorithm: full >> features.shard: on >> cluster.server-quorum-type: server >> cluster.quorum-type: auto >> network.remote-dio: enable >> cluster.eager-lock: enable >> performance.stat-prefetch: off >> performance.io-cache: off >> performance.read-ahead: off >> performance.quick-read: off >> performance.readdir-ahead: on >> nfs.disable: on >> >> >> >> [root@clei21 ~]# gluster volume status >> Status of volume: GluReplica >> Gluster process TCP Port RDMA Port Online >> Pid >> >> -- >> Brick 10.10.10.44:/zclei22/01/glu 49158 49159 Y >> 15870 >> Brick 10.10.10.42:/zclei21/01/glu 49156 49157 Y >> 17473 >> Brick 10.10.10.41:/zclei26/01/glu 49153 49154 Y >> 18897 >> Self-heal Daemon on localhost N/A N/AY >> 17502 >> Self-heal Daemon on 10.10.10.41 N/A N/AY >> 13353 >> Self-heal Daemon on 10.10.10.44 N/A N/AY >> 32745 >> >> Task Status of Volume GluReplica >> >> -- >> There are no active volume tasks >> >> >> On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan >> wrote: >>> >>> I am not able to mount with RDMA over cli >>> Are there some volfile parameters needs to be tuned? 
>>> /usr/bin/mount -t glusterfs -o backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma 10.10.10.44:/GluReplica /mnt
>>>
>>> [2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.9 (args: /usr/sbin/glusterfs --volfile-server=10.10.10.44 --volfile-server=10.10.10.44 --volfile-server=10.10.10.42 --volfile-server=10.10.10.41 --volfile-server-transport=rdma --volfile-id=/GluReplica.rdma /mnt)
>>> [2017-03-02 11:49:47.812699] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>>> [2017-03-02 11:49:47.825210] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
>>> [2017-03-02 11:49:47.828996] W [MSGID: 103071] [rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel cr
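For reference, a minimal sketch of how one would typically move a volume like `data` from tcp to tcp,rdma transport. This is a dry run that only prints the commands, and it assumes GlusterFS's `config.transport` volume option is available and that every brick host has a working RDMA device (worth confirming first with a qperf test like the one above); the volume must be stopped for the change, and clients remounted afterwards:

```shell
# Dry-run sketch: build (but do not execute) the commands for switching a
# Gluster volume's transport from tcp to tcp,rdma.
# Assumptions: the config.transport volume option exists in your Gluster
# version, and all bricks have functioning RDMA hardware.
VOL=data
CMDS="gluster volume stop $VOL
gluster volume set $VOL config.transport tcp,rdma
gluster volume start $VOL"

# Clients would then remount using the rdma transport, e.g.:
MOUNT="mount -t glusterfs -o transport=rdma virt1:/$VOL /mnt"

echo "$CMDS"
echo "$MOUNT"
```

Since stopping the volume interrupts all I/O, this should only be attempted in a maintenance window with all VMs on that storage shut down.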
Re: [ovirt-users] Disable guest agent not installed warning
Thanks to all for the replies; it's a very minor, annoying cosmetic thing, no big deal.

Best regards,
Misak Khachatryan,
Network Administration and Monitoring Department Manager,
GNC-ALFA CJSC
1 Khaghaghutyan str., Abovyan, 2201 Armenia
Tel: +374 60 46 99 70 (9670), Mob.: +374 93 19 98 40
URL: www.rtarmenia.am

On Fri, Feb 24, 2017 at 7:16 PM, Arik Hadas wrote:
>
>
> On Thu, Feb 23, 2017 at 7:48 PM, Karli Sjöberg wrote:
>>
>>
>> Den 23 feb. 2017 6:10 em skrev Nir Soffer :
>> >
>> > On Thu, Feb 23, 2017 at 6:47 PM, Karli Sjöberg
>> > wrote:
>> > >
>> > > Den 23 feb. 2017 4:09 em skrev Nir Soffer :
>> > >>
>> > >> On Thu, Feb 23, 2017 at 4:38 PM, Karli Sjöberg
>> > >> wrote:
>> > >> > On Thu, 2017-02-23 at 13:46 +, Misak Khachatryan wrote:
>> > >> >> Hi there. Anybody knows how to suppress guest agent not installed
>> > >> >> warning for VM? I have one VM based on FreeBSD, where i can't
>> > >> >> install
>> > >> >> it and the warning is just annoying.
>> > >>
>> > >> I learned today that HA does not work properly if guest agent is not
>> > >> installed, so you should be careful about the warning.
>> > >>
>> > >> Nir
>> > >
>> > > Uhm, what makes you say that exactly? I've been running oVirt since god
>> > > knows how long now, faced countless of times where HA worked perfectly
>> > > fine. If this is a new "feature", you have my sincerest thumbs down on
>> > > that one :)
>> >
>> > If the vm is terminated on the host, we will treat that as normal
>> > termination (e.g. poweroff within the vm), and will not start the vm on
>> > another host.
>> >
>> > You probably do not notice this since nobody is killing your vms.
>> >
>> > Nir
>>
>> So a 'kill -9' or qemu bug, huh? Good to know, but I consider it a small
>> risk.
>
> Note that what Nir was referring to [1,2] is not something new, is unrelated
> to the original question (the warning shown when connecting using a spice
> console to a VM with no guest agent has nothing to do with its
> high-availability) and is indeed minor.
>
> Back to the original question, I'm afraid there is no way to prevent this
> warning today.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1384007
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1418927
>
>> BTW Misak, you should be able to get the guest agent running in FreeBSD
>> 11, since they added VirtIO serial now, haven't tried it myself, a bit
>> preoccupied at the moment.
>>
>> /K
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
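For Linux guests where the warning is actionable, a quick sanity check is whether the agent service is actually running and whether the host exposed a virtio-serial channel to it. The unit name and device path below are assumptions based on typical oVirt-era Linux guests, printed here as a dry run rather than executed:

```shell
# Dry-run sketch: commands one would run *inside a Linux guest* to check
# the oVirt guest agent. Unit name and virtio-serial path are assumptions
# and may differ by distribution and oVirt version.
CHECKS="systemctl status ovirt-guest-agent
ls /dev/virtio-ports/   # the agent needs a virtio-serial channel from the host"
echo "$CHECKS"
```

None of this helps for an appliance image that cannot be modified, which is exactly the situation described in this thread.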
Re: [ovirt-users] Disable guest agent not installed warning
The problem is that it's not a FreeBSD machine, it's a FreeBSD-based Juniper Virtual Router Reflector image, which I cannot modify in any way.

Best regards,
Misak Khachatryan,
Network Administration and Monitoring Department Manager,
GNC-ALFA CJSC
1 Khaghaghutyan str., Abovyan, 2201 Armenia
Tel: +374 60 46 99 70 (9670), Mob.: +374 93 19 98 40
URL: www.rtarmenia.am

On Thu, Feb 23, 2017 at 9:48 PM, Karli Sjöberg wrote:
>
> Den 23 feb. 2017 6:10 em skrev Nir Soffer :
>>
>> On Thu, Feb 23, 2017 at 6:47 PM, Karli Sjöberg
>> wrote:
>> >
>> > Den 23 feb. 2017 4:09 em skrev Nir Soffer :
>> >>
>> >> On Thu, Feb 23, 2017 at 4:38 PM, Karli Sjöberg
>> >> wrote:
>> >> > On Thu, 2017-02-23 at 13:46 +, Misak Khachatryan wrote:
>> >> >> Hi there. Anybody knows how to suppress guest agent not installed
>> >> >> warning for VM? I have one VM based on FreeBSD, where i can't
>> >> >> install
>> >> >> it and the warning is just annoying.
>> >>
>> >> I learned today that HA does not work properly if guest agent is not
>> >> installed, so you should be careful about the warning.
>> >>
>> >> Nir
>> >
>> > Uhm, what makes you say that exactly? I've been running oVirt since god
>> > knows how long now, faced countless of times where HA worked perfectly
>> > fine. If this is a new "feature", you have my sincerest thumbs down on
>> > that one :)
>>
>> If the vm is terminated on the host, we will treat that as normal
>> termination (e.g. poweroff within the vm), and will not start the vm on
>> another host.
>>
>> You probably do not notice this since nobody is killing your vms.
>>
>> Nir
>
> So a 'kill -9' or qemu bug, huh? Good to know, but I consider it a small
> risk.
>
> BTW Misak, you should be able to get the guest agent running in FreeBSD 11,
> since they added VirtIO serial now, haven't tried it myself, a bit
> preoccupied at the moment.
>
> /K
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Disable guest agent not installed warning
Sorry for the duplicate posts; I messed up between accounts.

The problem is that it's not a FreeBSD machine, it's a FreeBSD-based Juniper Virtual Router Reflector image, which I cannot modify in any way. It's a proprietary product; only part of it is FreeBSD (mostly the kernel and base environment).

Best regards,
Misak Khachatryan

On Thu, Feb 23, 2017 at 9:48 PM, Karli Sjöberg wrote:
>
> Den 23 feb. 2017 6:10 em skrev Nir Soffer :
>>
>> On Thu, Feb 23, 2017 at 6:47 PM, Karli Sjöberg
>> wrote:
>> >
>> > Den 23 feb. 2017 4:09 em skrev Nir Soffer :
>> >>
>> >> On Thu, Feb 23, 2017 at 4:38 PM, Karli Sjöberg
>> >> wrote:
>> >> > On Thu, 2017-02-23 at 13:46 +, Misak Khachatryan wrote:
>> >> >> Hi there. Anybody knows how to suppress guest agent not installed
>> >> >> warning for VM? I have one VM based on FreeBSD, where i can't
>> >> >> install
>> >> >> it and the warning is just annoying.
>> >>
>> >> I learned today that HA does not work properly if guest agent is not
>> >> installed, so you should be careful about the warning.
>> >>
>> >> Nir
>> >
>> > Uhm, what makes you say that exactly? I've been running oVirt since god
>> > knows how long now, faced countless of times where HA worked perfectly
>> > fine. If this is a new "feature", you have my sincerest thumbs down on
>> > that one :)
>>
>> If the vm is terminated on the host, we will treat that as normal
>> termination (e.g. poweroff within the vm), and will not start the vm on
>> another host.
>>
>> You probably do not notice this since nobody is killing your vms.
>>
>> Nir
>
> So a 'kill -9' or qemu bug, huh? Good to know, but I consider it a small
> risk.
>
> BTW Misak, you should be able to get the guest agent running in FreeBSD 11,
> since they added VirtIO serial now, haven't tried it myself, a bit
> preoccupied at the moment.
>
> /K
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
[ovirt-users] Disable guest agent not installed warning
Hi there. Does anybody know how to suppress the "guest agent not installed" warning for a VM? I have one VM based on FreeBSD, where I can't install it, and the warning is just annoying.

Best regards,
Misak Khachatryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] ovirt 4.0.4 : unable to upload images
Sorry, i did restart on wrong host, not virt1. Wrong console. Thank you very much for help! Best regards, Misak Khachatryan On Wed, Nov 2, 2016 at 2:20 PM, Misak Khachatryan wrote: > Hmm, > > I tried same command on virt2 and virt3, and service present there. > After that upload started. > How i can fix virt1 ? > > Best regards, > Misak Khachatryan > > > On Wed, Nov 2, 2016 at 2:17 PM, Misak Khachatryan wrote: >> [root@misak ~]# service ovirt-imageio-daemon restart >> Redirecting to /bin/systemctl restart ovirt-imageio-daemon.service >> Failed to restart ovirt-imageio-daemon.service: Unit >> ovirt-imageio-daemon.service not found. >> >> Seems service not present. >> Centos 7 >> >> >> >> Best regards, >> Misak Khachatryan >> >> >> On Wed, Nov 2, 2016 at 1:08 PM, Misak Khachatryan >> wrote: >>> Hello, >>> >>> same problem: >>> >>> 3 host cluster, log of one hosts, same on two others: >>> >>> [root@virt2 ~]# journalctl -x -u ovirt-imageio-daemon >>> -- Logs begin at Mon 2016-10-31 12:45:30 +04, end at Wed 2016-11-02 >>> 13:03:28 +04. -- >>> Oct 31 12:45:38 virt2.management.gnc.am systemd[1]: Starting oVirt >>> ImageIO Daemon... >>> -- Subject: Unit ovirt-imageio-daemon.service has begun start-up >>> -- Defined-By: systemd >>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel >>> -- >>> -- Unit ovirt-imageio-daemon.service has begun starting up. >>> Oct 31 12:45:38 virt2.management.gnc.am systemd[1]: Started oVirt >>> ImageIO Daemon. >>> -- Subject: Unit ovirt-imageio-daemon.service has finished start-up >>> -- Defined-By: systemd >>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel >>> -- >>> -- Unit ovirt-imageio-daemon.service has finished starting up. >>> -- >>> -- The start-up result is done. 
>>> Nov 01 13:21:24 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >>> - - [01/Nov/2016 13:21:24] "PUT >>> /tickets/d122778d-beb4-426f-8074-70e033253670 HTTP/1.1" 200 0 >>> Nov 01 13:21:44 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >>> - - [01/Nov/2016 13:21:44] "DELETE >>> /tickets/d122778d-beb4-426f-8074-70e033253670 HTTP/1.1" 204 0 >>> Nov 01 13:28:48 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >>> - - [01/Nov/2016 13:28:48] "PUT >>> /tickets/87eeb8c4-2184-4793-8843-5835e2fac025 HTTP/1.1" 200 0 >>> Nov 01 13:29:03 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >>> - - [01/Nov/2016 13:29:03] "DELETE >>> /tickets/87eeb8c4-2184-4793-8843-5835e2fac025 HTTP/1.1" 204 0 >>> Nov 01 13:37:50 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >>> - - [01/Nov/2016 13:37:50] "PUT >>> /tickets/5d78deac-b373-43c7-b0ef-9dc94ef3324b HTTP/1.1" 200 0 >>> Nov 01 13:38:12 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >>> - - [01/Nov/2016 13:38:12] "DELETE >>> /tickets/5d78deac-b373-43c7-b0ef-9dc94ef3324b HTTP/1.1" 204 0 >>> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> Traceback (most recent call last): >>> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> File "/usr/lib64/python2.7/SocketServer.py", line 593, in >>> process_request_thread >>> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> self.finish_request(request, client_address) >>> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> File "/usr/lib64/python2.7/SocketServer.py", line 334, in >>> finish_request >>> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> self.RequestHandlerClass(request, client_address, self) >>> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> File "/usr/lib64/python2.7/SocketServer.py", line 649, in __init__ >>> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> self.handle() >>> Nov 01 14:12:22 
virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> File "/usr/lib64/python2.7/wsgiref/simple_server.py", line 116, in >>> handle >>> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> self.raw_requestline = self.rfile.readline() >>> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >>> File "/usr/lib64/python2.7/socket.py", li
Re: [ovirt-users] ovirt 4.0.4 : unable to upload images
Hmm, I tried same command on virt2 and virt3, and service present there. After that upload started. How i can fix virt1 ? Best regards, Misak Khachatryan On Wed, Nov 2, 2016 at 2:17 PM, Misak Khachatryan wrote: > [root@misak ~]# service ovirt-imageio-daemon restart > Redirecting to /bin/systemctl restart ovirt-imageio-daemon.service > Failed to restart ovirt-imageio-daemon.service: Unit > ovirt-imageio-daemon.service not found. > > Seems service not present. > Centos 7 > > > > Best regards, > Misak Khachatryan > > > On Wed, Nov 2, 2016 at 1:08 PM, Misak Khachatryan > wrote: >> Hello, >> >> same problem: >> >> 3 host cluster, log of one hosts, same on two others: >> >> [root@virt2 ~]# journalctl -x -u ovirt-imageio-daemon >> -- Logs begin at Mon 2016-10-31 12:45:30 +04, end at Wed 2016-11-02 >> 13:03:28 +04. -- >> Oct 31 12:45:38 virt2.management.gnc.am systemd[1]: Starting oVirt >> ImageIO Daemon... >> -- Subject: Unit ovirt-imageio-daemon.service has begun start-up >> -- Defined-By: systemd >> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel >> -- >> -- Unit ovirt-imageio-daemon.service has begun starting up. >> Oct 31 12:45:38 virt2.management.gnc.am systemd[1]: Started oVirt >> ImageIO Daemon. >> -- Subject: Unit ovirt-imageio-daemon.service has finished start-up >> -- Defined-By: systemd >> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel >> -- >> -- Unit ovirt-imageio-daemon.service has finished starting up. >> -- >> -- The start-up result is done. 
>> Nov 01 13:21:24 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >> - - [01/Nov/2016 13:21:24] "PUT >> /tickets/d122778d-beb4-426f-8074-70e033253670 HTTP/1.1" 200 0 >> Nov 01 13:21:44 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >> - - [01/Nov/2016 13:21:44] "DELETE >> /tickets/d122778d-beb4-426f-8074-70e033253670 HTTP/1.1" 204 0 >> Nov 01 13:28:48 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >> - - [01/Nov/2016 13:28:48] "PUT >> /tickets/87eeb8c4-2184-4793-8843-5835e2fac025 HTTP/1.1" 200 0 >> Nov 01 13:29:03 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >> - - [01/Nov/2016 13:29:03] "DELETE >> /tickets/87eeb8c4-2184-4793-8843-5835e2fac025 HTTP/1.1" 204 0 >> Nov 01 13:37:50 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >> - - [01/Nov/2016 13:37:50] "PUT >> /tickets/5d78deac-b373-43c7-b0ef-9dc94ef3324b HTTP/1.1" 200 0 >> Nov 01 13:38:12 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / >> - - [01/Nov/2016 13:38:12] "DELETE >> /tickets/5d78deac-b373-43c7-b0ef-9dc94ef3324b HTTP/1.1" 204 0 >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> Traceback (most recent call last): >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> File "/usr/lib64/python2.7/SocketServer.py", line 593, in >> process_request_thread >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> self.finish_request(request, client_address) >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> File "/usr/lib64/python2.7/SocketServer.py", line 334, in >> finish_request >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> self.RequestHandlerClass(request, client_address, self) >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> File "/usr/lib64/python2.7/SocketServer.py", line 649, in __init__ >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> self.handle() >> Nov 01 14:12:22 virt2.management.gnc.am 
ovirt-imageio-daemon[1575]: >> File "/usr/lib64/python2.7/wsgiref/simple_server.py", line 116, in >> handle >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> self.raw_requestline = self.rfile.readline() >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> File "/usr/lib64/python2.7/socket.py", line 447, in readline >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> data = self._sock.recv(self._rbufsize) >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> File "/usr/lib64/python2.7/ssl.py", line 736, in recv >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> return self.read(buflen) >> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: >> File "/usr/lib64/python2.7/ssl.py", line 630, in read >> Nov 01 14:
Re: [ovirt-users] ovirt 4.0.4 : unable to upload images
[root@misak ~]# service ovirt-imageio-daemon restart
Redirecting to /bin/systemctl restart ovirt-imageio-daemon.service
Failed to restart ovirt-imageio-daemon.service: Unit
ovirt-imageio-daemon.service not found.

Seems the service is not present.
CentOS 7

Best regards,
Misak Khachatryan

On Wed, Nov 2, 2016 at 1:08 PM, Misak Khachatryan wrote:
> Hello,
>
> same problem:
>
> 3-host cluster; log of one host, same on the two others:
>
> [root@virt2 ~]# journalctl -x -u ovirt-imageio-daemon
> -- Logs begin at Mon 2016-10-31 12:45:30 +04, end at Wed 2016-11-02 13:03:28 +04. --
> Oct 31 12:45:38 virt2.management.gnc.am systemd[1]: Starting oVirt ImageIO Daemon...
> -- Subject: Unit ovirt-imageio-daemon.service has begun start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit ovirt-imageio-daemon.service has begun starting up.
> Oct 31 12:45:38 virt2.management.gnc.am systemd[1]: Started oVirt ImageIO Daemon.
> -- Subject: Unit ovirt-imageio-daemon.service has finished start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit ovirt-imageio-daemon.service has finished starting up.
> --
> -- The start-up result is done.
> Nov 01 13:21:24 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / - - [01/Nov/2016 13:21:24] "PUT /tickets/d122778d-beb4-426f-8074-70e033253670 HTTP/1.1" 200 0
> Nov 01 13:21:44 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / - - [01/Nov/2016 13:21:44] "DELETE /tickets/d122778d-beb4-426f-8074-70e033253670 HTTP/1.1" 204 0
> Nov 01 13:28:48 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / - - [01/Nov/2016 13:28:48] "PUT /tickets/87eeb8c4-2184-4793-8843-5835e2fac025 HTTP/1.1" 200 0
> Nov 01 13:29:03 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / - - [01/Nov/2016 13:29:03] "DELETE /tickets/87eeb8c4-2184-4793-8843-5835e2fac025 HTTP/1.1" 204 0
> Nov 01 13:37:50 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / - - [01/Nov/2016 13:37:50] "PUT /tickets/5d78deac-b373-43c7-b0ef-9dc94ef3324b HTTP/1.1" 200 0
> Nov 01 13:38:12 virt2.management.gnc.am ovirt-imageio-daemon[1575]: / - - [01/Nov/2016 13:38:12] "DELETE /tickets/5d78deac-b373-43c7-b0ef-9dc94ef3324b HTTP/1.1" 204 0
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: Traceback (most recent call last):
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: File "/usr/lib64/python2.7/SocketServer.py", line 593, in process_request_thread
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: self.finish_request(request, client_address)
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: File "/usr/lib64/python2.7/SocketServer.py", line 334, in finish_request
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: self.RequestHandlerClass(request, client_address, self)
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: File "/usr/lib64/python2.7/SocketServer.py", line 649, in __init__
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: self.handle()
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: File "/usr/lib64/python2.7/wsgiref/simple_server.py", line 116, in handle
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: self.raw_requestline = self.rfile.readline()
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: File "/usr/lib64/python2.7/socket.py", line 447, in readline
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: data = self._sock.recv(self._rbufsize)
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: File "/usr/lib64/python2.7/ssl.py", line 736, in recv
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: return self.read(buflen)
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: File "/usr/lib64/python2.7/ssl.py", line 630, in read
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: v = self._sslobj.read(len or 1024)
> Nov 01 14:12:22 virt2.management.gnc.am ovirt-imageio-daemon[1575]: SSLError: [SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca (_ssl.c:1936)
> Nov 01 14:12:27 virt2.management.gnc.am ovirt-imageio-daemon[1575]: 192.168.1.75 - - [01/Nov/2016 14:12:27] "GET / HTTP/1.1" 404 119
> Nov 01 14:12:27 virt2.management.gnc.am ovirt-imageio-daemon[1575]: 192.168.1.75 - - [01/Nov/2016 14:12:27] "GET /favicon.ico HTTP/1.1" 404 130
>
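For a host like virt1 where the unit is missing, the usual first step is to check whether the package is installed at all and reinstall it if not. A hedged sketch (dry run: the commands are only printed, not executed), assuming the package name `ovirt-imageio-daemon` from this thread and a configured oVirt yum repository on the host:

```shell
# Dry-run sketch: check for the ovirt-imageio-daemon package on a host and
# print the remediation commands if it is missing. Repository availability
# is assumed; on a real host, run the printed commands directly.
PKG=ovirt-imageio-daemon
CHECK="rpm -q $PKG"
FIX="yum install -y $PKG && systemctl enable --now $PKG"
echo "$CHECK"
echo "$FIX"
```

If the package is present but the unit still cannot be found, `systemctl daemon-reload` followed by another restart is a cheap next check.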
Re: [ovirt-users] ovirt 4.0.4 : unable to upload images
shed session: expiration: '1478077167', imaged-host-uri: 'https://virt1.management.gnc.am:54322', proxy-ticket: 'eyJzYWx0IjoiTjdxdnFucE83MUU9IiwiZGF0YSI6IntcbiAgXCJuYmZcIiA6...MjAxNjExMDIwNzU5MjciLCJ2YWxpZFRvIjoiMjAxNjExMDIwODU5MjcifQ==', session-id: '177decb1-1fe5-4195-924c-2f6c3bdf2849', transfer-ticket: '7d83b9c5-2e0b-4ef2-a004-ee6c7b4d7901'
(Thread-18 ) INFO 2016-11-02 07:59:39,661 connectionpool:735:requests.packages.urllib3.connectionpool:(_new_conn) Starting new HTTPS connection (1): virt1.management.gnc.am
(Thread-18 ) ERROR 2016-11-02 07:59:39,669 image_handler:186:root:(make_imaged_request) Failed communicating with vdsm-imaged: An SSL error occurred.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/image_handler.py", line 177, in make_imaged_request
    timeout=timeout, stream=stream)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 431, in send
    raise SSLError(e, request=request)
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:765)
^C

It seems that the browser is not the problem, but rather the connection between ovirt-imageio-proxy on the engine and vdsm on the hosts.

Best regards,
Misak Khachatryan,
Network Administration and Monitoring Department Manager,
GNC-ALFA CJSC
1 Khaghaghutyan str., Abovyan, 2201 Armenia
Tel: +374 60 46 99 70 (9670), Mob.: +374 93 19 98 40
URL: www.rtarmenia.am

On Wed, Nov 2, 2016 at 12:41 PM, Amit Aviram wrote:
>
>
> On Wed, Nov 2, 2016 at 8:48 AM, Yedidyah Bar David wrote:
>>
>> On Tue, Nov 1, 2016 at 10:42 PM, Claude Durocher
>> wrote:
>> > We have a setup with ovirt 4.0.4 (hosted engine) and I try to import a
>> > Ubuntu qcow cloud image. When I try to import, it stops with the error
>> > 'Unable to upload image to disk ... due to a network error. Make sure
>> > ovirt-imageio-proxy service is installed and configured, and
>> > ovirt-engine's
>> > certificate is registered as a valid CA in the browser'.
>> >
>> > The ovirt-imageio-proxy service is running on the engine. No errors in
>> > the
>> > log (just a mention that the service has started up). I also imported
>> > the
>> > server CA in my browser (Firefox 49 on Ubuntu).
>>
>> Please check:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1317253
>> https://www.ovirt.org/develop/release-management/features/storage/image-upload/
>>
>> Do you have on your host 'ovirt-imageio-daemon' installed and running?
>>
>> Did you manually configure iptables on the host? If so, you need port
>> 54322 open.
>>
>> If still stuck, please check/attach:
>>
>> On engine machine:
>>
>> /var/log/ovirt-engine/*
>> /var/log/ovirt-imageio-proxy/*
>> /var/log/httpd/*
>>
>> On host:
>>
>> /var/log/messages
>> journalctl
>>
>> Thanks,
>
> Please also attach your browser logs.
>
>>
>> --
>> Didi
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
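Since the CERTIFICATE_VERIFY_FAILED happens between the proxy (on the engine) and the daemon (on the host), a useful check is whether the certificate served on port 54322 actually chains to the engine CA. A sketch, printed as a dry run: the host name is taken from the thread, and the CA path `/etc/pki/ovirt-engine/ca.pem` is an assumption based on the typical engine layout and may differ on your setup:

```shell
# Dry-run sketch: print an openssl command (to run on the engine machine)
# that checks whether the cert served by ovirt-imageio-daemon on a host
# verifies against the engine CA. Host and CA path are assumptions.
HOST=virt1.management.gnc.am
CAFILE=/etc/pki/ovirt-engine/ca.pem
CMD="echo | openssl s_client -connect $HOST:54322 -CAfile $CAFILE 2>/dev/null | grep 'Verify return code'"
echo "$CMD"
```

A result other than "Verify return code: 0 (ok)" would point at a stale or missing vdsm/imageio certificate on that host, which re-enrolling the host's certificates would typically fix.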