[ovirt-users] Troubleshooting oVirt Node deploy: FQDN not reachable
In a recent thread, Roberto mentioned seeing the error message "FQDN Not Reachable" when trying to deploy oVirt Node 4.4.1, but was able to get past that error by using ovirt-node-ng-installer-4.4.2-2020080612.el8.iso. I experienced the same problem on an oVirt Node 4.4.1 install, so I tried the latest 4.4.2 release. When that failed, I went back and installed from the exact same image that Roberto said worked on the 4.4.2 branch: ovirt-node-ng-installer-4.4.2-2020080612.el8.iso. Unfortunately, that still isn't working for me - which tells me I'm probably doing something wrong.

Given the following facts:

[root@dev1-centos ~]# hostname
dev1-centos.office.barredowlweb.com
[root@dev1-centos ~]# host dev1-centos.office.barredowlweb.com
dev1-centos.office.barredowlweb.com has address 192.168.2.96

I am trying to install oVirt using the Hyperconverged Gluster Wizard for a single node. In the "Host1" box, I enter the full hostname: dev1-centos.office.barredowlweb.com

That's when I get the FQDN error message. Am I missing something here?
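For anyone hitting this, a few sanity checks worth running on the node itself - a hedged suggestion, since this thread doesn't document exactly what the wizard validates; it may also require a reverse (PTR) record and a pingable address:

  # reverse lookup: does a PTR record exist for the host's address?
  host 192.168.2.96
  # does the resolver the node actually uses agree with "host"?
  getent hosts dev1-centos.office.barredowlweb.com
  # is the resolved address reachable from the node itself?
  ping -c1 dev1-centos.office.barredowlweb.com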
[ovirt-users] Re: i/o wait and slow system
Libgfapi bypasses the user space -> kernel -> user space context switching that FUSE requires, so it gets better performance. I can't find the previous communication, so can you share your volume settings again?

Best Regards,
Strahil Nikolov

On Sunday, 23 August 2020 at 21:45:22 GMT+3, info--- via Users wrote:

Setting cluster.choose-local to on helped a lot to improve the read performance. Write performance is still bad. Am I right that this then looks more like a GlusterFS issue and not something that needs to be changed in oVirt (libgfapi) or on the VMs?

Changing TCP offloading did not make any difference:
- ethtool --offload enp1s0f0 rx off tx off
- ethtool --offload enp1s0f0 rx on tx on

MTU 9000 is fine and ping is working:
- ping -M do -s 8972 another-gluster-node
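To share the settings Strahil is asking for, something like the following should capture them all (replace <volname> with the actual volume name; "gluster volume get" with "all" dumps every option, including the reconfigured ones shown by "volume info"):

  # basic volume layout plus options changed from defaults
  gluster volume info <volname>
  # every option, defaults included
  gluster volume get <volname> all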
[ovirt-users] oVirt HCI 4.4.1 DR cannot import target volume after disaster
Hi, I have installed two HCI platforms and then set up the disaster-recovery configuration.

On the source platform I did:

# gluster volume set all cluster.enable-shared-storage enable
# gluster system:: execute gsec_create
# gluster volume geo-replication source gnodesen2-01.example.com::dest create push-pem
# gluster volume geo-replication source gnodesen2-01.example.com::dest config use_meta_volume true

On the target platform I did:

# gluster volume set dest features.shard enable

I then ran these two tests:

1 - Sync data using schedule_georep.py

a - Use the "python3 /usr/share/glusterfs/scripts/schedule_georep.py source gnodesen2-01.example.com dest" command to sync data between the volumes.
b - After the sync is complete, execute "gluster volume set dest features.read-only off".
c - Import (not create) the dest volume on the target platform manually through the Admin UI.

Result: the domain could not be added, and I get this error in the vdsm log:

2020-08-24 10:18:05,644+0100 ERROR (jsonrpc/5) [storage.HSM] Unexpected error (hsm:2843)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 456, in getVersion
    version = self.getMetaParam(DMDK_VERSION)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 452, in getMetaParam
    return self._metadata[key]
  File "/usr/lib/python3.6/site-packages/vdsm/storage/persistent.py", line 114, in __getitem__
    return dec(self._dict[key])
  File "/usr/lib/python3.6/site-packages/vdsm/storage/persistent.py", line 225, in __getitem__
    return self._metadata[key]
KeyError: 'VERSION'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2829, in getStorageDomainsList
    dom = sdCache.produce(sdUUID=sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 115, in produce
    domain.getRealDomain()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 51, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 139, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 156, in _findDomain
    return findMethod(sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/glusterSD.py", line 62, in findDomain
    return GlusterStorageDomain(GlusterStorageDomain.findDomainPath(sdUUID))
  File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 385, in __init__
    manifest = self.manifestClass(domainPath)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 162, in __init__
    sd.StorageDomainManifest.__init__(self, sdUUID, domaindir, metadata)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 352, in __init__
    version = self.getVersion()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 458, in getVersion
    raise se.MetaDataKeyNotFoundError("key={}".format(DMDK_VERSION))
vdsm.storage.exception.MetaDataKeyNotFoundError: Meta Data key not found error: ('key=VERSION',)

2020-08-24 10:18:05,644+0100 INFO (jsonrpc/5) [vdsm.api] FINISH getStorageDomainsList return={'domlist': []} from=:::10.80.101.27,48282, flow_id=5209d576-ee09-4de3-bc46-0c186c6ad52a, task_id=bc93aaae-5449-4d0d-9884-f47ea542a963 (api:54)
2020-08-24 10:18:05,645+0100 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getStorageDomains succeeded in 3.01 seconds (__init__:312)
2020-08-24 10:18:05,754+0100 INFO (jsonrpc/4) [vdsm.api] START disconnectStorageServer(domType=7, spUUID='----', conList=[{'password': '', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 'backup-volfile-servers=10.90.101.23:10.90.101.25', 'iqn': '', 'connection': '10.90.101.21:/dest', 'ipv6_enabled': 'false', 'id': 'cc24df41-19b0-41ed-ae85-e091e40cc612', 'user': '', 'tpgt': '1'}], options=None) from=:::10.80.101.27,48282, flow_id=5fcaf7d7-d3b6-4430-b5f9-28e6bd95656a, task_id=73654a43-c86b-4250-95d7-3e3399e5bc72 (api:48)
2020-08-24 10:18:05,754+0100 INFO (jsonrpc/4) [storage.Mount] unmounting /rhev/data-center/mnt/glusterSD/10.90.101.21:_dest (mount:215)

Note: I had to modify line 105 of /usr/share/glusterfs/scripts/schedule_georep.py to be able to use it with python3:

- key = "_".join([func.func_name] + list(args))
+ key = "_".join([func.__name__] + list(args))

2 - Sync data using the Admin UI

a - Create a schedule in the Admin UI and wait until the sync is complete.
b - Switch off read-only on the target volume: gluster volume set dest features.read-only off
c - Execute the Ansible failover script: ansible-playbook dr-rhv-failover.yml --tags="fail_over" (on the ansible machine)

Result: 2020-08-24 15:51:58,874+0100 INFO (jsonrpc/0) [IOProcessClient] (3a5d37
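The same python3 fix can be applied non-interactively; a one-line sketch, on the assumption that func.func_name appears only in that one caching helper (worth confirming with grep first):

  # locate the py2-only attribute, then replace it in place
  grep -n 'func\.func_name' /usr/share/glusterfs/scripts/schedule_georep.py
  sed -i 's/func\.func_name/func.__name__/' /usr/share/glusterfs/scripts/schedule_georep.py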
[ovirt-users] Re: Error exporting into ova
Hi Gianluca. We have a problem with "export as OVA" too. In our environment we back up all the VMs with a Python script that runs an export. If we run multiple exports at the same time, even on different datacenters (but the same engine), they wait for each other and do not run asynchronously. If just one of those exports takes 10 hours, all of them take 10+ hours too. It seems that all the exports have to finish a step of the playbook before going on, but we see only one "ansible-playbook" process at a time. Have you got any hint?

Regards,
Tommaso

On 23/07/2019 11:21, Gianluca Cecchi wrote:

On Fri, Jul 19, 2019 at 5:59 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

On Fri, Jul 19, 2019 at 4:14 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

On Fri, Jul 19, 2019 at 3:15 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

In engine.log the first error I see is 30 minutes after start:

2019-07-19 12:25:31,563+02 ERROR [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Ansible playbook execution failed: Timeout occurred while executing Ansible playbook.

In the meantime, as the playbook seems to be this one (I run the job from the engine):
/usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml

Based on what is described in bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1697301 I created for now the file /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf with

ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80

and restarted the engine and the python script to verify - just to see if it completes. Even so, in my case, with a 30 GB preallocated disk, the source problem is that the qemu-img convert command is very slow in I/O: it reads from iSCSI multipath (2 paths) at 2x3 MB/s and writes to NFS. If I run a dd command from the iSCSI device-mapper device to an NFS file, I get a 140 MB/s rate, which is what I would expect based on my storage array performance and my network. I have not understood why the qemu-img command is so slow.

The question still applies in case I have to make an appliance from a VM with a very big disk, where the copy could potentially take more than 30 minutes...

Gianluca

I confirm that setting ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT was the solution. I got the OVA completed:

Starting to export Vm enginecopy1 as a Virtual Appliance 7/19/19 5:53:05 PM
Vm enginecopy1 was exported successfully as a Virtual Appliance to path /save_ova/base/dump/myvm2.ova on Host ov301 7/19/19 6:58:07 PM

I have to understand why the conversion of the preallocated disk is so slow, because simulating I/O from the iSCSI LUN where the VM disks live to the NFS share gives me about 110 MB/s. I'm going to update to 4.3.4, just to see if there is any bug fixed. The same operation on vSphere takes 5 minutes. What is the ETA for 4.3.5?

One notice: if I manually create a snapshot of the same VM and then clone the snapshot, the process is this one:

vdsm 5713 20116 6 10:50 ? 00:00:04 /usr/bin/qemu-img convert -p -t none -T none -f raw /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b -O raw -W /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/d13a5c43-0138-4cbb-b663-f3ad5f9f5983/fd4e1b08-15fe-45ee-ab12-87dea2d29bc4

and its speed is quite a bit better (up to 100 MB/s read and 100 MB/s write), with a total elapsed time of 6 minutes and 30 seconds. During the OVA generation the process was instead:

vdsm 13505 13504 3 14:24 ? 00:01:26 qemu-img convert -T none -O qcow2 /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b /dev/loop0

Could the "-O qcow2" be the reason? Why qcow2 if the origin is preallocated (raw)?

Gianluca

--
Shellrent - The first Italian hosting. Security First
Tommaso De Marchi
COO - Chief Operating Officer
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 | Fax 04441492177
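For reference, the complete workaround described above amounts to the following (the file path and the value 80 come from Gianluca's message; that the unit is minutes is an assumption based on the bugzilla discussion, so verify before relying on it):

  # raise the engine's ansible playbook timeout, then restart the engine
  cat > /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf <<'EOF'
  ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80
  EOF
  systemctl restart ovirt-engine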
[ovirt-users] hosted engine migration
Hi all, I have an oVirt 4.3.10.4 environment with 2 hosts. Normal VMs in this environment can be migrated, but the hosted engine VM cannot. Can anyone help? Thanks a lot!

[screenshots attached: hosts status, normal vm migration, hosted engine vm migration]
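Since the screenshots did not survive the archive, a hedged starting point for anyone debugging the same symptom: check the HA agents' view of the engine VM on each host, as low scores or maintenance mode commonly block hosted engine migration (an assumption - the thread does not say what the screenshots showed):

  # per-host HA score and engine VM state
  hosted-engine --vm-status
  # confirm the engine itself responds
  hosted-engine --check-liveliness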
[ovirt-users] Re: oVirt node 4.4.1 deploy FQDN not reachable
I ran out of time to finish properly testing things Saturday evening (I'm in Eastern Time in the States), and wasn't able to spend any time on it Sunday or Monday. I intend to finish testing this evening (for the RC2 image that Roberto found worked), and will update the list at that point. I want to make sure the problem isn't me.

‐‐‐ Original Message ‐‐‐
On Tuesday, August 25, 2020 3:47 AM, Yedidyah Bar David wrote:

> On Sun, Aug 23, 2020 at 3:25 AM David White via Users <users@ovirt.org> wrote:
> > Getting the same problem on 4.4.2-2020081922. I'll try the image that Roberto found to work, and will report back.
>
> Thanks. Perhaps one of you would like to open a bug, and/or check/share relevant logs when this happens?
>
> Best regards,
>
> > Perhaps I'm still too new to this. :)
> >
> > ‐‐‐ Original Message ‐‐‐
> > On Saturday, August 22, 2020 7:12 PM, David White via Users <users@ovirt.org> wrote:
> > I'm running into the same problem. I just wiped my CentOS 8.2 system and, in place of that, installed oVirt Node 4.4.1. I'm downloading 4.4.2-2020081922 now.
> >
> > ‐‐‐ Original Message ‐‐‐
> > On Friday, August 7, 2020 11:55 AM, Roberto Nunin <robnu...@gmail.com> wrote:
> > On Fri, Aug 7, 2020 at 12:59, Roberto Nunin <robnu...@gmail.com> wrote:
> > > Hi all
> > > I have an issue while trying to deploy the hyperconverged solution on three oVirt Node boxes. The ISO used is ovirt-node-ng-installer-4.4.1-2020072310.el8.iso.
> > > When I choose gluster deployment from cockpit, I get a form where I can insert both the gluster FQDNs and the public FQDNs (which is what I need, due to distinct network cards & networks).
> > > If I insert the right names, which are resolved by the nodes, I nevertheless receive "FQDN is not reachable" below the Host1 entries.
> > > As already stated, these names are certainly resolved by the DNS in use. Any hints?
> >
> > Using ovirt-node-ng-installer-4.4.2-2020080612.el8.iso (4.4.2 RC2) the same issue does not happen.
> > --
> > Roberto Nunin
>
> --
> Didi
[ovirt-users] Re: Hosted Engine stuck in Firmware
On Sun, Aug 23, 2020 at 7:45 AM Vinícius Ferrão via Users wrote:
>
> Hello, I've a strange issue with oVirt 4.4.1. The hosted engine is stuck in the UEFI firmware and never actually boots.
>
> I think this happened when I changed the default VM mode for the cluster inside the datacenter.

If you think this indeed is the root cause, then perhaps:

> Is there a way to fix this without redeploying the engine?

If you happen to have backup copies of /var/run/ovirt-hosted-engine-ha/vm.conf, you can try:

hosted-engine --vm-start --vm-conf=somefile

If this works, update the VM/cluster/whatever back to a good state from the engine (after it's up), and wait to make sure it updated vm.conf on /var before you try to shut down/start the engine VM again.

Best regards,
--
Didi
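A minimal sketch of the recovery flow Didi describes, assuming a known-good copy of vm.conf was saved beforehand (the backup path here is illustrative, not a standard location):

  # taken earlier, while the engine VM configuration was still good
  cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/vm.conf.good
  # start the engine VM from the saved configuration
  hosted-engine --vm-start --vm-conf=/root/vm.conf.good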
[ovirt-users] Re: oVirt node 4.4.1 deploy FQDN not reachable
On Sun, Aug 23, 2020 at 3:25 AM David White via Users wrote:
>
> Getting the same problem on 4.4.2-2020081922. I'll try the image that Roberto found to work, and will report back.

Thanks. Perhaps one of you would like to open a bug, and/or check/share relevant logs when this happens?

Best regards,

> Perhaps I'm still too new to this. :)
>
> ‐‐‐ Original Message ‐‐‐
> On Saturday, August 22, 2020 7:12 PM, David White via Users wrote:
>
> I'm running into the same problem. I just wiped my CentOS 8.2 system and, in place of that, installed oVirt Node 4.4.1. I'm downloading 4.4.2-2020081922 now.
>
> ‐‐‐ Original Message ‐‐‐
> On Friday, August 7, 2020 11:55 AM, Roberto Nunin wrote:
>
> On Fri, Aug 7, 2020 at 12:59, Roberto Nunin wrote:
>>
>> Hi all
>>
>> I have an issue while trying to deploy the hyperconverged solution on three oVirt Node boxes. The ISO used is ovirt-node-ng-installer-4.4.1-2020072310.el8.iso.
>>
>> When I choose gluster deployment from cockpit, I get a form where I can insert both the gluster FQDNs and the public FQDNs (which is what I need, due to distinct network cards & networks).
>>
>> If I insert the right names, which are resolved by the nodes, I nevertheless receive "FQDN is not reachable" below the Host1 entries.
>>
>> As already stated, these names are certainly resolved by the DNS in use. Any hints?
>
> Using ovirt-node-ng-installer-4.4.2-2020080612.el8.iso (4.4.2 RC2) the same issue does not happen.
>
> --
> Roberto Nunin

--
Didi
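For anyone following up on the request for logs, the locations below are assumptions based on where a cockpit-driven gluster deployment typically writes on oVirt Node - verify locally before attaching anything to a bug:

  # cockpit's own messages around the time of the failed FQDN check
  journalctl -u cockpit --since today
  # gluster wizard deployment logs, if present on this build
  ls -l /var/log/cockpit/ovirt-dashboard/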
[ovirt-users] Re: Actual oVirt install questions
On Sat, Aug 22, 2020 at 4:01 PM David White via Users wrote:
>
> > by standalone do you mean all-in-one? or separate engine and hosts?
>
> I THINK all-in-one. :)
>
> The following are the commands that I ran:
>
> sudo yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
> sudo yum module -y enable javapackages-tools pki-deps postgresql:12
> sudo yum install ovirt-engine
> sudo engine-setup
> sudo mkdir -p /data/images
> sudo mkdir /data/iso
> sudo chown 36:36 /data/iso
> sudo chmod 0755 /data/iso
>
> > In 4.4 the change in imageio means it's not compatible with all-in-one anymore.
>
> Yes, in reading through your recent email thread here, that's what I picked up on. I was pretty sure that I hit the same bug, but wasn't positive.
>
> So, what's the point of all-in-one if you cannot upload ISOs and boot VMs off of ISOs? Is there an alternative way to set up a VM in all-in-one, such as booting from PXE or something?
>
> Regardless, the all-in-one setup was just for learning purposes. I may try a different install approach and try to get the self-hosted engine working. That said, I'm still unclear on the exact differences between the "self-hosted engine" and the standalone Manager. I'll go re-read earlier responses to my questions on that, as well as the glossary of sorts that Didi was so kind to write in your earlier thread on the imageio issue.

Let me clarify (again?): All-in-one - meaning both the (standalone) engine and the actual VM processes (vdsm/libvirtd/etc. and your VMs' qemu processes) on the same machine - has not been supported since 4.0 and is not tested routinely and systematically. The fact that it does usually work, and the fact that people sometimes report successes with it, is very nice to know, but does not change the situation :-)

If all you have is a single physical machine that you want to use for oVirt, you have basically two options:

1. Use HCI (meaning, hosted-engine + gluster). That's, I guess, what most people do, at least in production. When it works (usually - and we definitely aim for always!), it's the easiest way to install and get things going, but it is more complex under the hood and harder to understand/debug if you do not know quite well how things work internally.

2. Use some other virtualization method - I think most people use plain virt-manager - to create separate VMs, and imitate a multiple-machine setup using these VMs: e.g. one machine for the (standalone) engine, another for storage (e.g. nfs/iscsi/gluster), a few more as hosts (with nested KVM), etc. A rough sketch of this approach follows below.

The latter, (2), is what we use in CI - see the projects lago and ovirt-system-tests if interested.

Re the imageio upload issue: if it's indeed a result of using all-in-one, then please wait until Michael's bug is fixed, or see the other thread, or see above for using not-all-in-one :-) (But feel free to keep debugging this with Michael's help if both of you feel like it!)

Best regards,

> ‐‐‐ Original Message ‐‐‐
> On Saturday, August 22, 2020 7:49 AM, Michael Jones wrote:
>
> > I see you are using 4.4 standalone;
> >
> > by standalone do you mean all-in-one? or separate engine and hosts?
> >
> > In 4.4 the change in imageio means it's not compatible with all-in-one anymore. I'll be raising a bug request to support this later today (but officially all-in-one support ended in 4.0, so I don't know if the bug will be actioned).
> >
> > Kind Regards,
> > Mike
> >
> > On 22/08/2020 12:06, David White via Users wrote:
> > > Ok, that at least got the certificate trusted. Thank you for the fast response on that! The certificate is now installed and trusted, and I removed the exception in Firefox. (Screenshots attached)
> > > Unfortunately, the upload is still not working. I'm still getting the same error message: "Connection to ovirt-imageio-proxy service has failed."

--
Didi
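As a rough illustration of option 2 above - not the lago/ovirt-system-tests tooling itself, just a hand-rolled sketch; every name, size, and ISO path is invented, and the virt-install flags should be checked against the installed version:

  # enable nested KVM on an Intel machine (AMD uses the kvm_amd module instead)
  echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm-nested.conf
  modprobe -r kvm_intel && modprobe kvm_intel

  # one VM for the standalone engine
  virt-install --name engine --memory 4096 --vcpus 2 \
    --disk size=60 --os-variant centos8 --cdrom /path/to/CentOS-8.iso

  # one or more VMs to act as hosts, with the host CPU (and its virt extensions) passed through
  virt-install --name host1 --memory 8192 --vcpus 4 --cpu host-passthrough \
    --disk size=100 --os-variant centos8 --cdrom /path/to/CentOS-8.iso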
[ovirt-users] Re: Actual oVirt install questions
On Sat, Aug 22, 2020 at 3:19 AM David White wrote:
>
> > Please see my reply from a few minutes ago to the thread "[ovirt-users] ovirt-imageio : can't upload / download".
>
> Thank you. I read through the "ovirt-imageio: can't upload / download" thread, and your brief glossary was very helpful. Perhaps it would make sense to put some basic terminology like that somewhere in the official oVirt docs? I feel like there are a million different ways to install and configure oVirt, and it has taken me quite a bit of time to figure out the differences.

I agree this makes sense. I have now searched and failed to find such a page. Please open a doc bug to add it. Thanks!

There is such a page for RHV, which should be around 99% applicable, and which you can read in the meantime:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/product_guide/introduction

Best regards,

> > If you do this mainly for learning, then I suggest that you first play with a standalone engine.
>
> I took your advice and figured out how to get the "standalone engine" installed and running. I'm now able to log into the oVirt Admin Portal, view/create datacenters, etc.
>
> At the risk of creating a redundant thread, I'm going to reply to this thread about a problem I'm having uploading ISOs through the Admin Portal. I can't do it! Perhaps my issue is related to the same issue Michael is facing in the thread I just referenced - ovirt-imageio: can't upload/download.
>
> I have the standalone install:
>
> [dwhite@dev1-centos ~]$ sudo yum info ovirt-engine | grep Version
> Version : 4.4.1.10
> [dwhite@dev1-centos ~]$ sudo systemctl status ovirt-imageio
> ● ovirt-imageio.service - oVirt ImageIO Daemon
>    Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio.service; enabled; vendor preset: disabled)
>    Active: active (running) since Fri 2020-08-21 19:16:28 EDT; 49min ago
>  Main PID: 1435 (ovirt-imageio)
>     Tasks: 3 (limit: 409555)
>    Memory: 18.8M
>    CGroup: /system.slice/ovirt-imageio.service
>            └─1435 /usr/libexec/platform-python -s /usr/bin/ovirt-imageio
>
> Aug 21 19:16:28 dev1-centos.office.barredowlweb.com systemd[1]: Starting oVirt ImageIO Daemon...
> Aug 21 19:16:28 dev1-centos.office.barredowlweb.com systemd[1]: Started oVirt ImageIO Daemon.
>
> This is of course a new, default install.
>
> I have created a data directory for VM images, as well as ISOs:
>
> [dwhite@dev1-centos ~]$ ls -la /data/
> total 0
> drwxr-xr-x.  4 vdsm kvm   31 Aug 19 20:49 .
> dr-xr-xr-x. 19 root root 260 Aug 19 20:28 ..
> drwxr-xr-x.  3 vdsm kvm   50 Aug 21 20:12 images
> drwxr-xr-x.  3 vdsm kvm   50 Aug 21 20:12 iso
>
> I am now trying to upload an ISO through the Admin Portal by going to Storage -> Disks -> Upload -> Start. My client machine is Ubuntu.
>
> I click "Test Connection" and I get the error message:
> Connection to ovirt-imageio-proxy service has failed. Make sure the service is installed, configured, and the ovirt-engine certificate is registered as a valid CA in the browser.
>
> I downloaded the certificate by clicking on the link provided, and saved it into /usr/share/ca-certificates/mozilla/
> I then ran: sudo update-ca-certificates
>
> Then I reloaded the oVirt Admin Portal. I'm still getting the error message.
>
> Any advice here? Or am I running into the same problem that Michael has run into?

--
Didi
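One note on the certificate step above: Firefox keeps its own certificate store, so updating the system CA bundle with update-ca-certificates generally does not affect the browser - the engine CA usually has to be imported through Firefox's own certificate settings as well. For the system store on Ubuntu the conventional flow is the following (the filename is illustrative; update-ca-certificates only picks up *.crt files placed under /usr/local/share/ca-certificates):

  # install the downloaded engine CA into the system trust store
  sudo cp ovirt-engine-ca.pem /usr/local/share/ca-certificates/ovirt-engine-ca.crt
  sudo update-ca-certificates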