[ovirt-users] Re: ISO Upload is in Paused by System Status

2021-08-04 Thread Yedidyah Bar David
On Thu, Aug 5, 2021 at 1:44 AM  wrote:

> I'm attempting to upload an ISO that is approximately 9GB in size.  I've
> successfully started the upload process via the oVirt Management
> Console/Disk.  The upload started; however, it now has a status of "Paused
> by System".  My storage type is set to NFS Data.
>
> Is something happening in the background that is contributing to the "Paused
> by System"?  I have more than 84TB of space available, so I don't think it
> is a space issue.  Is there something that I need to do?  At this time I'm
> going to wait and see if it moves forward on its own.
>
> Please provide me with any help or direction.
>

Do you use the internal CA or an external one?
If an external, did you strictly follow the procedure to replace the CA?
Did you import the CA cert to your browser?
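If internal, importing it is usually just a matter of fetching it from the
engine and adding it to the browser's authorities; roughly something like this
(the FQDN below is a placeholder for your engine's):

curl -k -o ca.pem 'https://YOUR-ENGINE-FQDN/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'

and then importing ca.pem in the browser's certificate settings.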

Please search the list archives for similar issues. Thanks.

Good luck and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2KGOMROMEIXOCIUPZ3ZAIPLJU5TNTJ5K/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-04 Thread KK CHN
I appreciate everyone for sharing the valuable information.

1.  I am downloading CentOS 8, as the Python oVirt SDK installation says it
works on CentOS 8, and I need to set up a VM with this OS and install the
oVirt Python SDK on that VM. The requirement is that this CentOS 8 VM should
be able to communicate with the Rhevm 4.1 host node where the ovirt shell
(Rhevm Shell [connected]#) is available, right?

2.  The host with "Rhevm Shell [connected]# " should be reachable by ping and
by SSH from the CentOS 8 VM where Python 3 and the oVirt SDK are installed and
where the script (with the ovirt configuration file on this VM) will be
executed. Are these two connectivity checks enough for executing the script,
or do any other protocols need to be enabled in the firewall between these
two machines?
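(To be concrete, the checks I have in mind from the CentOS 8 VM are roughly
the following, where rhevm-host.example.com is just a placeholder for our
Rhevm 4.1 manager FQDN:

ping rhevm-host.example.com
ssh root@rhevm-host.example.com
curl -k https://rhevm-host.example.com/ovirt-engine/api

i.e. ICMP, SSH and HTTPS reachability.)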



3.  While googling I saw a post:
https://users.ovirt.narkive.com/CeEW3lcj/ovirt-users-clone-and-export-vm-by-ovirt-shell


action vm myvm export --storage_domain-name myexport

Will this command do the export, and in which format will it export to the
export domain?
Is there an option to provide with this command to specify any supported
format for the VM image to be exported?

This needs to be executed from the "Rhevm Shell [connected]# " TTY, right?



On Wed, Aug 4, 2021 at 1:00 PM Vojtech Juranek  wrote:

> On Wednesday, 4 August 2021 03:54:36 CEST KK CHN wrote:
> > On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:
> > > On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> > > > I have asked our VM maintainer to run the  command
> > > >
> > > > # virsh -r dumpxml vm-name_blah//as Super user
> > > >
> > > > But no output :   No matching domains found that was the TTY  output
> on
> > >
> > > that rhevm node when I executed the command.
> > >
> > > > Then I tried to execute #  virsh list //  it doesn't list any VMs
> > >
> > > !!!   ( How come this ? Does the Rhevm node need to enable any CLI
> with
> > > License key or something to list Vms or  to dumpxml   with   virsh ? or
> > > its
> > > CLI commands ?
> > >
> > > RHV undefine the vms when they are not running.
> > >
> > > > Any way I want to know what I have to ask the   maintainerto
> provide
> > >
> > > a working a working  CLI   or ? which do the tasks expected to do with
> > > command line utilities in rhevm.
> > >
> > > If the vm is not running you can get the vm configuration from ovirt
> > >
> > > using the API:
> > > GET /api/vms/{vm-id}
> > >
> > > You may need more API calls to get info about the disks, follow the
> > > 
> > > in the returned xml.
> > >
> > > > I have one more question :Which command can I execute on an rhevm
> > >
> > > node  to manually export ( not through GUI portal) a   VMs to
>  required
> > > format  ?
> > >
> > > > For example;   1.  I need to get  one  VM and disks attached to it
> as
> > >
> > > raw images.  Is this possible how?
> > >
> > > > and another2. VM and disk attached to it as  Ova or( what other
> good
> > >
> > > format) which suitable to upload to glance ?
> > >
> > > Arik can add more info on exporting.
> > >
> > > >   Each VMs are around 200 to 300 GB with disk volumes ( so where
> should
> > >
> > > be the images exported to which path to specify ? to the host node(if
> the
> > > host doesn't have space  or NFS mount ? how to specify the target
> location
> > > where the VM image get stored in case of NFS mount ( available ?)
> > >
> > > You have 2 options:
> > > - Download the disks using the SDK
> > > - Export the VM to OVA
> > >
> > > When exporting to OVA, you will always get qcow2 images, which you can
> > > later
> > > convert to raw using "qemu-img convert"
> > >
> > > When downloading the disks, you control the image format, for example
> > > this will download
> > >
> > > the disk in any format, collapsing all snapshots to the raw format:
> > >  $ python3
> > >
> > > /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> > > -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
> > >
> > > To perform this which modules/packages need to be installed in the
> rhevm
> >
> > host node ?  Does the rhevm hosts come with python3 installed by default
> ?
> > or I need to install  python3 on rhevm node ?
>
> You don't have to install anything on oVirt hosts. SDK has to be installed
> on
> the machine from which you run the script. See
>
> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/README.adoc
>
> for more details, how to install and use it.
>
> > Then  using pip3 to install
> > the  download_disk.py / what the module name to install this sdk ?  any
> > dependency before installing this sdk ? like java need to be installed on
> > the rhevm node ?
> >
> > One doubt:  came across  virt v2v while google search,  can virtv2v  be
> > used in rhevm node to export VMs to images ?  or only from other
> > hypervisors   to rhevm only virt v2v supports ?
> >
> > This requires ovirt.conf file:   // ovirt.conf file need to be
> created
> > ? or 

[ovirt-users] Re: Data recovery from (now unused, but still mounted) Gluster Volume for a single VM

2021-08-04 Thread Strahil Nikolov via Users
*should be 2
 
 
  On Thu, Aug 5, 2021 at 7:42, Strahil Nikolov wrote:   
when you use 'remove-brick replica 1', you need to specify the removed bricks,
which should be 1 (data brick and arbiter). Something is missing in your
description.
Best Regards,
Strahil Nikolov
 
 
  On Thu, Aug 5, 2021 at 7:33, Strahil Nikolov via Users 
wrote:   ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K6DFRDDV5HTRZFLXKO5274AB4RUXOHV6/
  
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GQHJPBVCOT3N6AD2TJMF2RT4PANR2KPK/


[ovirt-users] Re: Data recovery from (now unused, but still mounted) Gluster Volume for a single VM

2021-08-04 Thread Strahil Nikolov via Users
when you use 'remove-brick replica 1', you need to specify the removed bricks,
which should be 1 (data brick and arbiter). Something is missing in your
description.
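For example, something roughly like this (the volume name and brick paths are
just placeholders):

gluster volume remove-brick VOLNAME replica 1 removed-host:/gluster_bricks/data/data arbiter-host:/gluster_bricks/arbiter/arbiter force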
Best Regards,
Strahil Nikolov
 
 
  On Thu, Aug 5, 2021 at 7:33, Strahil Nikolov via Users 
wrote:   ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K6DFRDDV5HTRZFLXKO5274AB4RUXOHV6/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/26R3AM3EEAEQ5LTORWFX7RMY4F5YYI65/


[ovirt-users] Re: Data recovery from (now unused, but still mounted) Gluster Volume for a single VM

2021-08-04 Thread Strahil Nikolov via Users
First of all, you didn't run 'mkfs.xfs -i size=512'. You just ran 'mkfs.xfs',
which is not good and could have caused your VM problems. Also, check the
isize of the FS with xfs_info.
You have to find the UUID of the disks of the affected VM. Then go to the
removed host and find that file -> this is the so-called shard 1. Then you
need to find the gfid of the file. The easiest way is to go to the "dead"
cluster and find the hard links in the .glusterfs directory.
Something like this:
ssh to the old host (the one specified in the remove-brick)
cd /gluster_bricks/data/data//images/
ls -li   -> take the first number
find /gluster_bricks/data/data -inum 
It should show you both the file and the gfid.
Then copy the file from images//file. Go to /gluster_bricks/data/data/.shard
List all files of:
ls -l .*
These are your shards. Just cat the first file + shards (in number order) into
another file.

This should be your VM disk.
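Putting it together, it is roughly this (every capitalized name below is a
placeholder - use your own domain/image/disk UUIDs, gfid and destination):

cd /gluster_bricks/data/data/DOMAIN_UUID/images/IMAGE_UUID
ls -li DISK_UUID                            (note the inode number)
find /gluster_bricks/data/data -inum INODE  (shows the file and its .glusterfs hard link, whose name is the gfid)
cat DISK_UUID /gluster_bricks/data/data/.shard/GFID.1 /gluster_bricks/data/data/.shard/GFID.2 [... all shards, in numeric order ...] > /some/target/restored_disk.img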

Best Regards,
Strahil Nikolov
 
 
  On Tue, Aug 3, 2021 at 12:58, David White via Users wrote:   
Hi Patrick,
This would be amazing, if possible.

Checking /gluster_bricks/data/data on the host where I've removed (but not 
replaced) the bricks, I see a single directory.
When I go into that directory, I see two directories:

dom_md
images

If I go into the images directory, I think I see the hash folders that you're 
referring to, and inside each of those, I see the 3 files you referenced.

Unfortunately, those files clearly don't have all of the data.
The parent folder for all of the hash folders is only 687M.

[root@cha1-storage data]# du -skh *
687M    31366488-d845-445b-b371-e059bf71f34f

And the "iso" files are small. The one I'm looking at now is only 19M.
It appears that most of the actual data is located in 
/gluster_bricks/data/data/.glusterfs, and all of those folders are totally 
random, incomprehensible directories that I'm not sure how to understand.

Perhaps you were on an older version of Gluster, and the actual data hierarchy 
is different?
I don't know. But I do see the 3 files you referenced, so that's a start, even 
if they are nowhere near the correct size.

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐

On Tuesday, August 3rd, 2021 at 1:49 AM, Patrick Lomakin 
 wrote:

> Greetings, I once wondered how data is stored between replicated bricks. 
> Specifically, how disks are stored on the storage domain in Gluster. I 
> checked a mounted brick via the standard path (path may be different) 
> /gluster/data/data and saw many directories there. Maybe the hierarchy is 
> different, can't check now. But in the end I got a list of directories. Each 
> directory name is a disk image hash. After going to a directory such as /HASH 
> there were 3 files. The first is a disk in raw/iso/qcow2 format (but the file 
> has no extension, I looked at the size) the other two files are the 
> configuration and metadata. I downloaded the disk image file (.iso) to my 
> computer via the curl command and service www.station307.com (no ads). And I 
> got the original .iso which uploaded to the storage domain through the hosted 
> engine interface. Maybe this way you can download the disk image to your 
> computer and then load it via the GUI and connect it to a virtual machine. 
> Good luck!
> 

> Users mailing list -- users@ovirt.org
> 

> To unsubscribe send an email to users-le...@ovirt.org
> 

> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> 

> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> 

> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6XITCEX5RNQB37YKDCR4EUKTV6W4HIR/
>   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K6DFRDDV5HTRZFLXKO5274AB4RUXOHV6/


[ovirt-users] Question about PCI storage passthrough for a single guest VM

2021-08-04 Thread Tony Pearce
I have configured a host with pci passthrough for GPU pass through. Using
this knowledge I went ahead and configured nvme SSD pci pass through. On
the guest, I partitioned and mounted the SSD without any issues.

Searching Google for this exact setup, I only see results about "local
storage", where local storage = using a disk image on the host's storage. So I
have come here to try and find out if there are any concerns, gripes or issues
with using NVMe PCI passthrough compared to local storage.

Some more detail about the setup:
I have 2 identical hosts (nvidia gpu and also nvme pci SSD). A few weeks
ago when I started researching converting one of these systems over (from
native ubuntu) to ovirt using gpu pci pass through I found the information
about local storage. I have 1 host (host #1) set up with local storage mode
and the guest VM is using a disk image on this local storage.
Host 2 has an identical hardware setup but I did not configure local
storage for this host. Instead, I have the ovirt host OS installed on a
SATA HDD and the nvme SSD is in pci pass through to a different guest
instance.

What I notice is that host 2's disk performance is approximately 30% higher
than host #1's when running simple dd tests to write data to the disk. So at
first glance the NVMe PCI passthrough appears to give better performance,
which is desired, but I have not seen any oVirt documentation that says this
is supported, or any guidelines on configuring such a setup.
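(For reference, the "simple dd test" was nothing scientific - roughly the
following inside each guest, where /mnt/test is just a placeholder for a mount
point on the disk under test:

dd if=/dev/zero of=/mnt/test/ddtest.bin bs=1M count=4096 oflag=direct
)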

Aside from the usual caveats when running PCI passthrough, are there any other
gotchas when running this type of setup (PCI NVMe SSD passthrough)? I am
trying to discover any unknowns about this before I use it for real data. I
have no previous experience with this, and that is my main reason for emailing
the group.

Any insight appreciated.

Kind regards,

Tony Pearce
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GCDFJNGI42EXPFV62DCO7SHXMUKPLSKH/


[ovirt-users] Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-04 Thread Vinícius Ferrão via Users
Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?

The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I
would like to know if there's something we can do to make both play nice on
the same machine:

[root@rhvepyc2 ~]# dnf update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
Dependencies resolved.

 Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and 
mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
  - cannot install the best update candidate for package 
glusterfs-rdma-6.0-49.1.el8.x86_64
  - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none 
of the providers can be installed
  - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma 
provided by glusterfs-rdma-6.0-49.1.el8.x86_64
  - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 
3.12.2-40.2.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 
6.0-15.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 
6.0-20.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) = 
6.0-37.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64) = 
6.0-37.2.el8, but none of the providers can be installed
  - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-15.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-20.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.2.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install the best update candidate for package 
ovirt-host-4.4.7-1.el8ev.x86_64
  - cannot install the best update candidate for package 
glusterfs-6.0-49.1.el8.x86_64
=
 PackageArchitectureVersion 
  RepositorySize
=
Installing dependencies:
 openvswitchx86_64  2.14.1-1.54103  
  mlnx_ofed_5.4-1.0.3.0_base17 M
 ovirt-openvswitch  noarch  2.11-1.el8ev
  rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms  8.7 k
 replacing  rhv-openvswitch.noarch 1:2.11-7.el8ev
 unboundx86_64  1.7.3-15.el8
  rhel-8-for-x86_64-appstream-rpms 895 k
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
 glusterfs  x86_64  3.12.2-40.2.el8 
  rhel-8-for-x86_64-baseos-rpms558 k
 glusterfs  x86_64  6.0-15.el8  
  rhel-8-for-x86_64-baseos-rpms658 k
 glusterfs  x86_64  6.0-20.el8  
  rhel-8-for-x86_64-baseos-rpms659 k
 glusterfs  x86_64  6.0-37.el8  
  rhel-8-for-x86_64-baseos-rpms663 k
 glusterfs  x86_64  6.0-37.2.el8
  rhel-8-for-x86_64-baseos-rpms662 k
Skipping packages with broken dependencies:
 glusterfs-rdma x86_64  3.12.2-40.2.el8 
  rhel-8-for-x86_64-baseos-rpms 49 k
 glusterfs-rdma x86_64  6.0-15.el8  
  rhel-8-for-x86_64-baseos-rpms 46 k
 glusterfs-rdma x86_64  6.0-20.el8  
  rhel-8-for-x86_64-baseos-rpms 46 k
 glusterfs-rdma x86_64  6.0-37.2.el8
  rhel-8-for-x86_64-baseos-rpms 48 k
 glusterfs-rdma x86_64  6.0-37.el8  
  rhel-8-for-x86_64-baseos-rpms 48 k

Transaction Summary

[ovirt-users] ISO Upload is in Paused by System Status

2021-08-04 Thread louisb
I'm attempting to upload an ISO that is approximately 9GB in size.  I've
successfully started the upload process via the oVirt Management Console/Disk.
The upload started; however, it now has a status of "Paused by System".  My
storage type is set to NFS Data.

Is something happening in the background that is contributing to the "Paused
by System"?  I have more than 84TB of space available, so I don't think it is
a space issue.  Is there something that I need to do?  At this time I'm going
to wait and see if it moves forward on its own.

Please provide me with any help or direction.

Thanks
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2CYDZITCYRPWJIXHP32H6G3K2QI6OLQ/


[ovirt-users] Re: Terrible Disk Performance on Windows 10 VM

2021-08-04 Thread regloff
The storage domain lists the type as "Local on Host". It's been a while since I
set it up, but I thought the engine deployment had an option for local storage.
The VMs reside in a directory under /opt on the physical host.

https://i.postimg.cc/z3mMNj3J/Capture.png

I forget which version I actually deployed.. but it's at 4.4.5.11-1.el8 now. 

Prior to installing oVirt, I was using KVM, but the place where I worked at the
time migrated from Oracle's old 'OVM' to 'OLVM', which is basically an older
version of oVirt with an Oracle icon :)
The Oracle version wouldn't allow for local storage in any form. So I was 
pleasantly surprised that the version of oVirt I installed at home did allow 
it. I already had a NFS share setup anticipating that I would need to use NFS 
for my home setup, but didn't actually need it. 

Thing is.. I'm just spoiled with SSD data transfer/access/write speeds now, 
lol. I do keep various data I don't access constantly on some spinning disks, 
like media, ISO files and such. But for OS booting speeds - completely 
accustomed to SSD!

I'll be looking forward to using those SSDs I have. They have been sitting for 
almost a year now in a drawer. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TNPZH3DAROIZ7IGCFHJGD7XNWOFIYW26/


[ovirt-users] Re: Terrible Disk Performance on Windows 10 VM

2021-08-04 Thread Gilboa Davara
On Tue, Aug 3, 2021 at 3:18 PM  wrote:

> Yes - local as in 5400 RPM SATA - standard desktop, slow storage.. :)
>
> It's still 'slow' being 5400 RPM SATA, but after setting the new VM to
> 'VirtIO-SCSI' and loading the driver, the performance is 'as expected'. I
> don't notice with with the Linux VMs because they don't do anything that
> requires a lot of disk I/O. Mostly Ansible/Python education and such.
>
> https://i.postimg.cc/28f764yb/Untitled.png
>
> I actually have some super fast Serial SCSI SSD drives I am going to use
> in the future. A storage vendor where I worked at ordered a bunch by
> mistake to upgrade our storage array and then left them sitting on-site for
> like 9 months. I contacted them to remind them we still had them in our
> data center and asked if they wanted to come and get them. I joked with our
> field engineer and told him if they didn't want them, I could find a use
> for them! He actually contacted his manager who gave us approval to just
> 'dispose' of them. So I thought why not recycle them? :)
>
> I'm in the process of moving soon for a new job. Once I get settled, I'm
> going to upgrade the storage I use for VMs. Either to those SSDs or maybe a
> small NAS device. Ideally.. a NAS device that can support Serial SCSI. I'll
> need to get a controller and a cable for them, but considering the
> performance... it should be well worth it. And no - I didn't get fired for
> swiping the drives! Too many years invested in IT for something that stupid
> and I'm just not that kind of person anyway. I took a position that's a bit
> more 'administrative' and less technical; but with better pay, so I want to
> keep my tech skills sharp, just because I enjoy it.
>
> This is just a 'home lab' - nothing that supports anything even remotely
> important. I'm so used to SSD now.. my desktop OS is on SSD, my CentOS
> machine is on SSD.. putting Windows on spinning platters is just painful
> anymore!
>

While I do have big oVirt setups running on pure SSD storage, I must admit
that Windows (and Linux VMs) are perfectly usable on HDD software RAIDs,
*if* everything is configured correctly (and you have a lot of RAM).
E.g. I'm typing this message on a Fedora VM running (w/ VFIO + nVidia GPU +
USB passthrough) on a pretty beefy 8 y/o Xeon machine with 6 x 2TB MDRAID
and ~10 other VMs (including Windows), and unless multiple VMs are thrashing
the disks, I get near bare-metal performance. (I even run 3D games on this
VM...)

- Gilboa


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5EVF6FR7Z46A2PI26EYJGBJBFF7LUGVX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IV3WQFRI3FDXBCFRLGOEWLLKURY6SGTH/


[ovirt-users] Re: Terrible Disk Performance on Windows 10 VM

2021-08-04 Thread Gilboa Davara
On Tue, Aug 3, 2021 at 12:24 PM Tony Pearce  wrote:

> I believe "local" in this context is using the local ovirt Host OS disk as
> VM storage ie "local storage". The disk info mentioned "WDC WD40EZRZ-00G" =
> a single 4TB disk, at 5400RPM.
>
> OP the seek time on that disk will be high. How many VMs are running off
> it?
>
> Are you able to try other storage? If you could run some monitoring on the
> host, I'd expect to see low throughput and high delay on that disk.
>
> Regards,
>
> Tony Pearce
>
>
Stupid question:
I must be missing something.
AFAIR oVirt doesn't support local storage; one needs to choose localhost
NFS, single-host Gluster or localhost iSCSI.

Assuming I'm not mistaken, in my experience the type of storage domain used
has a considerable impact on performance, especially latency.
E.g. Running Bonnie++ on a Fedora Linux VM, single host GlusterFS has 2-3
times the latency of localhost NFS, and noticeably lower throughput.
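(For context, the comparison was nothing fancy - a plain run along these lines
inside the guest, with /mnt/test a placeholder for the tested mount point:

bonnie++ -d /mnt/test -u root
)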

Hence my question.
- Gilboa


>
> On Tue, 3 Aug 2021 at 16:54, Gilboa Davara  wrote:
>
>>
>> On Fri, Jul 30, 2021 at 5:17 PM  wrote:
>>
>>> This is a simple one desktop setup I use at home for being a nerd :)
>>>
>>> So it's a single host 'cluster' using local storage.
>>>
>>
>> Sorry for the late reply.
>> Define: local.
>> NFS, Gluster or ISCSI?
>>
>> - Gilboa
>>
>>
>>>
>>> Host Info:
>>> CentOS Linux 8 - 4.18.0-305.10.2.el8_4.x86_64 (I keep fairly well
>>> updated)
>>> Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz [Kaby Lake] {Skylake}, 14nm
>>> The disk it's on is: WDC WD40EZRZ-00G (5400 RPM platter disk) - it's not
>>> the fastest thing in the world, but it should be sufficient.
>>>
>>> VM info:
>>> Windows 10 Professional (Desktop not server)
>>> 6144 MB of RAM
>>> 2 Virtual CPUS
>>>  - Some settings I ran across for 'Performance' mode and a couple I had
>>> seen on some similar issues (the similar issues were quite dated)
>>> Running in headless mode
>>> I/O Threads enabled = 1
>>> Multi-Queues enabled
>>> Virt-IO-SCSI enabled
>>> Random Number generator enabled
>>> Added a custom property of 'viodiskcache' = writeback  (Didn't seem to
>>> make any significant improvement)
>>>
>>> As I type this though - I was going to add this link as it's what I
>>> followed to install the storage driver during the Windows install and then
>>> in the OS after that:
>>>
>>> https://access.redhat.com/solutions/17463
>>>
>>> I did notice something.. it says to create a new VM with the 'VirtIO
>>> disk interface' and I just noted my VM is setup as 'SATA'.
>>>
>>> Perhaps that is it. This is just my first attempt at running something
>>> other than a Linux Distro under oVirt. When I first installed the Windows
>>> guest, I didn't have the Virt-IO package downloaded initially. When Windows
>>> couldn't find a storage driver, I found this info out.
>>>
>>> I think I'll deploy a new Windows guest and try the 'VirtIO-SCSI'
>>> interface and see if my performance is any better. It's just a default
>>> install of Windows at this point, so that'll be easy. :)
>>>
>>> Will update this thread either way!
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4YC5E3MRPKJPFAAQDCTH5CWGPTTN77SU/
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CIZOXVW2N5ND4AW4DASH445WSUMVJ745/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XPWV6AMPOGDLWUTS2FC6LKRNUJAXU7TP/


[ovirt-users] Re: Accessing the oVirt Console Remotely

2021-08-04 Thread Yedidyah Bar David
On Tue, Aug 3, 2021 at 3:11 AM  wrote:

> I've installed oVirt on my server (HP DL380 Gen10) and was able to start
> the configuration process on the local machine.  I tried to access the
> oVirt console remotely via the web, however I've had no success.  I'm using
> the same URL that is used locally, but when I execute the URL via my
> Firefox browser I get the message that it is unable to connect.


Which URL?


> I've dropped the firewall on both the server and client machines as a
> possible solution, but it did not work.
>
> Is there something extra that is needed to access the ovirt console
> remotely?
>

Please clarify "ovirt console".


>
> The FQDN is entered into my DNS; however, I did find that during the
> configuration process the IP was consumed by ovirtmgmt.


Only during the installation, or also after it's finished?


>   I was surprised to see that; I'm just assuming that it's a part of the
> configuration process.  I have other ports on my server and I have entered
> them in my DNS also with the same name but a different IP address.
>
> What must I do to gain access to the oVirt console remotely? It would
> surely make the configuration process easier from my desk vs. being in the
> server area.
>

1. I suppose that you deployed a hosted-engine.

2. It sounds like you might confuse the name of the _host_ and the name of
the _engine_ VM. Each has its own name/fqdn. The deploy process asks you
about both.

3. For accessing the engine web admin UI, you generally go to
https://$engine_fqdn/ovirt-engine/
.

4. You might also have cockpit configured, on port 9090, on the host. This
indeed might be firewalled, not sure, but is not needed for engine admin
access, only for accessing the specific host(s).
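
As a quick sanity check from your desk, you can verify that the engine FQDN
resolves and that HTTPS answers, e.g. (replace the FQDN with your engine's):

ping YOUR-ENGINE-FQDN
curl -kI https://YOUR-ENGINE-FQDN/ovirt-engine/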

If this is still unclear, or not working, please clarify exactly what you
are trying to do and what exact error you get.

Thanks and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HAOGS3LXXL27LP5CRVDFLA2DYSDAFMBI/


[ovirt-users] Re: Restored engine backup: The provided authorization grant for the auth code has expired.

2021-08-04 Thread Yedidyah Bar David
On Tue, Aug 3, 2021 at 11:22 PM Nicolás  wrote:

> Hi,
>
> As I see this is an issue hard to get help on, I'll ask it otherwise:
>
> Alternatively to backup and restore, is there a way to migrate an
> oVirt-manager installation to other machine? We're trying to move the
> manager machine since the current physical machine is getting short of
> resources, and we already have a prepared physical host to migrate it to.
>
> If there's an alternative way to migrate it, I'd be very grateful if
> someone could shed some light on it.
>

I am not aware of a tested, documented alternative.


>
> Thanks.
>
> El 2/8/21 a las 13:02, Nicolás escribió:
>
> Hi Didi,
>
> El 7/4/21 a las 9:27, Yedidyah Bar David escribió:
>
> On Wed, Mar 24, 2021 at 12:07 PM Nicolás 
>  wrote:
>
> Hi,
>
> I'm restoring a full ovirt engine backup, having used the --scope=all
> option, for oVirt 4.3.
>
> I restored the backup on a fresh CentOS7 machine. The process went well,
> but when trying to log into the restored authentication system I get the
> following message which won't allow me to log in:
>
> The provided authorization grant for the auth code has expired.
>
> What does that mean and how can it be fixed?
>
> Can you please check also this thread:
>
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/YH4J7GG7WLOLUFIADZPL6JOPDETJ23CZ/
>
> What version was used for backup, and what version for restore?
>
>
> For backup, version 4.3.8.2-1.el7 of ovirt-engine-tools-backup was used.
>
> For restore, version 4.3.10.4-1.el7 of ovirt-engine-tools-backup was used.
>
>
> Did you have a 3rd-party CA cert installed?
>
>
> I am using a custom LetsEncrypt certificate in apache. I have this
> certificate configured in httpd and ovirt-websocket-proxy, but it's exactly
> the same certificate I have configured in the oVirt installation that was
> backed up (as I understand it, it's not the same case than the one
> described in the link - I might be wrong). So I copied the same certificate
> on the other side too.
>
> Please verify that it was backed up and restored correctly, or manually
> reinstalled after restore.
>
>
> As per the logs, both processes ended correctly, no errors showed up. I
> also run the 'engine-setup' command on the restored machine, and it ended
> with no errors/warnings.
>
> I'm attaching an engine.log of the restored node in case it helps, from
> the moment I restart the engine and try to log in.
>
> Thanks for any help regarding this, as I can't figure out what else could
> be happening.
>
>
I suggest trying to follow the procedure for replacing the certificate from
scratch, as if it's not installed.

Better take a backup of /etc before you start, for reference/comparison.
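Something simple along these lines is enough (the file name is just an
example):

tar czf /root/etc-backup-$(date +%Y%m%d).tar.gz /etc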

Please upload large attachments such as the engine log to some file-sharing
service (e.g. Dropbox or Google Drive) and share a link. Thanks.

Best regards,

>
> Nicolás
>
> Good luck and best regards,
> --
> Didi
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3LV4X5KDDOST3VPJ5GDTHYOQTBAWLIJR/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/54CDGSH5DOHZJLCKLB625LC2FCFEH47H/
>


-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZACZLK3PVF34VTN2N3VXKQWUHR35EQBT/


[ovirt-users] Re: Self hosted engine installation - Failed to deploy the VM on ISCSI storage

2021-08-04 Thread Eric Szeto
Thanks for your reply,

[root@server ~]# tar tvf 
/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.4-20210720124053.1.el8.ova
-rw-r--r-- root/root  3725 2021-07-20 05:29 
master/vms/f2b9699d-5693-46a4-93ff-632c051dedef/f2b9699d-5693-46a4-93ff-632c051dedef.ovf
-rwxr-xr-x root/root 5256445952 2021-07-20 05:29 
images/6728db2a-1a44-44c5-a92d-c452d8ecb24f/6fa3346a-2fe2-48c1-b019-02f710496b9b
-rw-r--r-- root/root        332 2021-07-20 05:29 
images/6728db2a-1a44-44c5-a92d-c452d8ecb24f/6fa3346a-2fe2-48c1-b019-02f710496b9b.meta

/var/tmp permission:
drwxrwxrwt. 14 root root 4096 Aug  3 14:15 tmp

Is it what we expected? I am wondering if we have any workaround for this.

Thank you.

Regards,
Eric
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5IQFXZQQU6QYDGS3UWBVU4IPGW7T3AUC/


[ovirt-users] Question about pci passthrough for guest (SSD passthrough) ?

2021-08-04 Thread Tony Pearce
I have recently added a fresh installed host on 4.4, with 3 x nvidia gpu's
which have been passed through to a guest VM instance. This went very
smoothly and the guest can use all 3 host GPUs.

The next thing we did was to configure "local storage" so that the single
guest instance can make use of faster nvme storage (100,000 iops) compared
to the network iscsi storage which is rated at 35,000 iops.

The caveat with local storage is that I can only use the remaining free
space in /var/ for disk images. The result is the 1TB SSD has around
700GB remaining free space.

So I was wondering about simply passing through the NVMe SSD (PCI) to the
guest, so the guest can utilise the full SSD.

Are there any "gotcha's" with doing this other than the usual gpu
passthrough ones?

Also my apologies if this is duplicated. I originally asked this [1] a
couple of days ago but I am not sure what happened.

Kind regards,


Tony Pearce

[1] Question about pci pass-thru - Users - Ovirt List Archives

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WJKUWVIJCZT7LBSU5QI43GGTPEYIQSQN/


[ovirt-users] Re: import from vmware provider always fails

2021-08-04 Thread dhanaraj.ramesh--- via Users
Check the logs /var/log/vdsm/import//x by logging into the specific host where
the VM import is running; if a vCenter timeout happened, follow this:
https://bugzilla.redhat.com/show_bug.cgi?id=1848862

It looks like virt-v2v creates too many HTTP sessions to the vCenter, and it
results in a 503 error from VMware's vCenter services.

* Workaround for the HTTP method *
I found a workaround to allow vCenter to accept as many sessions as possible:
you can change the file /etc/vmware-vpx/vpxd.cfg on the vCenter server
and add inside the  XML tag the following XML:
"

  0

"

You shouldn't replace all the  contents, just add the above lines 
inside.
It will look something like that:

"
  
true

  0



  90
  vpxd

  

"
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PFNQGHKSWXRNNH2U6AMAMCR7RPK3ZGJU/


[ovirt-users] Re: posix storage migration issue on 4.4 cluster

2021-08-04 Thread Sketch

On Wed, 4 Aug 2021, Sketch wrote:


What doesn't work is live migration of running VMs between hosts running 
4.4.7 (or 4.4.6 before I updated) when their disks are on ceph.  It appears 
that vdsm attempts to launch the VM on the destination host, and it either 
fails to start or dies right after starting (not entirely clear from the 
logs).  Then the running VM gets paused due to a storage error.


After further investigation, I've found the problem appears to be SELinux
related.  Setting the systems to permissive mode allows VMs to be live
migrated.  I tailed the audit logs on both hosts and found a couple of
denials, which probably explains the lack of useful errors in the vdsm logs,
though I'm not sure how to fix the problem.


Source host:

type=AVC msg=audit(1628052789.412:3381): avc:  denied  { read } for  pid=570656 
comm="live_migration" name="6f82b02d-8c22-4d50-a30e-53511776354c" dev="ceph" 
ino=1099511715125 scontext=system_u:system_r:svirt_t:s0:c752,c884 
tcontext=system_u:object_r:svirt_image_t:s0:c411,c583 tclass=file permissive=0
type=AVC msg=audit(1628052790.557:3382): avc:  denied  { read } for  pid=570656 comm="worker" 
path="/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c"
 dev="ceph" ino=1099511715125 scontext=system_u:system_r:svirt_t:s0:c752,c884 
tcontext=system_u:object_r:svirt_image_t:s0:c411,c583 tclass=file permissive=0

# ls -lidZ 
/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c
1099511715125 -rw-rw. 1 vdsm kvm 
system_u:object_r:svirt_image_t:s0:c344,c764 52031193088 Aug  3 23:51 
/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c

Destination host:

type=AVC msg=audit(1628052787.312:1789): avc:  denied  { getattr } for  pid=115062 comm="qemu-kvm" 
name="/" dev="ceph" ino=1099511636351 scontext=system_u:system_r:svirt_t:s0:c411,c583 
tcontext=system_u:object_r:cephfs_t:s0 tclass=filesystem permissive=0

# ls -lidZ /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore
1099511636351 drwxr-xr-x. 3 vdsm kvm unconfined_u:object_r:cephfs_t:s0 1 Aug  3 
23:14 /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore
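
For now permissive mode is my workaround. A possible stopgap (untested on my
side, and not claiming it is the proper fix) would be a local policy module
generated from the recorded denials, along these lines:

grep svirt_t /var/log/audit/audit.log | audit2allow -M local_svirt_cephfs
semodule -i local_svirt_cephfs.pp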
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ALFLUXTZ4ZTVGWMYLQKBABR7LSIG2QDG/


[ovirt-users] import from vmware provider always fails

2021-08-04 Thread edp
Hi.

I have created a new VMware provider to connect to my VMware ESXi node.

But I have this problem.

If I choose to import a vm from that provider, the process always fails.

But the error message is generic:

"failed to import vm xyz to Data Center Default, Cluster Default"

I have tried to import both Linux and Windows Vms without success.

I can see that the import phase goes on through importing the virtual disk,
and then, when the import process comes almost to the end, I get the generic
error stated above.

Is there a place where I can see more detailed logs to solve this problem?

Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P3KHCKJA6ADODLRFT5FXF466T7PQUD5X/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-04 Thread Vojtech Juranek
On Wednesday, 4 August 2021 03:54:36 CEST KK CHN wrote:
> On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:
> > On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> > > I have asked our VM maintainer to run the  command
> > > 
> > > # virsh -r dumpxml vm-name_blah//as Super user
> > > 
> > > But no output :   No matching domains found that was the TTY  output on
> > 
> > that rhevm node when I executed the command.
> > 
> > > Then I tried to execute #  virsh list //  it doesn't list any VMs
> > 
> > !!!   ( How come this ? Does the Rhevm node need to enable any CLI  with
> > License key or something to list Vms or  to dumpxml   with   virsh ? or
> > its
> > CLI commands ?
> > 
> > RHV undefine the vms when they are not running.
> > 
> > > Any way I want to know what I have to ask the   maintainerto provide
> > 
> > a working a working  CLI   or ? which do the tasks expected to do with
> > command line utilities in rhevm.
> > 
> > If the vm is not running you can get the vm configuration from ovirt
> > 
> > using the API:
> > GET /api/vms/{vm-id}
> > 
> > You may need more API calls to get info about the disks, follow the
> > 
> > in the returned xml.
> > 
> > > I have one more question :Which command can I execute on an rhevm
> > 
> > node  to manually export ( not through GUI portal) a   VMs to   required
> > format  ?
> > 
> > > For example;   1.  I need to get  one  VM and disks attached to it  as
> > 
> > raw images.  Is this possible how?
> > 
> > > and another2. VM and disk attached to it as  Ova or( what other good
> > 
> > format) which suitable to upload to glance ?
> > 
> > Arik can add more info on exporting.
> > 
> > >   Each VMs are around 200 to 300 GB with disk volumes ( so where should
> > 
> > be the images exported to which path to specify ? to the host node(if the
> > host doesn't have space  or NFS mount ? how to specify the target location
> > where the VM image get stored in case of NFS mount ( available ?)
> > 
> > You have 2 options:
> > - Download the disks using the SDK
> > - Export the VM to OVA
> > 
> > When exporting to OVA, you will always get qcow2 images, which you can
> > later
> > convert to raw using "qemu-img convert"
> > 
> > When downloading the disks, you control the image format, for example
> > this will download
> > 
> > the disk in any format, collapsing all snapshots to the raw format:
> >  $ python3
> > 
> > /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> > -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
> > 
> > To perform this which modules/packages need to be installed in the rhevm
> 
> host node ?  Does the rhevm hosts come with python3 installed by default ?
> or I need to install  python3 on rhevm node ? 

You don't have to install anything on oVirt hosts. SDK has to be installed on 
the machine from which you run the script. See 

https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/README.adoc

for more details on how to install and use it.
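
For example, on the machine where you will run the script, something like this
is typically enough (package names from memory; the engine name, disk UUID and
file names are placeholders):

dnf install python3-ovirt-engine-sdk4    (or: pip3 install ovirt-engine-sdk-python)
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py -c myengine DISK-UUID /var/tmp/disk1.raw

and, if you go the OVA route instead and end up with qcow2 images, they can be
converted afterwards with something like:

qemu-img convert -f qcow2 -O raw disk.qcow2 disk.raw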

> Then  using pip3 to install
> the  download_disk.py / what the module name to install this sdk ?  any
> dependency before installing this sdk ? like java need to be installed on
> the rhevm node ?
> 
> One doubt:  came across  virt v2v while google search,  can virtv2v  be
> used in rhevm node to export VMs to images ?  or only from other
> hypervisors   to rhevm only virt v2v supports ?
> 
> This requires ovirt.conf file:   // ovirt.conf file need to be created
> ? or already there  in any rhevm node?

again, this has to be on the machine from which you run the script

> > $ cat ~/.config/ovirt.conf
> > [engine-dev]
> > engine_url = https://engine-dev
> > username = admin@internal
> > password = mypassword
> > cafile = /etc/pki/vdsm/certs/cacert.pem
> > 
> > Nir
> > 
> > > Thanks in advance
> > > 
> > > On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:
> > >> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> > >> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> > 
> > Rhevm4.1 ) managed by a third party
> > 
> > >> > Now I am in the process of migrating  those VMs to  my cloud setup
> > 
> > with  OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> > 
> > >> > The third party is making down each VM and giving the each VM image
> > 
> > with their attached volume disks along with it.
> > 
> > >> > There are three folders  which contain images for each VM .
> > >> > These folders contain the base OS image, and attached LVM disk images
> > 
> > ( from time to time they added hard disks  and used LVM for storing data )
> > where data is stored.
> > 
> > >> > Is there a way to  get all these images to be exported as  Single
> > 
> > image file Instead of  multiple image files from Rhevm it self.  Is this
> > possible ?
> > 
> > >> > If possible how to combine e all these disk images to a single image
> > 
> > and that image  can upload to our  cloud  glance storage as a single image
> > ?> 
> > >> It is not clear what 

[ovirt-users] Re: live merge of snapshots failed

2021-08-04 Thread g . vasilopoulos
Here is the vdsm.log from the SPM.
There is a report for the second disk of the VM, but the first one (the one
which fails to merge) does not seem to be anywhere:
2021-08-03 15:51:40,051+0300 INFO  (jsonrpc/7) [vdsm.api] START 
getVolumeInfo(sdUUID=u'96000ec9-e181-44eb-893f-e0a36e3a6775', 
spUUID=u'5da76866-7b7d-11eb-9913-00163e1f2643', 
imgUUID=u'205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 
volUUID=u'7611ebcf-5323-45ca-b16c-9302d0bdedc6', options=None) 
from=:::10.252.80.201,58850, flow_id=3bf9345d-fab2-490f-ba44-6aa014bbb743, 
task_id=be6c50d9-a8e4-4ef5-85cf-87a00d79d77e (api:48)
2021-08-03 15:51:40,052+0300 INFO  (jsonrpc/7) [storage.VolumeManifest] Info 
request: sdUUID=96000ec9-e181-44eb-893f-e0a36e3a6775 
imgUUID=205a30a3-fc06-4ceb-8ef2-018f16d4ccbb volUUID = 
7611ebcf-5323-45ca-b16c-9302d0bdedc6  (volume:240)
2021-08-03 15:51:40,081+0300 INFO  (jsonrpc/7) [storage.VolumeManifest] 
96000ec9-e181-44eb-893f-e0a36e3a6775/205a30a3-fc06-4ceb-8ef2-018f16d4ccbb/7611ebcf-5323-45ca-b16c-9302d0bdedc6
 info is {'status': 'OK', 'domain': '96000ec9-e181-44eb-893f-e0a36e3a6775', 
'voltype': 'LEAF', 'description': 
'{"DiskAlias":"anova.admin.uoc.gr_Disk2","DiskDescription":""}', 'parent': 
'----', 'format': 'RAW', 'generation': 0, 
'image': '205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 'disktype': 'DATA', 
'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '42949672960', 'children': 
[], 'pool': '', 'ctime': '1625846644', 'capacity': '42949672960', 'uuid': 
u'7611ebcf-5323-45ca-b16c-9302d0bdedc6', 'truesize': '42949672960', 'type': 
'PREALLOCATED', 'lease': {'path': 
'/dev/96000ec9-e181-44eb-893f-e0a36e3a6775/leases', 'owners': [], 'version': 
None, 'offset': 105906176}} (volume:279)
2021-08-03 15:51:40,081+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH getVolumeInfo 
return={'info': {'status': 'OK', 'domain': 
'96000ec9-e181-44eb-893f-e0a36e3a6775', 'voltype': 'LEAF', 'description': 
'{"DiskAlias":"anova.admin.uoc.gr_Disk2","DiskDescription":""}', 'parent': 
'----', 'format': 'RAW', 'generation': 0, 
'image': '205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 'disktype': 'DATA', 
'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '42949672960', 'children': 
[], 'pool': '', 'ctime': '1625846644', 'capacity': '42949672960', 'uuid': 
u'7611ebcf-5323-45ca-b16c-9302d0bdedc6', 'truesize': '42949672960', 'type': 
'PREALLOCATED', 'lease': {'path': 
'/dev/96000ec9-e181-44eb-893f-e0a36e3a6775/leases', 'owners': [], 'version': 
None, 'offset': 105906176}}} from=:::10.252.80.201,58850, 
flow_id=3bf9345d-fab2-490f-ba44-6aa014bbb743, 
task_id=be6c50d9-a8e4-4ef5-85cf-87a00d79d77e (api:54)
2021-08-03 15:51:40,083+0300 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
Volume.getInfo succeeded in 0.04 seconds (__init__:312)

The last appearance of this drive in the SPM vdsm.log is when the snapshot
download finishes:
2021-08-03 15:34:18,619+0300 INFO  (jsonrpc/6) [vdsm.api] FINISH 
get_image_ticket return={'result': {u'timeout': 300, u'idle_time': 0, u'uuid': 
u'5c1943a9-cac4-4398-9ec1-46ab82cacd04', u'ops': [u'read'], u'url': 
u'file:///rhev/data-center/mnt/blockSD/a5a492a7-f770-4472-baa3-ac7297a581a9/images/2e6e3cd3-f0cb-47a7-8bda-7738bd7c1fb5/84c005da-cbec-4ace-8619-5a8e2ae5ea75',
 u'expires': 6191177, u'transferred': 150256746496, u'transfer_id': 
u'7dcb75c0-4373-4986-b25f-5629b1b68f5d', u'sparse': False, u'active': True, 
u'size': 150323855360}} from=:::10.252.80.201,58850, 
flow_id=3035db30-8a8c-48a5-b0c6-0781fda6ac2e, 
task_id=674028a2-e37c-46e4-a463-eeae1b09aef0 (api:54)
2021-08-03 15:34:18,620+0300 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call 
Host.get_image_ticket succeeded in 0.00 seconds (__init__:312)

If I can send any more information or test something please let me know.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KEJ24BI6PLXYFQHJ6O2AESK3M4SXMUID/


[ovirt-users] Re: live merge of snapshots failed

2021-08-04 Thread g . vasilopoulos
Hello Benny, and thank you for the quick response.
This is the vdsm log:
2021-08-03 15:50:58,655+0300 INFO  (jsonrpc/3) [storage.VolumeManifest] 
96000ec9-e181-44eb-893f-e0a36e3a6775/205a30a3-fc06-4ceb-8ef2-018f16d4ccbb/7611ebcf-5323-45ca-b16c-9302d0bdedc6
 info is {'status': 'OK', 'domain': '96000ec9-e181-44eb-893f-e0a36e3a6775', 
'voltype': 'INTERNAL', 'description': 
'{"DiskAlias":"anova.admin.uoc.gr_Disk2","DiskDescription":""}', 'parent': 
'----', 'format': 'RAW', 'generation': 0, 
'image': '205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 'disktype': 'DATA', 
'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '42949672960', 'children': 
[], 'pool': '', 'ctime': '1625846644', 'capacity': '42949672960', 'uuid': 
u'7611ebcf-5323-45ca-b16c-9302d0bdedc6', 'truesize': '42949672960', 'type': 
'PREALLOCATED', 'lease': {'path': 
'/dev/96000ec9-e181-44eb-893f-e0a36e3a6775/leases', 'owners': [], 'version': 
None, 'offset': 105906176}} (volume:279)
2021-08-03 15:50:58,655+0300 INFO  (jsonrpc/3) [vdsm.api] FINISH getVolumeInfo 
return={'info': {'status': 'OK', 'domain': 
'96000ec9-e181-44eb-893f-e0a36e3a6775', 'voltype': 'INTERNAL', 'description': 
'{"DiskAlias":"anova.admin.uoc.gr_Disk2","DiskDescription":""}', 'parent': 
'----', 'format': 'RAW', 'generation': 0, 
'image': '205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 'disktype': 'DATA', 
'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '42949672960', 'children': 
[], 'pool': '', 'ctime': '1625846644', 'capacity': '42949672960', 'uuid': 
u'7611ebcf-5323-45ca-b16c-9302d0bdedc6', 'truesize': '42949672960', 'type': 
'PREALLOCATED', 'lease': {'path': 
'/dev/96000ec9-e181-44eb-893f-e0a36e3a6775/leases', 'owners': [], 'version': 
None, 'offset': 105906176}}} from=:::10.252.80.201,41898, 
flow_id=3bf9345d-fab2-490f-ba44-6aa014bbb743, 
task_id=0b4e6fe7-4345-40b1-9e86-86ec2f662d3f (api:54)
2021-08-03 15:50:58,656+0300 INFO  (jsonrpc/3) [vdsm.api] START 
getVolumeInfo(sdUUID=u'96000ec9-e181-44eb-893f-e0a36e3a6775', 
spUUID='5da76866-7b7d-11eb-9913-00163e1f2643', 
imgUUID=u'205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 
volUUID=u'17618ba1-4ab8-49eb-a991-fc3d602ced14', options=None) 
from=:::10.252.80.201,41898, flow_id=3bf9345d-fab2-490f-ba44-6aa014bbb743, 
task_id=0c307c2c-9bd3-4d1a-9db8-ec45d822bc71 (api:48)
2021-08-03 15:50:58,657+0300 INFO  (jsonrpc/3) [storage.VolumeManifest] Info 
request: sdUUID=96000ec9-e181-44eb-893f-e0a36e3a6775 
imgUUID=205a30a3-fc06-4ceb-8ef2-018f16d4ccbb volUUID = 
17618ba1-4ab8-49eb-a991-fc3d602ced14  (volume:240)
2021-08-03 15:50:58,681+0300 INFO  (jsonrpc/3) [storage.VolumeManifest] 
96000ec9-e181-44eb-893f-e0a36e3a6775/205a30a3-fc06-4ceb-8ef2-018f16d4ccbb/17618ba1-4ab8-49eb-a991-fc3d602ced14
 info is {'status': 'OK', 'domain': '96000ec9-e181-44eb-893f-e0a36e3a6775', 
'voltype': 'LEAF', 'description': '', 'parent': 
'7611ebcf-5323-45ca-b16c-9302d0bdedc6', 'format': 'COW', 'generation': 0, 
'image': '205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 'disktype': 'DATA', 
'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': 
[], 'pool': '', 'ctime': '1627991040', 'capacity': '42949672960', 'uuid': 
u'17618ba1-4ab8-49eb-a991-fc3d602ced14', 'truesize': '1073741824', 'type': 
'SPARSE', 'lease': {'path': '/dev/96000ec9-e181-44eb-893f-e0a36e3a6775/leases', 
'owners': [], 'version': None, 'offset': 49056}} (volume:279)
2021-08-03 15:50:58,681+0300 INFO  (jsonrpc/3) [vdsm.api] FINISH getVolumeInfo 
return={'info': {'status': 'OK', 'domain': 
'96000ec9-e181-44eb-893f-e0a36e3a6775', 'voltype': 'LEAF', 'description': '', 
'parent': '7611ebcf-5323-45ca-b16c-9302d0bdedc6', 'format': 'COW', 
'generation': 0, 'image': '205a30a3-fc06-4ceb-8ef2-018f16d4ccbb', 'disktype': 
'DATA', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 
'children': [], 'pool': '', 'ctime': '1627991040', 'capacity': '42949672960', 
'uuid': u'17618ba1-4ab8-49eb-a991-fc3d602ced14', 'truesize': '1073741824', 
'type': 'SPARSE', 'lease': {'path': 
'/dev/96000ec9-e181-44eb-893f-e0a36e3a6775/leases', 'owners': [], 'version': 
None, 'offset': 49056}}} from=:::10.252.80.201,41898, 
flow_id=3bf9345d-fab2-490f-ba44-6aa014bbb743, 
task_id=0c307c2c-9bd3-4d1a-9db8-ec45d822bc71 (api:54)
2021-08-03 15:50:58,711+0300 INFO  (jsonrpc/3) [virt.vm] 
(vmId='1c1d20ed-3167-4be7-bff3-29845142fc57') Starting merge with 
jobUUID=u'62bf8c83-cd78-42a5-b57d-d67ddfdee8ee', original 
chain=7611ebcf-5323-45ca-b16c-9302d0bdedc6 < 
17618ba1-4ab8-49eb-a991-fc3d602ced14 (top), disk='sdb', base='sdb[1]', 
top=None, bandwidth=0, flags=12 (vm:5951)
2021-08-03 15:50:58,735+0300 INFO  (jsonrpc/3) [api.virt] FINISH merge 
return={'status': {'message': 'Done', 'code': 0}} 
from=:::10.252.80.201,41898, flow_id=3bf9345d-fab2-490f-ba44-6aa014bbb743, 
vmId=1c1d20ed-3167-4be7-bff3-29845142fc57 (api:54)
2021-08-03 15:50:58,735+0300 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call 
VM.merge succeeded in 0.37 seconds (__init__:312)