Re: Questions about storage

2023-08-09 Thread Cole Robinson
On 8/8/23 9:47 PM, Blake McBride wrote:
> Greetings,
> 
> I make pretty regular use of VMs on my Linux box.  I've used KVM,
> VirtualBox, and VMWare.  I'd like to make more use of KVM/virt-manager
> but I am having a number of problems using virt-manager.  I'm pretty
> sure the problem is me.  I just need to understand it more.  I think all
> of my questions are about storage.  I hope someone on this list can help
> me.  Here are my questions:
> 
> 1.  Is there any way of escaping the whole "storage pool" concept?  I'd
> like to just specify a directory to put my files in rather than needing
> to create a pool each time.  Likewise for the ISO I use to gen the
> system.  I want to be able to simply browse my disk and select the ISO.
> 

In the storage browser UI, there's a `Browse Local` button which opens
a native file browser, which kinda escapes the pool UI. But no, there's
not a simple UI way to create a new directory for each VM. The
virt-manager and libvirt convention is to place all VM disk images in
one directory. Straying from that requires more config and UI clicks.
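
If you do want a dedicated directory, a one-off directory pool can also
be defined from the command line, which may be less clicky. A sketch
with virsh (the pool name and path here are placeholders):

virsh pool-define-as mypool dir --target /path/to/my/images
virsh pool-build mypool
virsh pool-start mypool
virsh pool-autostart mypool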

> 2.  I need all of the files associated with a particular VM in one
> independent place so I can back it up and move it as a unit easily. 
> This includes the qcow2 file and all of the VM meta information files. 
>

No, unfortunately this is not easy to do with libvirt. XML and disk
images are never in the same place, UEFI variables are somewhere else,
TPM state is in yet another place, etc. And I don't know of a tool that
simplifies moving all these pieces. Maybe some virt-v2v invocation can
do it.
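
If you only need the XML + disk images, the manual steps look roughly
like this (a sketch; the nvram path below is a common default but
varies by distro, and TPM state is not covered):

# on the source host
sudo virsh dumpxml $VMNAME > $VMNAME.xml
# copy the XML, the disk images referenced in it, and any nvram file,
# often under /var/lib/libvirt/qemu/nvram/

# on the destination host
sudo virsh define $VMNAME.xml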

- Cole



Re: Virt-manager and guest window stop reacting after some time-out

2023-07-05 Thread Cole Robinson
On 7/4/23 10:48 AM, Leon Pollak wrote:
> Hello.
> After upgrading to FC38 I have the following problem:
> After some time of inactivity (working in other windows in my FC38 KDE
> machine), switching back to the virt-manager (4.1.0) and guest windows
> does nothing. I see the window caption displaying "Virtual Machine
> Manager" or the name of my VM, but nothing happens.
> I still can switch back to the host machine.
> If I run "systemctl restart libvirtd", close all windows and run
> virt-manager again, after providing the root password everything
> returns to its normal state - the guest continues to run (from the
> same place), manager is working normally.
> Till the next "time-out" which is rather "long" - about 10-15 minutes.
> 
> Does someone know what to do in such a case?
> 

It's probably this systemd regression, it's hitting lots of libvirt
users right now: https://bugzilla.redhat.com/show_bug.cgi?id=2213660

- Cole



Re: [PATCH 0/3] Fix the virt-install progress bar file size

2022-12-14 Thread Cole Robinson
On 11/9/22 4:33 AM, Toshiki Sonoda wrote:
> Fix the virt-install progress bar file size
> 
> These patches improve the virt-install progress bar.
> 
> Since version 4.0, implementation of the progress bar has changed.
> If the file copy of vmlinuz and initrd.img completes quickly 
> (less than self.update_period),
> the process ends without updating the progress bar.
> In short, the result is printed as 0B.
> 
> In addition, 0B is printed at the "Creating domain" step.
> 0B is meaningless and can be misread as a failed transfer.
> 
> Toshiki Sonoda (3):
>   virtinstall: Improve the progress bar output by meter.end()
>   virtinstall: Fix the allocating disk size printed by the progress bar
>   virtinstall: Hide total_size in the progress bar if it doesn't need
> 
>  tests/data/meter/meter7.txt  |  2 ++
>  tests/data/meter/meter8.txt  |  6 
>  tests/data/meter/meter9.txt  |  4 +++
>  tests/test_misc.py   | 49 
>  virtManager/object/domain.py |  2 +-
>  virtinst/_progresspriv.py| 12 +---
>  virtinst/diskbackend.py  |  3 +-
>  virtinst/install/urlfetcher.py   |  6 ++--
>  virtinst/install/volumeupload.py |  3 +-
>  virtinst/progress.py |  4 +--
>  virtinst/storage.py  |  3 +-
>  11 files changed, 76 insertions(+), 18 deletions(-)
>  create mode 100644 tests/data/meter/meter7.txt
>  create mode 100644 tests/data/meter/meter8.txt
>  create mode 100644 tests/data/meter/meter9.txt
> 

Thanks for the report, and the patches! I see all the reported issues
with this reproducer:

./virt-install --connect test:///default --location
tests/data/fakemedia/fake-f26-netinst.iso

Patch #1 was a bit tough to understand, since it's adding test cases,
internal API change, and behavior change all at once. In the process of
trying to understand it, I found a simpler fix that doesn't require any
internal API change. I pushed that as patch #1, and pushed your patches
#2 and #3 on top of that, with some adjustments due to patch #1 being
different.

Please test git and confirm the changes still fix your issues.
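
For example, from a fresh checkout (no install needed, assuming the
test deps are available):

git clone https://github.com/virt-manager/virt-manager.git
cd virt-manager
./virt-install --connect test:///default --location tests/data/fakemedia/fake-f26-netinst.iso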

Thanks,
Cole



Re: [PATCH virt-manager] Fix uninitialized controller index

2022-12-13 Thread Cole Robinson
On 12/7/22 5:20 PM, Charles Arnold wrote:
> Subject: Fix uninitialized controller index
> From: Charles Arnold carn...@suse.com Tue Nov 15 08:17:23 2022 -0700
> Date: Tue Nov 15 08:17:23 2022 -0700:
> Git: fc8f0027a54ab8405e584398ca6d31d7c2108065
> 
> When adding a second virtio-scsi controller to a guest from the
> Add Hardware dialog an exception is thrown.
> Error building device XML: unsupported operand type(s) for +: 'NoneType'
> and 'int'
> This happens because the index for the first controller created
> was not initialized to zero.
> 
> Signed-off-by: Charles Arnold 
> 
> diff --git a/virtManager/addhardware.py b/virtManager/addhardware.py
> index 0a8e33d3..fd576839 100644
> --- a/virtManager/addhardware.py
> +++ b/virtManager/addhardware.py
> @@ -1562,6 +1562,8 @@ class vmmAddHardware(vmmGObjectUI):
>          if len(controller_num) > 0:
>              index_new = max([x.index for x in controller_num]) + 1
>              dev.index = index_new
> +        else:
> +            dev.index = 0
> 
>          dev.type = controller_type
> 
> 

Thanks for the patch. I commented on your PR, let's follow up there:

https://github.com/virt-manager/virt-manager/pull/459

- Cole



Re: [PATCH 1/2] cloner: clone serial files

2022-12-13 Thread Cole Robinson
On 12/2/22 3:45 AM, Oleg Vasilev wrote:
> Before this, on clone the serial file would remain the same for the cloned
> domain. This doesn't make much sense as the output would be an intermix of two
> unrelated output sequences.
> 
> Here we apply the same filename changing algorithm as with disk files.
> 
> Signed-off-by: Oleg Vasilev 

Thanks for the patches! I commented on your PR, let's follow up there:

https://github.com/virt-manager/virt-manager/pull/466

- Cole



Re: [virt-tools-list] [virt-manager] [PATCH] diskbackend: Drop support for sheepdog

2022-11-07 Thread Cole Robinson
On 11/2/22 8:45 AM, Lin Ma wrote:
> The sheepdog project is no longer actively developed, and libvirt
> removed support for the sheepdog storage backend in v8.8.0. Let's drop it.
> 
> Signed-off-by: Lin Ma 
> ---
>  tests/data/testdriver/testdriver.xml | 40 ++--
>  virtManager/object/storagepool.py|  2 --
>  virtinst/storage.py  |  8 +++---
>  3 files changed, 5 insertions(+), 45 deletions(-)

This needed `pytest --regenerate-output` changes, but I squashed those in
and pushed.

Thanks,
Cole



Re: [virt-tools-list] [virt-manager] [PATCH 0/2] Check availability of usb redirdevs and spicevmc channels in domcaps

2022-11-07 Thread Cole Robinson
On 11/4/22 12:11 AM, Lin Ma wrote:
> Libvirt domcaps can advertise the redirect devices and spicevmc channel
> devices since v8.9.0.
> Virt-install adds usb redirdevs and spicevmc channels by default on x86.
> This patchset adds checks to avoid using redirect devices/spicevmc channel
> devices when not supported by the target qemu.
> 
> Lin Ma (2):
>   guest: Query availability of usb redirdevs in domcaps
>   guest: Query availability of spicevmc channels in domcaps
> 
>  virtinst/domcapabilities.py | 14 ++
>  virtinst/guest.py   |  4 
>  2 files changed, 18 insertions(+)
> 

These patches break the test suite very obviously.

Can you start submitting PRs via github? CI would have made this clear.

They will also regress runtime behavior: every VM created against older
libvirt will now think redirdev and spicevmc aren't supported, and not
add those devices. Check around in domcapabilities.py for examples of
how we have handled this when new libvirt output pops up. You can use a
version check, or check for the presence of an empty redirdev attribute
to know that we can trust the empty caps output.
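
A rough sketch of the presence-check idea (attribute names here are
illustrative only, not the actual domcapabilities.py API):

    def supports_redirdev(domcaps):
        # illustrative: older libvirt doesn't report redirdev in
        # domcaps at all, so treat 'missing' as 'supported'
        redirdev = getattr(domcaps.devices, "redirdev", None)
        if redirdev is None or not redirdev.present:
            return True
        # newer libvirt: trust the advertised model list
        return bool(redirdev.models)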

- Cole



Re: importing luks-encrypted lvm volume

2022-11-06 Thread Cole Robinson
On 11/3/22 9:25 AM, Erik Huelsmann wrote:
> Hi,
> 
> I'd like to use virt-install to create a new VM which imports an
> existing volume. This volume is luks-encrypted. When I manually attach
> the VM with the correct encryption settings, the resulting XML looks
> like this:
> 
>     <disk type='block' device='disk'>
>       <!-- driver/source/target sub-elements stripped by the list archive -->
>       <encryption format='luks'>
>         <secret type='passphrase'
>            uuid='27a023d0-5afa-11ed-9822-c3a776c2fdff'/>
>       </encryption>
>     </disk>
> 
> I'm able to create all of the above XML with the exception of
> 
>       <encryption format='luks'>
>         <secret type='passphrase'
>            uuid='27a023d0-5afa-11ed-9822-c3a776c2fdff'/>
>       </encryption>
> 
> Is there a way to have virt-install (or virt-xml) generate this as part
> of the VM definition?

It's not pretty, but the arguments to `--disk` should do it. Put them in
the same `--disk` string that configures the rest of the device:

--disk
xpath0.set=./encryption/@format=luks,xpath1.set=./encryption/secret/@type=passphrase,xpath2.set=./encryption/secret/@uuid=27a023d0-5afa-11ed-9822-c3a776c2fdff
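
For example, in the context of a full import (the volume path here is a
placeholder for your LVM device):

virt-install --import --name myvm --memory 2048 --osinfo linux2020 \
--disk /dev/vg0/myvol,xpath0.set=./encryption/@format=luks,xpath1.set=./encryption/secret/@type=passphrase,xpath2.set=./encryption/secret/@uuid=27a023d0-5afa-11ed-9822-c3a776c2fdff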

- Cole



Re: where are the files ?

2022-10-10 Thread Cole Robinson
On 10/8/22 9:36 AM, Bob Power wrote:
> Hi,
> 
> I'm moving "stuff" to a new machine. (Rocky 8.4 to Alma 8.6)
> 
> I have been using virt-manager on the old machine.
> 
> I want to open virt-manager on the new machine and see what I saw on
> the old machine - the connections and list of VMs under those connections.
> 
> I have copied everything under /etc/libvirt/qemu over but no joy.
> 
> The man pages (man virt-manager) do not list any files.
> 
> Can anyone give me a list of file paths used by virt-manager when it
> starts up to populate the GUI

virt-manager's connection list is stored in dconf. But it's probably not
worth trying to transfer that over, instead manually `File->Open
Connection` and recreate the list.

If once connected to local qemu:///system your VMs aren't showing up,
and you copied over /etc/libvirt/qemu files, you may need to restart
libvirtd or virtqemud (or reboot). The proper way to do this would have
been `sudo virsh dumpxml $VMNAME > $VMNAME.xml` on the source and then
`sudo virsh define $VMNAME.xml` on the dest.
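
For example, to export every definition in one go (a sketch; assumes VM
names without spaces):

for vm in $(sudo virsh list --all --name); do sudo virsh dumpxml "$vm" > "$vm.xml"; done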

- Cole



Re: [virt-tools-list] [virt-manager] [PATCH] cli: --cpu: Add maxphysaddr.{mode,bits} options

2022-08-20 Thread Cole Robinson
On 8/19/22 6:18 AM, Lin Ma wrote:
> This commit adds support for CPU physical address bits control. It's
> useful for VMs with huge amounts of RAM.
> 
> E.g.
> --cpu Cascadelake-Server,maxphysaddr.mode=emulate,maxphysaddr.bits=46
> 
> Signed-off-by: Lin Ma 
> ---
>  .../cli/compare/virt-install-many-devices.xml |  1 +
>  .../compare/virt-install-testdriver-edgecases.xml |  4 +++-
>  tests/test_cli.py |  5 +++--
>  virtinst/cli.py   |  3 +++
>  virtinst/domain/cpu.py| 15 ++-
>  5 files changed, 24 insertions(+), 4 deletions(-)

Thanks, pushed now!

- Cole



Re: [virt-tools-list] [virt-manager] [PATCH] tests: Add a compat check for linux2020 in amd-sev test case

2022-08-20 Thread Cole Robinson
On 8/19/22 6:18 AM, Lin Ma wrote:
> It avoids amd-sev test failure if using older osinfo-db.
> 
> Signed-off-by: Lin Ma 
> ---
>  tests/test_cli.py | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Thanks, pushed now

- Cole



Re: [virt-tools-list] [virt-manager] [PATCH] cli: --disk: Add driver.metadata_cache options

2022-08-17 Thread Cole Robinson
On 8/16/22 12:59 AM, Lin Ma wrote:
> Properly setting the metadata cache size can provide better performance
> in case of using big qcow2 images.
> 
> This patch introduces two driver options:
> * driver.metadata_cache.max_size
> * driver.metadata_cache.max_size.unit
> 
> E.g. --disk ...,driver.type=qcow2,\
>  driver.metadata_cache.max_size=2,\
>  driver.metadata_cache.max_size.unit=MiB
> 
> BTW, metadata cache size control is currently supported only for qcow2.
> Regarding how to properly calculate the cache size of qcow2, please refer
> to qemu's documentation.
> 
> Signed-off-by: Lin Ma 
> ---
>  tests/data/cli/compare/virt-install-many-devices.xml | 9 +
>  tests/test_cli.py| 1 +
>  virtinst/cli.py  | 7 +++
>  virtinst/devices/disk.py | 5 +
>  4 files changed, 22 insertions(+)
> 
> diff --git a/tests/data/cli/compare/virt-install-many-devices.xml b/tests/data/cli/compare/virt-install-many-devices.xml
> index a73343a9..a33dc16a 100644
> --- a/tests/data/cli/compare/virt-install-many-devices.xml
> +++ b/tests/data/cli/compare/virt-install-many-devices.xml
> @@ -423,6 +423,15 @@
>    <!-- surrounding context stripped by the list archive -->
> +    <disk type="file" device="disk">
> +      <driver type="qcow2">
> +        <metadata_cache>
> +          <max_size unit="KiB">2048</max_size>
> +        </metadata_cache>
> +      </driver>
> +      <source file="/tmp/disk1.qcow2"/>
> +      <target dev="..." bus="..."/>
> +    </disk>
>    <!-- surrounding context stripped by the list archive -->
> diff --git a/tests/test_cli.py b/tests/test_cli.py
> index 774db098..259ac78c 100644
> --- a/tests/test_cli.py
> +++ b/tests/test_cli.py
> @@ -605,6 +605,7 @@ source.reservations.managed=no,source.reservations.source.type=unix,source.reser
>  --disk path=/fooroot.img,size=.0001,transient=on
>  --disk source.dir=/
>  --disk type=nvme,source.type=pci,source.managed=no,source.namespace=2,source.address.domain=0x0001,source.address.bus=0x02,source.address.slot=0x00,source.address.function=0x0
> +--disk /tmp/disk1.qcow2,size=16,driver.type=qcow2,driver.metadata_cache.max_size=2048,driver.metadata_cache.max_size.unit=KiB
>  
>  
>  --network 
> user,mac=12:34:56:78:11:22,portgroup=foo,link_state=down,rom_bar=on,rom_file=/tmp/foo
> diff --git a/virtinst/cli.py b/virtinst/cli.py
> index c4dffd34..042b500f 100644
> --- a/virtinst/cli.py
> +++ b/virtinst/cli.py
> @@ -3497,6 +3497,8 @@ class ParserDisk(VirtCLIParser):
>  "driver.io": "io",
>  "driver.name": "driver_name",
>  "driver.type": "driver_type",
> +"driver.metadata_cache.max_size": "metadata_cache.max_size",
> +"driver.metadata_cache.max_size.unit": 
> "metadata_cache.max_size.unit",
>  }
>  

I botched testing this. These aliases should never be added for new cli
properties; they are for maintaining cli backcompat when adding new
names for the same parameter (like when we changed --disk driver_name to
driver.name for consistency). So I reverted this bit.

- Cole



Re: [virt-tools-list] [virt-manager] [PATCH] fsdetails: Fix an error with source.socket of virtiofs

2022-08-17 Thread Cole Robinson
On 8/16/22 12:59 AM, Lin Ma wrote:
> Using the source.socket of virtiofs needs a virtiofsd daemon launched
> outside of libvirtd, so the filesystem UI doesn't support it yet. If
> users need it they can set it manually in the XML editor.
> But if we view the filesystem info of such a VM on the details page,
> It fails with this error message:
> 
> Traceback (most recent call last):
>   File "/usr/share/virt-manager/virtManager/details/details.py", line 1713, 
> in _refresh_page
> self._refresh_filesystem_page(dev)
>   File "/usr/share/virt-manager/virtManager/details/details.py", line 2241, 
> in _refresh_filesystem_page
> self.fsDetails.set_dev(dev)
>   File "/usr/share/virt-manager/virtManager/device/fsdetails.py", line 193, 
> in set_dev
> self.widget("fs-source").set_text(dev.source)
> TypeError: Argument 1 does not allow None as a value
> 
> This patch fixes the above issue by leaving the 'source path' info
> blank in the source.socket case.
> Considering that showing 'target path' info without source info is kind
> of meaningless, this patch leaves the 'target path' info blank as well.
> 
> Signed-off-by: Lin Ma 
> ---
>  virtManager/device/fsdetails.py | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/virtManager/device/fsdetails.py b/virtManager/device/fsdetails.py
> index 40868d1c..67716eab 100644
> --- a/virtManager/device/fsdetails.py
> +++ b/virtManager/device/fsdetails.py
> @@ -190,10 +190,10 @@ class vmmFSDetails(vmmGObjectUI):
>              self.widget("fs-format-combo"), dev.driver_format)
> 
>          if dev.type != DeviceFilesystem.TYPE_RAM:
> -            self.widget("fs-source").set_text(dev.source)
> +            self.widget("fs-source").set_text(dev.source or "")
>          else:
>              self.widget("fs-ram-source-spin").set_value(int(dev.source) // 1024)
> -        self.widget("fs-target").set_text(dev.target or "")
> +        self.widget("fs-target").set_text(dev.target and dev.source or "")
>          self.widget("fs-readonly").set_active(dev.readonly)
>  
>  self._active_edits = []

If you do 'FOO and BAR or BAZ' and both FOO and BAR evaluate True, then
BAR is returned. So this last line would put dev.source in the target
path UI when both values are set. I don't think it's necessary to keep
dev.target empty in this case, so I dropped this last line and pushed.
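
A quick way to see the gotcha in the Python REPL:

    >>> "target" and "source" or ""
    'source'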

Thanks,
Cole



Re: [virt-tools-list] [virt-manager] [PATCH] cli: --disk: Add driver.metadata_cache options

2022-08-17 Thread Cole Robinson
On 8/16/22 12:59 AM, Lin Ma wrote:
> Properly setting the metadata cache size can provide better performance
> in case of using big qcow2 images.
> 
> This patch introduces two driver options:
> * driver.metadata_cache.max_size
> * driver.metadata_cache.max_size.unit
> 
> E.g. --disk ...,driver.type=qcow2,\
>  driver.metadata_cache.max_size=2,\
>  driver.metadata_cache.max_size.unit=MiB
> 
> BTW, metadata cache size control is currently supported only for qcow2.
> Regarding how to properly calculate the cache size of qcow2, please refer
> to qemu's documentation.
> 

Thanks, pushed now.

- Cole



Re: gpg Signature File

2022-08-08 Thread Cole Robinson
With the latest release I've published a .asc file now.
https://virt-manager.org/download
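
For example, something like this should verify the 4.1.0 tarball (the
exact .asc filename is whatever the download page ships):

gpg --verify virt-manager-4.1.0.tar.gz.asc virt-manager-4.1.0.tar.gz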

On 7/25/22 11:30 AM, Wayne Sallee wrote:
> Where is the gpg signature file?
> 
> so that I can run the following command:
> gpg --verify SIGNATURE-FILE  virt-manager-4.0.0.tar.gz
> 
> Wayne Sallee
> wa...@waynesallee.com
> http://www.WayneSallee.com
> 


- Cole



ANNOUNCE: virt-manager 4.1.0 released

2022-08-08 Thread Cole Robinson
I'm happy to announce the release of virt-manager 4.1.0!

virt-manager is a desktop application for managing KVM, Xen, and LXC
virtualization via libvirt.

The release can be downloaded from:

http://virt-manager.org/download/

This release includes:

- Fix build with setuptools-61 (Peter Alfredsen, Miro Hrončok)
- add UI and cli support for qemu-vdagent channel (Jonathon Jongsma)
- cli: More --iothreads suboptions (Lin Ma)
- launch_security: Use SEV-ES policy=0x07 if host supports it (Charles
  Arnold)
- cli: Add support for URL query with disks (Martin Kletzander)

Thanks to everyone who has contributed to this release through testing,
bug reporting, submitting patches, and otherwise sending in feedback!

Thanks,
Cole



Re: [PATCH virt-manager v2] Add support for enabling Secure Encrypted Virtualization in the GUI

2022-08-03 Thread Cole Robinson
On 4/7/22 3:14 PM, Charles Arnold wrote:
> From 9646a5dd083a5acdf6cf519863354efaf4bdd07c Mon Sep 17 00:00:00 2001
> From: Charles Arnold 
> Date: Thu, 7 Apr 2022 13:00:10 -0600
> Subject: [PATCH v2 1/1] Add support for enabling Secure Encrypted
> Virtualization
>  in the GUI
> 
> Add an "Enable launch security" checkbox on the Details memory tab.
> Do the minimal configuration required for libvirt to enable this feature
> on compatible hardware.
> 
> Allow libvirt to determine the firmware file. This just requires
> enabling of efi and setting an appropriate policy. Check the libvirt
> domain capabilities to determine whether SEV or SEV-ES is supported.
> 

Sorry for the embarrassing delay replying here :/

I separated out a commit with the launch_security.py change to use
SEV-ES policy, added a test case, and pushed with you as author:
https://github.com/virt-manager/virt-manager/commit/424283ad1db9c4da519fac698486967e6b6557b0

Some more comments below

> Signed-off-by: Charles Arnold 
> ---
>  ui/details.ui  | 15 ++-
>  virtManager/details/details.py | 15 ++-
>  virtManager/object/domain.py   | 14 +-
>  virtinst/domain/launch_security.py | 23 +++
>  virtinst/domain/memorybacking.py   |  3 +++
>  virtinst/domcapabilities.py    |  8 +++-
>  6 files changed, 66 insertions(+), 12 deletions(-)
> 

> diff --git a/virtManager/object/domain.py b/virtManager/object/domain.py
> index 70e4e49f..feb43bd2 100644
> --- a/virtManager/object/domain.py
> +++ b/virtManager/object/domain.py
> @@ -688,7 +688,7 @@ class vmmDomain(vmmLibvirtObject):
>  guest.memoryBacking.access_mode = access_mode
> 
>  def define_memory(self, memory=_SENTINEL, maxmem=_SENTINEL,
> -    mem_shared=_SENTINEL):
> +    mem_shared=_SENTINEL, sevmem=_SENTINEL):
>  guest = self._make_xmlobj_to_define()
> 
>  if memory != _SENTINEL:
> @@ -697,6 +697,15 @@ class vmmDomain(vmmLibvirtObject):
>  guest.memory = int(maxmem)
>  if mem_shared != _SENTINEL:
>  self._edit_shared_mem(guest, mem_shared)
> +        if sevmem != _SENTINEL:
> +            if sevmem is True:
> +                guest.launchSecurity.type = "sev"
> +                guest.launchSecurity.set_defaults(guest)
> +                guest.memoryBacking.set_locked(True)
> +            else:
> +                guest.launchSecurity.type = None
> +                guest.launchSecurity.policy = None
> +                guest.memoryBacking.set_locked(False)
> 
>  self._redefine_xmlobj(guest)
>

The problem with toggling the locked memory setting in lockstep is that
if the user enables then disables sev, it could take the locked memory
setting with it. Pretty niche, but this is the problem with coupling
multiple options together under a single UI knob. The alternative is
adding a 'Lock memory' option to the UI, which sucks in its own way...

> @@ -1310,6 +1319,9 @@ class vmmDomain(vmmLibvirtObject):
>  def get_description(self):
>  return self.get_xmlobj().description
> 
> +    def get_launch_security_type(self):
> +    return self.get_xmlobj().launchSecurity.type
> +
>  def get_boot_order(self):
>  legacy = not self.can_use_device_boot_order()
>  return self.xmlobj.get_boot_order(legacy=legacy)
> diff --git a/virtinst/domain/launch_security.py
> b/virtinst/domain/launch_security.py
> index 7af71811..f4a4a4b8 100644
> --- a/virtinst/domain/launch_security.py
> +++ b/virtinst/domain/launch_security.py
> @@ -19,16 +19,23 @@ class DomainLaunchSecurity(XMLBuilder):
>  kernelHashes = XMLProperty("./@kernelHashes", is_yesno=True)
> 
>  def _set_defaults_sev(self, guest):
> -    if not guest.os.is_q35() or not guest.is_uefi():
> -        if not guest.os.is_q35() or not guest.is_uefi():
> -            raise RuntimeError(_("SEV launch security requires a Q35 UEFI machine"))
> +        if not guest.os.is_q35():
> +            raise RuntimeError(_("SEV launch security requires a Q35 machine"))
> +        # Libvirt will select the appropriate firmware file based on the policy
> +        # defined below and if efi is enabled.
> +        if not guest.is_uefi():
> +            guest.os.firmware = 'efi'

We can't change firmware setting here. This same code path would be used
for UI editing an existing VM, and switching from bios to efi could make
that VM unbootable. The UI doesn't give any option for changing firmware
for an existing VM for that reason. Also this is likely to cause issues
in virt-install too, where we have to take pains to handle the uefi
request early, since later VM defaults depend on it.

For virt-manager 'customize' dialog, the solution is to disable the sev
UI with a error tooltip saying UEFI is required, and make the user
select that first. For an existing VM not using UEFI, they are out of luck.

For virt-install if you wanted to make the defaults smarter, you could
extend Guest.prefers_uefi to check for launchSecurity=sev. Would 

Re: [virt-manager] [PATCH v2] cli: Add --iothreads defaultiothread.thread_pool_{min,max}

2022-08-02 Thread Cole Robinson
On 8/2/22 1:02 AM, Lin Ma wrote:
> It allows to set the thread pool size to optimize spawning worker threads
> for the default event loop in real time environment. For example:
> 
> --iothreads defaultiothread.thread_pool_min=8,\
> defaultiothread.thread_pool_max=16
> 
> Signed-off-by: Lin Ma 
> ---
>  tests/data/cli/compare/virt-install-many-devices.xml |  1 +
>  tests/test_cli.py|  2 +-
>  virtinst/cli.py  | 10 ++
>  virtinst/guest.py| 11 ++-
>  4 files changed, 22 insertions(+), 2 deletions(-)

Thanks! pushed now

- Cole



Re: [virt-manager] [PATCH 0/2] Add --defaultiothread and iothreadids improvements

2022-08-01 Thread Cole Robinson
On 7/29/22 7:16 AM, Lin Ma wrote:
> * Add --defaultiothread
> 
> * Add two optional attributes to --iothreads iothreadids:
> --iothreads iothreadids.iothread[0-9]*.thread_pool_min
> --iothreads iothreadids.iothread[0-9]*.thread_pool_max
> 
> Lin Ma (2):
>   cli: Add iothreadids attributes thread_pool_min and thread_pool_max
>   cli: Add support for --defaultiothread
> 
>  man/virt-install.rst  | 15 ++
>  man/virt-xml.rst  |  1 +
>  .../cli/compare/virt-install-many-devices.xml |  3 +-
>  tests/test_cli.py |  5 +++-
>  virtinst/cli.py   | 29 ++-
>  virtinst/guest.py | 15 --
>  6 files changed, 63 insertions(+), 5 deletions(-)
> 

Thanks, I pushed patch #1. Second patch looks okay but I'd prefer to
stuff those options as suboptions of --iothreads, like we do for
iothreadids. So

--iothreads
defaultiothread.thread_pool_min=8,defaultiothread.thread_pool_max=16

This lets us drop the .rst file updates for something that's pretty
advanced/obscure, and keeps the current pattern for handling
/domain/iothreads and /domain/iothreadids. Thoughts?

Thanks,
Cole



Re: [virt-manager] [PATCH] domain: cpu: Clear 'migratable' when changing to custom cpu

2022-08-01 Thread Cole Robinson
On 7/20/22 2:53 AM, Lin Ma wrote:
> Otherwise going host-passthrough -> custom can cause a libvirt validation
> error, because libvirt fills in the default value (migratable='on') for
> host-passthrough in the domain XML.
> 
> Signed-off-by: Lin Ma 
> ---
>  virtinst/domain/cpu.py | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/virtinst/domain/cpu.py b/virtinst/domain/cpu.py
> index e40c5307..5de42b4e 100644
> --- a/virtinst/domain/cpu.py
> +++ b/virtinst/domain/cpu.py
> @@ -358,6 +358,7 @@ class DomainCpu(XMLBuilder):
>  
>  def set_model(self, guest, val):
>  log.debug("setting cpu model %s", val)
> +self.migratable = None
>  if val:
>  self.mode = "custom"
>  if not self.match:

Thanks, pushed now

- Cole



Re: [virt-manager PATCH 0/2] Add support for qemu-vdagent channel

2022-06-13 Thread Cole Robinson
On 5/18/22 6:03 PM, Jonathon Jongsma wrote:
> Libvirt 8.4.0 will have support for the qemu-vdagent channel that will allow
> clipboard sync with VNC guests. Add this support to virt-install/virt-manager.
> 
> Jonathon Jongsma (2):
>   virt-install: add support for qemu-vdagent channel
>   virt-manager: add support for qemu-vdagent channel
> 
>  man/virt-install.rst  |  9 ++
>  .../cli/compare/virt-install-many-devices.xml |  7 +
>  tests/test_cli.py |  1 +
>  ui/addhardware.ui | 28 ++-
>  ui/details.ui | 27 +-
>  virtManager/addhardware.py| 10 ++-
>  virtManager/details/details.py|  7 +++--
>  virtinst/cli.py   |  2 ++
>  virtinst/devices/char.py  |  8 +-
>  9 files changed, 93 insertions(+), 6 deletions(-)
> 

Thanks, pushed now.

I changed the Details device list label back to the old format, since
the new format felt too wordy to me. I special cased the qemu-vdagent to
list it as 'Channel qemu-vdagent' which I think will cover things.

- Cole



Re: no release tarball

2022-04-20 Thread Cole Robinson
On 4/19/22 1:25 PM, Henrik Riomar wrote:
> Hi,
> 
> Looks like the v1.22 release was never uploaded:
> https://people.redhat.com/~rjones/virt-what/files/
> 
> Can you look into this?
> 

ccing rjones in case he missed that this is about virt-what

- Cole



Re: Add support for enabling Secure Encrypted Virtualization in the GUI

2022-04-20 Thread Cole Robinson
On 4/19/22 1:21 PM, Daniel P. Berrangé wrote:
> On Tue, Apr 19, 2022 at 12:05:29PM -0400, Cole Robinson wrote:
>> On 4/4/22 11:49 AM, Charles Arnold wrote:
>>>
>>>
>>> On 4/4/22 6:50 AM, Daniel P. Berrangé wrote:
>>>> On Fri, Apr 01, 2022 at 12:13:17PM -0600, Charles Arnold wrote:
>>>>>  From d700e8cee7cd525c0022b5a9a440f64c4ab149f0 Mon Sep 17 00:00:00 2001
>>>>> From: Charles Arnold 
>>>>> Date: Fri, 1 Apr 2022 12:01:21 -0600
>>>>> Subject: [PATCH 1/1] Add support for enabling Secure Encrypted
>>>>> Virtualization
>>>>>   in the GUI
>>>>>
>>>>> Add an "Enable Launch Security" checkbox on the Details memory tab.
>>>>> Do the minimal configuration required for libvirt to enable this feature
>>>>> on compatible hardware.
>>>>>
>>>> Don't we need to turn on the 'iommu' option for all virtio devices
>>>> too, and disable PXE on any NICs ?
>>>>
>>>> https://libvirt.org/kbase/launch_security_sev.html#virtio
>>>>
>>>
>>> I used to enumerate through the virtio devices in an old version of this
>>> patch
>>> for virt-manager and enable iommu but it really wasn't reasonable for
>>> virt-manager to track which virtio devices needed iommu enabled.
>>> Additionally,
>>> libvirt will sometimes add a device when a VM is created. This patch
>>> leans on libvirt to do the right thing when sev is enabled similar to what
>>> happens when launch security is specified on the virt-install command line.
>>>
>>
>> Yeah, I would still like to see libvirt do this unless there's a good
>> reason why it can't. From my July 2020 mail
>> https://listman.redhat.com/archives/virt-tools-list/2020-July/017087.html
>>
>>> if sev
>>> launchSecurity _requires_ every virtio device to have iommu='on' then
>>> libvirt should be setting that itself. It doesn't need to hardcode it
>>> into the XML, it can set it at VM startup time. If the user set an
>>> explicit value in the XML then honor that but otherwise fill it in at
>>> runtime when it is required. Trying to deal with this in an app where we
>>> want to advertise turning the config off is basically an impossible
>>> problem to know if we are going to undo any explicit user config or not.
>>
>> danpb does this sound reasonable? If so I can work on this.
> 
> Something automagic sounds ok
> 
>> Also, anyone know if TDX and SNP will require virtio iommu setting as well?
> 
> I expect SNP will, but no idea about TDX.
> 

So apparently qemu 6.0.0+ already does this for us:
https://gitlab.com/qemu-project/qemu/-/commit/9f88a7a3df

Which is strange, because some of my sev testing definitely failed with
an iommu-sounding error in the guest kernel until I enabled iommu on
virtio-blk, but I can't reproduce it now. Maybe I was using an older
qemu; I think the host was fedora 34 at the time.

I'll send a patch for libvirt docs

I'll give this patch another look too

Thanks,
Cole



Re: [virt-manager question] Advice on using libvirt with better default arguments

2022-04-19 Thread Cole Robinson
On 4/8/22 11:01 PM, Cameron Showalter wrote:
> Hi,
> 
> I'm trying to learn libvirt enough have a python library around it to
> manage VM's. I'm at the part where I'm trying to make almost any OS easy
> to run based solely on the ISO. (For example: If I try to start a
> Windows 10 VM, there's extra work to get the OS to see the qcow volume
> during the install. But by creating one through the virt-manager GUI, it
> sees it automatically. So this is something you've solved outside
> libvirt). I'd like to get the same defaults as virt-manager into my
> python package. To do this, any advice on the easiest / most stable
> path? A couple ways I thought of:
> 
> - Maybe add virt-manager as a submodule, and call your python scripts to
> get the good default arguments? Is the back-end for your GUI stable
> enough long term for something like this?
> 
> - Do those defaults come straight from the `Libosinfo` library? And by
> learning how to use that, it'll provide the default libvirt xml blocks I
> can use for libvirt?

For this type of thing most people call out directly to virt-install.
It's the closest thing to an API for 'create a VM and give me good
defaults'. Trying to use vendored virtinst/ code could work but you'll
end up reimplementing parts of virtinstall.py.
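
A minimal sketch of that approach (the VM name, sizes, and ISO path are
placeholders; OS names come from `virt-install --osinfo list`):

    import subprocess

    # Let virt-install compute good defaults from the specified OS,
    # with install media detection as the first choice.
    subprocess.run([
        "virt-install",
        "--name", "win10-test",
        "--memory", "4096",
        "--vcpus", "2",
        "--disk", "size=40",
        "--cdrom", "/path/to/win10.iso",
        "--osinfo", "detect=on,name=win10",
    ], check=True)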

libosinfo is a critical piece but for virt-manager/virt-install usage
it's mostly 'does this OS support virtio or not' and 'give me known
install media location for OS $FOO'

- Cole



Re: Add support for enabling Secure Encrypted Virtualization in the GUI

2022-04-19 Thread Cole Robinson
On 4/4/22 11:49 AM, Charles Arnold wrote:
> 
> 
> On 4/4/22 6:50 AM, Daniel P. Berrangé wrote:
>> On Fri, Apr 01, 2022 at 12:13:17PM -0600, Charles Arnold wrote:
>>>  From d700e8cee7cd525c0022b5a9a440f64c4ab149f0 Mon Sep 17 00:00:00 2001
>>> From: Charles Arnold 
>>> Date: Fri, 1 Apr 2022 12:01:21 -0600
>>> Subject: [PATCH 1/1] Add support for enabling Secure Encrypted
>>> Virtualization
>>>   in the GUI
>>>
>>> Add an "Enable Launch Security" checkbox on the Details memory tab.
>>> Do the minimal configuration required for libvirt to enable this feature
>>> on compatible hardware.
>>>
>> Don't we need to turn on the 'iommu' option for all virtio devices
>> too, and disable PXE on any NICs ?
>>
>> https://libvirt.org/kbase/launch_security_sev.html#virtio
>>
> 
> I used to enumerate through the virtio devices in an old version of this
> patch
> for virt-manager and enable iommu but it really wasn't reasonable for
> virt-manager to track which virtio devices needed iommu enabled.
> Additionally,
> libvirt will sometimes add a device when a VM is created. This patch
> leans on libvirt to do the right thing when sev is enabled similar to what
> happens when launch security is specified on the virt-install command line.
> 

Yeah, I would still like to see libvirt do this unless there's a good
reason why it can't. From my July 2020 mail
https://listman.redhat.com/archives/virt-tools-list/2020-July/017087.html

> if sev
> launchSecurity _requires_ every virtio device to have iommu='on' then
> libvirt should be setting that itself. It doesn't need to hardcode it
> into the XML, it can set it at VM startup time. If the user set an
> explicit value in the XML then honor that but otherwise fill it in at
> runtime when it is required. Trying to deal with this in an app where we
> want to advertise turning the config off is basically an impossible
> problem to know if we are going to undo any explicit user config or not.

danpb does this sound reasonable? If so I can work on this.

Also, anyone know if TDX and SNP will require virtio iommu setting as well?



Anyways, until we can make this 'just work' I don't think it makes sense
in virt-manager UI. If making this work already requires XML edits that
virt-manager doesn't expose, IMO it's fine to tell users to also do:

sudo virt-xml VMNAME --edit --launchSecurity type=sev && sudo virt-xml
VMNAME --edit --memoryBacking locked=yes

Putting it in the UI before it works out of the box is just going to
create support headaches IMO.

Thanks,
Cole



Re: Translation fixup

2022-04-13 Thread Cole Robinson
On 3/23/22 6:38 AM, Temuri Doghonadze wrote:
> Hello,
> Wanted to start translation of virt-manager to Georgian several days
> ago, but weblate says the project is locked because of some errors.
> (https://translate.fedoraproject.org/projects/virt-manager/)
> Could you fix it, please?
> 

Should be fixed now, sorry about that

- Cole



ANNOUNCE: virt-manager 4.0.0 released

2022-03-02 Thread Cole Robinson
I'm happy to announce the release of virt-manager 4.0.0!
The release can be downloaded from: http://virt-manager.org/download/


Some notable defaults changes:

* virt-install: missing --os-variant/--osinfo is now a hard error in
  most cases. If you weren't specifying a value, or getting one from
  install media detection, you were probably getting crappy defaults
  and didn't realize it. If you hit this case you will see a big
  descriptive error hopefully guiding you to an easy solution.
  You can see the error and some more details in this email:

https://listman.redhat.com/archives/virt-tools-list/2022-February/msg00021.html

* For qemu x86 we now use mode=host-passthrough as the CPU default
  instead of mode=host-model

* We now use video model virtio-gpu/virtio-vga in many cases where
  we previously used qxl, following the suggestions here:
  https://www.kraxel.org/blog/2019/09/display-devices-in-qemu/

- We now add an emulated TPM device when the VM will use UEFI

* qemu x86 q35 VMs will be created with extra pcie-root-ports to
  facilitate device hotplug.

- We set disk discard=unmap when we know the disk storage is sparse,
  or when storage is a block device.


Some other notable changes:

- Add 'Enable shared memory' UI checkbox (Lin Ma)
- add UI preference to default to UEFI for new VMs (Charles Arnold)
- Add virtiofs filesystem driver UI option
- Fill in all --cputune, --cpu, --shmem, --input, and --boot suboptions
  (Hugues Fafard)
- virt-* mdev improvements (Shalini Chellathurai Saroja)
- bhyve improvements (Roman Bogorodskiy)
- Revive network portgroup UI


Notes for distro maintainers:

* We replaced usage of isoinfo with xorriso
* We now depend on python setuptools for build + install
* We added an explicit runtime requirement on pygobject >= 3.31.3.
  This is from late 2018 so probably not relevant for modern distros.


Thanks to everyone who has contributed to this release through testing,
bug reporting, submitting patches, and otherwise sending in feedback!

Thanks,
Cole



Re: Uninstall Virtual Machine Manager

2022-03-02 Thread Cole Robinson
On 3/2/22 9:16 AM, Andreas Schweiger wrote:
> Hi there,
> 
>  
> 
> I have installed the Windows binary of the application. How can I
> uninstall it?
> 

AFAIK nobody in the upstream virt-manager project is involved with
providing a Windows build. You will need to figure out where you got it,
and ask on their support channels.

- Cole



Re: [virt-tools-list] [virt-manager] [PATCH 0/7] Add a checkbox in memory details to control memory access mode

2022-02-20 Thread Cole Robinson
On 1/25/22 12:50 PM, Cole Robinson wrote:
> On 8/1/21 8:36 AM, Lin Ma wrote:
>> Future patches about virtiofs addhw support relies on shared memory access.
>>
>> Lin Ma (7):
>>   domcapabilities: Get filesystem devices
>>   domcapabilities: Add supports_filesystem_virtiofs()
>>   domcapabilities: Add supports_memorybacking_memfd()
>>   domain: memorybacking: Add function is_shared_access()
>>   domain: cpu: Add function has_private_memAccess_cells()
>>   domain: cpu: Add function all_shared_memAccess_cells()
>>   details: Add new checkbox to control shared memory access
>>
>>  ui/details.ui| 14 +
>>  virtManager/details/details.py   | 52 
>>  virtManager/object/domain.py | 17 ++-
>>  virtinst/domain/cpu.py   | 14 +
>>  virtinst/domain/memorybacking.py |  3 ++
>>  virtinst/domcapabilities.py  | 20 
>>  6 files changed, 119 insertions(+), 1 deletion(-)
>>
> 
> Sorry for the reaaally long delay. This came in right when I went on
> paternity leave and I'm still digging out of the backlog.
> 
> I've pushed this series now, but with some bits removed. I stripped the
> logic back to only handle enabling+disabling source_type=memfd and
> access_mode=shared. If the guest has numa cells configured, we leave the
> checkbox unchecked and disable it with a warning. Same thing if the
> domcaps don't advertise both virtiofs and memfd.
> 
> I think that logic covers what most users will need, going forward. If a
> user has  in their XML, I think they can be expected to make the
> shared memory change themselves. It makes me a little nervous getting
> into the business of processing  XML that virt-manager otherwise
> doesn't have any UI for. So my preference is to not do that.
> 
> If you have a good argument for why it should be readded, I'm open to
> it. But resubmitting it should come with uitests, since this adds some
> annoying corner cases that I will want to see tested. FWIW I tried to
> keep the code revert to a single commit, so hopefully it's easy to
> revive, if you are interested.
> 
> The other big thing I did was move the logic from details.py to
> domain.py, which will make it easier to share the interesting bits with
> the Add Hardware UI like I mentioned in this mail:
> https://listman.redhat.com/archives/virt-tools-list/2021-July/msg00016.html
> 
> I'm working towards a release, so unless you have patches with that work
> kicking around in a git branch, I'll make the addhw change in the next
> few weeks.

I updated the filesystem UI in git to show 9p vs virtiofs option. If
user selects virtiofs and they don't have shared memory enabled (per our
less than exhaustive logic), a UI label appears with: "You may need to
'Enable shared memory' on the 'Memory' screen". It would take more work
to wire up the 'Enable shared memory' checkbox into the addhardware
wizard, so I saved that for another day.

Thanks,
Cole



Re: virt-install: changing default --os-variant behavior

2022-02-17 Thread Cole Robinson
On 2/17/22 4:28 AM, Andrea Bolognani wrote:
> On Wed, Feb 16, 2022 at 04:27:47PM -0500, Cole Robinson wrote:
>> FYI I've changed the defaults here upstream now. For x86 HVM VMs, we now
>> error if an OS is not specified, or detected from install media
>>
>> https://github.com/virt-manager/virt-manager/commit/26ecf8a5e3e4721488159605afd10e39f68e6382
> 
> Is there a specific reason why you're explicitly limiting this to
> x86? I think it would be less confusing if this behavior applied
> consistently to all architectures, even those that (by virtue of
> being introduced more recently) have better defaults.
> 

Generally it matters less, since we hardcode virtio default for non-x86
KVM regardless of OS. Support for those arches in most distros, and in
virt-install, came far after virtio was widely available. Also x86 is
the only case we previously printed a warning for too.

But now that you mention it, we probably should just enable it to be
consistent even if it doesn't have a ton of impact. I'll look at it

Thanks,
Cole



Re: Plans for next release?

2022-02-16 Thread Cole Robinson
On 2/16/22 3:59 AM, Christian Ehrhardt wrote:
> On Tue, Jan 25, 2022 at 1:38 PM Andrea Bolognani  wrote:
>>
>> On Tue, Jan 25, 2022 at 11:21:27AM +0100, Christian Ehrhardt wrote:
>>> On Tue, Jan 25, 2022 at 10:33 AM Andrea Bolognani  
>>> wrote:
>>>> On Mon, Jan 24, 2022 at 03:53:12PM -0500, Cole Robinson wrote:
>>>>> On 1/23/22 11:47 AM, Andrea Bolognani wrote:
>>>>>> Do you think there's any chance of that happening in the next couple
>>>>>> of weeks?
>>>>>
>>>>> 2 weeks: maybe. 4 weeks: more likely. There's already some potentially
>>>>> disruptive stuff in git like feature removals and default changes, but
>>>>> there's more coming, and I'd rather they all be in one release than two
>>>>> releases in close proximity. I'm actively working on it all though
>>>>
>>>> That makes perfect sense, and having a rough timeline should make it
>>>> possible to plan the Debian work accordingly. Thanks!
>>>
>>> Thanks to both of you for taking care of this!
>>>
>>> FYI - Feb 24th is the next Ubuntu feature freeze (just the estimated 4
>>> weeks from now), if we could manage to get it done by then that would
>>> be great.
>>> Keep me in the loop if we need a feature freeze exception.
>>
>> Given your deadline, I would probably not hold out for an upstream
>> release and coordinate with the Debian developers to make sure a
>> 3.2.0-based package that can be built against the latest libvirt
>> release is available in the archive instead. You can then decide to
>> rebase if the new virt-manager release materializes in time. But
>> that's just how *I* would personally handle the situation :)
> 
> I'm not holding breath on anything :-)
> But I did the virtual version of it [1] which allows me to wait just a
> tiny bit longer.
> 
> Also I wanted to use this opportunity, and the fact that since the 2-4
> week estimate 3.5 weeks have passed by now.
> Therefore @Cole Robinson is there any update on a new release ETA
> based on the work so far?
> 
> [1]: https://bugs.launchpad.net/ubuntu/+source/virt-manager/+bug/1961027

All the defaults changes are in git now. If you wanted to push a git
snapshot to ubuntu, now is a good time. I'm doing the same for fedora 36
tomorrow. Final release will be next week after some smaller changes.

There is one big virt-install defaults change that will cause some
previous usage to error:
https://listman.redhat.com/archives/virt-tools-list/2022-February/msg00021.html

Depending on how close you are to release it might be disruptive to push
it, if virt-install is used in distro CI or something. Just something to
consider.

- Cole



Re: virt-install: changing default --os-variant behavior

2022-02-16 Thread Cole Robinson
FYI I've changed the defaults here upstream now. For x86 HVM VMs, we now
error if an OS is not specified, or detected from install media

https://github.com/virt-manager/virt-manager/commit/26ecf8a5e3e4721488159605afd10e39f68e6382

Here's the complete error message virt-install will print which kinda
covers everything:

```
ERROR
--os-variant/--osinfo OS name is required, but no value was
set or detected.

This is now a fatal error. Specifying an OS name is required
for modern, performant, and secure virtual machine defaults.

If you expected virt-install to detect an OS name from the
install media, you can set a fallback OS name with:

  --osinfo detect=on,name=OSNAME

You can see a full list of possible OS name values with:

   virt-install --osinfo list

If your Linux distro is not listed, try one of generic values
such as: linux2020, linux2018, linux2016

If you just need to get the old behavior back, you can use:

  --osinfo detect=on,require=off

Or export VIRTINSTALL_OSINFO_DISABLE_REQUIRE=1
```


A few other comments:

* I updated error messages and docs to push users towards the --osinfo
cli naming, which is a newer alias for --os-variant. --osinfo is nicer
naming since it makes it more obvious this value is tied to libosinfo.
--os-variant isn't ever being removed though

* The old behavior was --os-variant detect=on,name=generic. Adding this
to your command line is still a valid workaround. I didn't put this in
the error message though since we should be discouraging usage of
generic. linux entries are almost always the better choice here.

* The environment variable is for cases where virt-install is used in CI
jobs or deeply embedded in other tools, where it may be tough to change
the invocation. When the env variable is set we still print the big
error message.

Thanks,
Cole



On 9/20/20 4:09 PM, Cole Robinson wrote:
> Hi virt-tools-list + CCd libosinfo developers too
> 
> Use of virt-install without a detected guest OS, and without manually
> specified --os-variant, is a never ending source of bugs and user
> confusion and poorly performing VM configs. This mail is intended to
> start a discussion about how to improve that situation.
> 
> In virt-install 3.0.0 I added some extra options to --os-variant. Examples:
> 
> * --os-variant detect=on,require=on: detect from media but fail if
> detection doesn't work
> * `--os-variant detect=on,name=NAME: detect from media but fall back to
> os NAME if detection fails
> 
> The default didn't change though, which is `--os-variant
> detect=on,name=generic`, with a warning printed if detection fails.
> 'generic' OS entry is a virt-install creation that basically means 'use
> libvirts defaults', which are not performant to say the least.
> 
> (Caveat this is mostly about x86 qemu-using hypervisors. We largely
> assume the presence of virtio for all arm32/aarch64/s390x/ppc64/riscv
> KVM guests even though that's not entirely correct.)
> 
> 
> For virt-install, the two possible paths forward I see are
> 
> 1) Default to --os-variant detect=on. Detection failure is fatal. Couple
> this with a big descriptive error when it fails explaining why this
> changed, and explaining how to specify a fallback name, with some
> suggestions how to choose one. Probably wouldn't want to make this
> change until the new --os-variant options have been around for a release
> or two
> 
> 2) Default to --os-variant detect=on,name=. 'give me
> virtio' is representative of what most virt-install users want. But this
> adds some new corner cases, ex if anyone is using virt-install with
> windows up until now they could get away without specifying a
> --os-variant and things would generally work, but now if we default to
> virtio windows out of the box is not going to install. I kinda doubt
> many people are using virt-install with windows though.
> 
> 
> I think #1 is the way to go but either case would benefit from some
> coordination with libosinfo.
> 
> IMO we don't really have a good story for when users are installing a
> distro that isn't represented in libosinfo. This applies to virt-manager
> too. If users are installing 'spanking new exotic distro FOO' and
> libosinfo can't detect their install media, what OS value should we
> recommend they use? If they are installing linux there's a 99.9%
> certainty that the distro is new enough to support all the major virtio
> devices, so mostly it doesn't matter what distro they choose as long as
> it's from the past decade. But conveying that is difficult, it's not
> easily actionable, and it would feel weird to choose ubuntu* when you
> are installing slackware or something.
> 
> IMO it would be useful for osinfo-db to have some 'meta' OS entries.
> Something like linux2018, linux2020, etc. I figure we can probably peg
> them to roughly matc

Re: [PATCH virt-manager v4] virt-manager: prevent attach of inactive nodedevs

2022-02-12 Thread Cole Robinson
On 2/11/22 8:33 AM, Shalini Chellathurai Saroja wrote:
> With virt-manager application, it is possible to add inactive node
> devices(eg: mediated device) in host system to guest system. But it is
> impossible to start a guest system with inactive node devices.  Also,
> it is not yet possible to start a node device with virt-manager
> application. So, the user cannot use the inactive node devices.
> 
> This patch disables the "finish" button and provides a tip, when
> inactive node devices are selected. So, it is not possible to add
> inactive node devices to the guest system.
> 
> Signed-off-by: Shalini Chellathurai Saroja 
> Reviewed-by: Boris Fiuczynski 
> ---

Thanks, pushed now. I added some small tweaks on top, to add uitest
coverage, and to move the tooltip from the finish button to the actual
row in the treeview. The problem with the finish button is we'd need to
audit other places to make sure the tooltip was removed when switching
pages, for example.

Thanks,
Cole



Re: [virt-manager PATCH] Fix checking for graphics spice for Xen

2022-02-10 Thread Cole Robinson
On 2/9/22 4:53 PM, Charles Arnold wrote:
> On 2/9/22 11:32 AM, Charles Arnold wrote:
>> On 2/9/22 10:32 AM, Cole Robinson wrote:
>>> On 2/9/22 9:50 AM, Charles Arnold wrote:
>>>> Advertising graphics support doesn't necessarily mean spice support.
>>>> self.devices.graphics.supported seems to not be spice specific
>>>> but rather more generic in indicating whether graphics are supported.
>>>> For Xen, spice is not supported so fallback to the old logic.
>>>>
>>>> Signed-off-by: Charles Arnold 
>>>>
>>>> diff --git a/virtinst/domcapabilities.py b/virtinst/domcapabilities.py
>>>> index 67bceaa3..ad6e3363 100644
>>>> --- a/virtinst/domcapabilities.py
>>>> +++ b/virtinst/domcapabilities.py
>>>> @@ -382,7 +382,7 @@ class DomainCapabilities(XMLBuilder):
>>>>   return len(models) > 0 and bool("emulator" in backends)
>>>>
>>>>   def supports_graphics_spice(self):
>>>> -    if not self.devices.graphics.supported:
>>>> +    if not self.devices.graphics.supported or self.conn.is_xen():
>>>>   # domcaps is too old, or the driver doesn't advertise
>>>> graphics
>>>>   # support. Use our pre-existing logic
>>>>   if not self.conn.is_qemu() and not self.conn.is_test():
>>>>
>>>>
>>> Hmm but does that mean domcapabilities for xen is reporting that spice
>>> is available? If that's the case, seems like a domcapabilities bug in
>>> libvirt xen driver. Or am I missing something?
>>>
>>> Thanks,
>>> Cole
>>>
>>
>> Booted into Xen, virsh domcapabilities reports,
>>
>> <domainCapabilities>
>>   <path>/usr/bin/qemu-system-x86_64</path>
>>   <domain>xen</domain>
>>   <machine>xenpv</machine>
>>   <arch>x86_64</arch>
>>   <!-- vcpu/os/loader details stripped by the list archive -->
>>   <devices>
>>     <disk supported='yes'>
>>       <enum name='diskDevice'>
>>         <value>disk</value>
>>         <value>cdrom</value>
>>       </enum>
>>       <enum name='bus'>
>>         <value>ide</value>
>>         <value>scsi</value>
>>         <value>xen</value>
>>       </enum>
>>     </disk>
>>     <graphics supported='yes'>
>>       <enum name='type'>
>>         <value>sdl</value>
>>         <value>vnc</value>
>>         <value>spice</value>
>>       </enum>
>>     </graphics>
>>     <video supported='yes'>
>>       <enum name='modelType'>
>>         <value>vga</value>
>>         <value>cirrus</value>
>>         <value>xen</value>
>>       </enum>
>>     </video>
>>     <hostdev supported='yes'>
>>       <enum name='mode'>
>>         <value>subsystem</value>
>>       </enum>
>>       <enum name='startupPolicy'>
>>         <value>default</value>
>>         <value>mandatory</value>
>>         <value>requisite</value>
>>         <value>optional</value>
>>       </enum>
>>       <enum name='subsysType'>
>>         <value>usb</value>
>>         <value>pci</value>
>>       </enum>
>>     </hostdev>
>>     <!-- one more element whose tags were stripped by the archive;
>>          its enum listed a single value: xen -->
>>   </devices>
>>   <features>
>>     <!-- stripped by the list archive -->
>>   </features>
>> </domainCapabilities>
>> 
>>

Thanks, this could be useful to add to the test suite. Is the output
mostly the same for `virsh domcapabilities --machine xenfv` too?

> 
> The more precise error I'm trying to fix is while attempting to install
> a Xen HVM guest it reports,
> 
> details=Traceback (most recent call last):
>   File "./virtManager/asyncjob.py", line 72, in cb_wrapper
>     callback(asyncjob, *args, **kwargs)
>   File "./virtManager/createvm.py", line 2008, in _do_async_install
>     installer.start_install(guest, meter=meter)
>   File "./virtinst/install/installer.py", line 704, in start_install
>     doboot, transient)
>   File "./virtinst/install/installer.py", line 649, in _create_guest
>     domain = self.conn.createXML(install_xml or final_xml, 0)
>   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 4393, in
> createXML
>     raise libvirtError('virDomainCreateXML() failed')
> libvirt.libvirtError: unsupported configuration: cannot add redirected
> USB device: USB is disabled for this domain
> 
> So in the latest code the VM is marked as supporting spice and later
> wants to add a DeviceRedirdev. The 3.2.0 code always just failed for
> spice if 'is_qemu()' was false.

Okay, interesting. Can you provide the full --debug output? Probably our
code that adds usb2/usb3 support by default is not triggering for xen.
There might be something we need to tweak here. This could be a bug on
its own since it probably means explicit `--graphics spice` on the cli
doesn't work when it should.

Can you also try adding `--redirdev none` on the cli and see if that
fixes it, or hits another error?


But my change wasn't intended to change the default for xen from vnc to
spice. I didn't think xen supported spice so I didn't expect it to be
advertised in domcaps. Rather than hack the domcaps check, I think we
should make this explicit at the source: devices/graphics.py
_default_type function. I've pushed this bit which I think will fix your
case:

diff --git a/virtinst/devices/graphics.py b/virtinst/devices/graphics.py
index a91db289..0c3499ef 100644
--- a/virtinst/devices/graphics.py
+++ b/virtinst/devices/graphics.py
@@ -155,10 +155,16 @@ class DeviceGraphics(Device):
     def _default_type(self, guest):
         gtype = guest.default_graphics_type
         log.debug("Using default_graphics=%s", gtype)
+
+        if self.conn.is_xen():
+            # Xen domcaps can advertise spice support, but we have
+            # historically not defaulted to it for xen, so force vnc.
+            log.debug("Not defaulting to spice for xen driver. Using vnc.")
+            gtype = "vnc"
+


Thanks,
Cole



Re: [virt-manager PATCH] Fix checking for graphics spice for Xen

2022-02-09 Thread Cole Robinson
On 2/9/22 9:50 AM, Charles Arnold wrote:
> Advertising graphics support doesn't necessarily mean spice support.
> self.devices.graphics.supported seems to not be spice specific
> but rather more generic in indicating whether graphics are supported.
> For Xen, spice is not supported so fallback to the old logic.
> 
> Signed-off-by: Charles Arnold 
> 
> diff --git a/virtinst/domcapabilities.py b/virtinst/domcapabilities.py
> index 67bceaa3..ad6e3363 100644
> --- a/virtinst/domcapabilities.py
> +++ b/virtinst/domcapabilities.py
> @@ -382,7 +382,7 @@ class DomainCapabilities(XMLBuilder):
>  return len(models) > 0 and bool("emulator" in backends)
> 
>  def supports_graphics_spice(self):
> -    if not self.devices.graphics.supported:
> +    if not self.devices.graphics.supported or self.conn.is_xen():
>  # domcaps is too old, or the driver doesn't advertise graphics
>  # support. Use our pre-existing logic
>  if not self.conn.is_qemu() and not self.conn.is_test():
> 
> 

Hmm but does that mean domcapabilities for xen is reporting that spice
is available? If that's the case, seems like a domcapabilities bug in
libvirt xen driver. Or am I missing something?

Thanks,
Cole



Re: [virt-manager][PATCH] Update domain title/description on corresponding event

2022-02-09 Thread Cole Robinson
On 2/8/22 9:12 AM, Michal Privoznik wrote:
> If a domain is edited outside of virt-manager (e.g. via virsh
> edit) then this is reflected in the GUI (in the domain HW details
> tab). However, if domain title or description is updated outside
> of virt-manager (virsh desc) then this change is not reflected.
> This is simply because the corresponding event emitted by libvirt
> is not listened to.
> 
> Signed-off-by: Michal Privoznik 
> ---
>  virtManager/connection.py | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/virtManager/connection.py b/virtManager/connection.py
> index 68a37ea6..6585fef6 100644
> --- a/virtManager/connection.py
> +++ b/virtManager/connection.py
> @@ -765,6 +765,7 @@ class vmmConnection(vmmGObject):
>          _add_domain_xml_event("VIR_DOMAIN_EVENT_ID_DEVICE_ADDED", 19)
>          _add_domain_xml_event("VIR_DOMAIN_EVENT_ID_AGENT_LIFECYCLE", 18,
>                                self._domain_agent_lifecycle_event)
> +        _add_domain_xml_event("VIR_DOMAIN_EVENT_ID_METADATA_CHANGE", 23)
>  
>          try:
>              _check_events_disabled()

Thanks, pushed now

- Cole



Re: [virt-manager PATCH] man: fix default value for filesystem accessmode option

2022-02-09 Thread Cole Robinson
On 2/8/22 1:59 PM, Jonathon Jongsma wrote:
> Commit 85307b9bd2971cfc972c976c78e411f018647861 changed the default
> value for 'accessmode' from 'passthrough' to 'mapped', but forgot to
> update the documentation in the man page.
> 
> Signed-off-by: Jonathon Jongsma 
> ---
>  man/virt-install.rst | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/man/virt-install.rst b/man/virt-install.rst
> index 85070314..d70c37ca 100644
> --- a/man/virt-install.rst
> +++ b/man/virt-install.rst
> @@ -1202,7 +1202,7 @@ Some example suboptions:
>  
>  ``accessmode`` or ``mode``
>      The access mode for the source directory from the guest OS. Only used with
> -    QEMU and type=mount. Valid modes are 'passthrough' (the default), 'mapped',
> +    QEMU and type=mount. Valid modes are 'mapped' (the default), 'passthrough',
>      or 'squash'. See libvirt domain XML documentation for more info.
>  
>  ``source``

Thanks, pushed now

- Cole



FYI virt-manager 'master' branch renamed to 'main'

2022-02-03 Thread Cole Robinson
Hi all, I've changed the upstream virt-manager default branch from
'master' branch to 'main'. You can update your local repos with:

git branch -m master main
git branch --set-upstream-to=origin/main main
git pull origin --prune

Thanks,
Cole



Re: [PATCH virt-manager v3] virt-manager: prevent attach of inactive nodedevs

2022-02-03 Thread Cole Robinson
On 2/1/22 8:43 AM, Shalini Chellathurai Saroja wrote:
> With virt-manager application, it is possible to add inactive node
> devices(eg: mediated device) in host system to guest system. But it is
> impossible to start a guest system with inactive node devices.  Also,
> it is not yet possible to start a node device with virt-manager
> application. So, the user cannot use the inactive node devices.
> 
> This patch disables the "finish" button and provides a tip, when
> inactive node devices are selected. So, it is not possible to add
> inactive node devices to the guest system.
> 
> Signed-off-by: Shalini Chellathurai Saroja 
> Reviewed-by: Boris Fiuczynski 
> ---
>  virtManager/addhardware.py | 42 +++---
>  1 file changed, 39 insertions(+), 3 deletions(-)
> 
> diff --git a/virtManager/addhardware.py b/virtManager/addhardware.py
> index 132ba4e0..60b58741 100644
> --- a/virtManager/addhardware.py
> +++ b/virtManager/addhardware.py
> @@ -754,7 +754,7 @@ class vmmAddHardware(vmmGObjectUI):
>      def _build_hostdev_treeview(self):
>          host_dev = self.widget("host-device")
>          # [ xmlobj, label]
> -        host_dev_model = Gtk.ListStore(object, str)
> +        host_dev_model = Gtk.ListStore(object, str, bool)
>          host_dev.set_model(host_dev_model)
>          host_col = Gtk.TreeViewColumn()
>          text = Gtk.CellRendererText()

Add this here to make the text visibly insensitive:

diff --git a/virtManager/addhardware.py b/virtManager/addhardware.py
index 60b58741..5e52c1ea 100644
--- a/virtManager/addhardware.py
+++ b/virtManager/addhardware.py
@@ -760,6 +760,7 @@ class vmmAddHardware(vmmGObjectUI):
         text = Gtk.CellRendererText()
         host_col.pack_start(text, True)
         host_col.add_attribute(text, 'text', 1)
+        host_col.add_attribute(text, 'sensitive', 2)
         host_dev_model.set_sort_column_id(1, Gtk.SortType.ASCENDING)
         host_dev.append_column(host_col)


> @@ -763,6 +763,28 @@ class vmmAddHardware(vmmGObjectUI):
>          host_dev_model.set_sort_column_id(1, Gtk.SortType.ASCENDING)
>          host_dev.append_column(host_col)
>  
> +
> +    def disable_finish_if_inactive(self, selection):

Please name this _hostdev_row_selected_cb or something describing the
context it is called from.

> +        model, row = selection.get_selected()
> +

the 'row' here is a 'treeiter'

> +        if row is None:
> +            return
> +
> +        hostdev = model[row]

calling this variable 'hostdev' is confusing. This should be called
'row' IMO


> +        if hostdev[1] is None:
> +            self.widget("create-finish").set_sensitive(False)
> +            self.widget("create-finish").set_tooltip_text()
> +        elif hostdev[2]:
> +            self.widget("create-finish").set_sensitive(True)
> +            self.widget("create-finish").set_tooltip_text()
> +        else:
> +            tooltip = (_("%s is not active in the host system.\n"
> +                "Please start the mdev in the host system before "
> +                "adding it to the guest.") % hostdev[1])
> +            self.widget("create-finish").set_sensitive(False)
> +            self.widget("create-finish").set_tooltip_text(tooltip)
> +
> +

I prefer if there's only a single set_sensitive and set_tooltip_text
call. The state is basically

tooltip = None
sensitive = row[2]
if row[1] and sensitive is False:
    tooltip = ...
set_tooltip_text(tooltip)
set_sensitive(sensitive)

> +
>  def _populate_hostdev_model(self, devtype):
>  devlist = self.widget("host-device")
>  model = devlist.get_model()
> @@ -790,12 +812,26 @@ class vmmAddHardware(vmmGObjectUI):
>  prettyname = "%s %s" % (
>  parentdev.pretty_name(), prettyname)
>  
> -model.append([dev.xmlobj, prettyname])
> +model.append([dev.xmlobj, prettyname, dev.is_active()])
>  
>  if len(model) == 0:
> -model.append([None, _("No Devices Available")])
> +model.append([None, _("No Devices Available"), False])
> +
>  uiutil.set_list_selection_by_number(devlist, 0)
>  
> +# enable/disable finish button for default selection
> +if model[0][2]:
> +self.widget("create-finish").set_sensitive(True)
> +self.widget("create-finish").set_tooltip_text()
> +else:
> +tooltip = (_("%s is not active in the host system.\n"
> +"Please start the mdev in the host system before adding it to 
> the guest.")
> +% model[0][1])
> +self.widget("create-finish").set_sensitive(False)
> +self.widget("create-finish").set_tooltip_text(tooltip)
> +
> +devlist.get_selection().connect("changed", 
> self.disable_finish_if_inactive)
> +

Please break this long line here.

Duplication like that is a sign that something is wrong :)
You can trigger the callback by doing:

devlist.get_selection().emit("changed")
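
Putting those review comments together, the callback could collapse to
something like this (untested sketch, not the final patch):

def _hostdev_row_selected_cb(self, selection):
    model, treeiter = selection.get_selected()
    if treeiter is None:
        return

    row = model[treeiter]
    tooltip = None
    sensitive = row[2]
    if row[1] and not sensitive:
        tooltip = (_("%s is not active in the host system.\n"
            "Please start the mdev in the host system "
            "before adding it to the guest.") % row[1])
    # A single set_tooltip_text/set_sensitive call handles all three cases
    self.widget("create-finish").set_tooltip_text(tooltip)
    self.widget("create-finish").set_sensitive(sensitive)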


Re: [PATCH] virt-manager: Add firmware preferences for creating a new VM

2022-01-31 Thread Cole Robinson
On 1/26/22 5:44 PM, Charles Arnold wrote:
> On 1/26/22 12:48 PM, Cole Robinson wrote:
>> On 10/25/21 7:00 PM, Charles Arnold wrote:
>>>  From d30b50139e60afc6e5da1e38f130b27859d31a83 Mon Sep 17 00:00:00 2001
>>> From: Charles Arnold 
>>> Date: Mon, 25 Oct 2021 16:16:06 -0600
>>> Subject: virt-manager: Add firmware preferences for creating a new VM
>>>
>>> Possible values are BIOS (default) and UEFI.
>>> The firmware used is determined by libvirt unless a specific firmware is
>>> selected from the Customize dialog.
>>>
>>> See https://bugzilla.redhat.com/show_bug.cgi?id=1997882
>>>
>> Thanks, this seems mostly okay, but I think we need to make this x86
>> only. Other arches either have no UEFI support, or from virt-manager's
>> perspective _only_ have UEFI support (aarch64 + arm32 machvirt). So the
>> internal field and the UI should make it clear this is x86.
>>
>> The preference UI fields should just be 'System default' and "UEFI".
>> BIOS value isn't actually handled anywhere in create.py, and we don't
>> want to provide that option anyways IMO: Eventually when win11 is
>> supported in osinfo-db, the only option will be UEFI, which means we
>> would have to error if bios is requested. If we ever start choosing to
>> set UEFI by default for OS that support both BIOS and UEFI then we can
>> consider adding a BIOS option here.
>>
>> In some prep patches I changed virtinst/guest.py to use
>> os.firmware='efi' if libvirt is new enough, when UEFI is requested. The
>> attached diff on top of your patch will use that infrastructure, and
>> give some upfront error reporting if for example it appears uefi is not
>> available.
>>
>> If you're cool with those changes I can make them and push, or you can
>> send a v2
> 
> They look good to me. Feel free to push them.
> 

Thanks, pushed now

- Cole



Re: [PATCH] virt-manager: Add firmware preferences for creating a new VM

2022-01-26 Thread Cole Robinson
On 10/25/21 7:00 PM, Charles Arnold wrote:
> From d30b50139e60afc6e5da1e38f130b27859d31a83 Mon Sep 17 00:00:00 2001
> From: Charles Arnold 
> Date: Mon, 25 Oct 2021 16:16:06 -0600
> Subject: virt-manager: Add firmware preferences for creating a new VM
> 
> Possible values are BIOS (default) and UEFI.
> The firmware used is determined by libvirt unless a specific firmware is
> selected from the Customize dialog.
> 
> See https://bugzilla.redhat.com/show_bug.cgi?id=1997882
> 

Thanks, this seems mostly okay, but I think we need to make this x86
only. Other arches either have no UEFI support, or from virt-manager's
perspective _only_ have UEFI support (aarch64 + arm32 machvirt). So the
internal field and the UI should make it clear this is x86.

The preference UI fields should just be 'System default' and "UEFI".
BIOS value isn't actually handled anywhere in create.py, and we don't
want to provide that option anyways IMO: Eventually when win11 is
supported in osinfo-db, the only option will be UEFI, which means we
would have to error if bios is requested. If we ever start choosing to
set UEFI by default for OS that support both BIOS and UEFI then we can
consider adding a BIOS option here.

In some prep patches I changed virtinst/guest.py to use
os.firmware='efi' if libvirt is new enough, when UEFI is requested. The
attached diff on top of your patch will use that infrastructure, and
give some upfront error reporting if for example it appears uefi is not
available.

If you're cool with those changes I can make them and push, or you can
send a v2

Thanks
Cole

commit 5d705e47e78bc92b2a1f5e8b5d1e5634417d5b93
Author: Cole Robinson 
Date:   Wed Jan 26 14:44:18 2022 -0500

colediff

Signed-off-by: Cole Robinson 

diff --git a/virtManager/createvm.py b/virtManager/createvm.py
index be947435..52bb6b2c 100644
--- a/virtManager/createvm.py
+++ b/virtManager/createvm.py
@@ -106,7 +106,6 @@ class _GuestData:
         self.os_variant = None
         self.uefi_requested = None
         self.name = None
-        self.firmware = None
 
         self.vcpus = None
         self.memory = None
@@ -161,8 +160,6 @@ class _GuestData:
         guest.currentMemory = self.currentMemory
         if self.memory:
             guest.memory = self.memory
-        if self.firmware == "uefi":
-            guest.os.firmware = "efi"
 
         return guest
 
@@ -481,7 +478,11 @@ class vmmCreateVM(vmmGObjectUI):
                 guest.os.is_ppc64() or
                 guest.os.is_s390x())
 
-        if guest.prefers_uefi():
+        default_efi = self.config.get_default_firmware_setting() == "uefi"
+        if default_efi:
+            log.debug("UEFI default requested via app preferences")
+
+        if guest.prefers_uefi() or default_efi:
             try:
                 # We call this for validation
                 guest.enable_uefi()
@@ -1428,7 +1429,6 @@ class vmmCreateVM(vmmGObjectUI):
 
         gdata.default_graphics_type = self.config.get_graphics_type()
         gdata.x86_cpu_default = self.config.get_default_cpu_setting()
-        gdata.firmware = self.config.get_default_firmware_setting()
 
         return gdata
 


Re: [virt-tools-list] [virt-manager] [PATCH 0/7] Add a checkbox in memory details to control memory access mode

2022-01-25 Thread Cole Robinson
On 8/1/21 8:36 AM, Lin Ma wrote:
> Future patches about virtiofs addhw support relies on shared memory access.
> 
> Lin Ma (7):
>   domcapabilities: Get filesystem devices
>   domcapabilities: Add supports_filesystem_virtiofs()
>   domcapabilities: Add supports_memorybacking_memfd()
>   domain: memorybacking: Add function is_shared_access()
>   domain: cpu: Add function has_private_memAccess_cells()
>   domain: cpu: Add function all_shared_memAccess_cells()
>   details: Add new checkbox to control shared memory access
> 
>  ui/details.ui| 14 +
>  virtManager/details/details.py   | 52 
>  virtManager/object/domain.py | 17 ++-
>  virtinst/domain/cpu.py   | 14 +
>  virtinst/domain/memorybacking.py |  3 ++
>  virtinst/domcapabilities.py  | 20 
>  6 files changed, 119 insertions(+), 1 deletion(-)
> 

Sorry for the reaaally long delay. This came in right when I went on
paternity leave and I'm still digging out of the backlog.

I've pushed this series now, but with some bits removed. I stripped the
logic back to only handle enabling+disabling source_type=memfd and
access_mode=shared. If the guest has numa cells configured, we leave the
checkbox unchecked and disable it with a warning. Same thing if the
domcaps don't advertise both virtiofs and memfd.

I think that logic covers what most users will need, going forward. If a
user has  in their XML, I think they can be expected to make the
shared memory change themselves. It makes me a little nervous getting
into the business of processing  XML that virt-manager otherwise
doesn't have any UI for. So my preference is to not do that.

If you have a good argument for why it should be readded, I'm open to
it. But resubmitting it should come with uitests, since this adds some
annoying corner cases that I will want to see tested. FWIW I tried to
keep the code revert to a single commit, so hopefully it's easy to
revive, if you are interested.

The other big thing I did was move the logic from details.py to
domain.py, which will make it easier to share the interesting bits with
the Add Hardware UI like I mentioned in this mail:
https://listman.redhat.com/archives/virt-tools-list/2021-July/msg00016.html

I'm working towards a release, so unless you have patches with that work
kicking around in a git branch, I'll make the addhw change in the next
few weeks.

Thanks,
Cole



Re: Plans for next release?

2022-01-24 Thread Cole Robinson
On 1/23/22 11:47 AM, Andrea Bolognani wrote:
> Hi,
> 
> virt-manager 3.2.0 has been out for more than a year now, and since
> then there's been a fair number of changes, including fixes to make
> the test suite pass against recent libvirt versions.
> 
> There's now a bug[1] filed against the Debian package highlighting
> these test suite failures, and because of it virt-manager is going to
> be removed from testing on February 8th.
> 
> It would of course be possible to backport the necessary fixes, but
> given how long it's been since the last release I think it would make
> sense to tag a new one anyway.
> 
> Do you think there's any chance of that happening in the next couple
> of weeks?
> 
> Thanks!
> 

2 weeks: maybe. 4 weeks: more likely. There's already some potentially
disruptive stuff in git like feature removals and default changes, but
there's more coming, and I'd rather they all be in one release than two
releases in close proximity. I'm actively working on it all though

Thanks,
Cole



Re: [RFC PATCH] virt-manager: console disconnected after rebooting virtual machine in Xen mode

2022-01-21 Thread Cole Robinson
On 1/20/22 5:07 PM, Charles Arnold wrote:
> 
> Yes, I can confirm that this patch resolves the problem.
> 
> Thank you!
> 

Thanks, I pushed that patch now

- Cole



Re: [RFC PATCH] virt-manager: console disconnected after rebooting virtual machine in Xen mode

2022-01-20 Thread Cole Robinson
On 10/13/21 3:30 PM, Charles Arnold wrote:
> Running with virt-manager 3.2.0, not a problem in 2.2.1.
> 
> When rebooting an HVM guest on Xen, virt-manager gets a disconnected
> event which closes the viewer. See
> self._viewer.connect("disconnected", self._viewer_disconnected).
> Running on KVM, this event does not seem to occur on VM reboot.
> 
> Both Xen and KVM get a state changed event,
> self.vm.connect("state-changed", self._vm_state_changed_cb)
> but for Xen, the viewer is gone due to the prior disconnect and when
> calling into _activate_default_console_page(), the viewer does not
> exist nor does it get created.
> The VM comes up fine but you need to close the console window and
> open it again to see the running VM.
> 
> This patch resolves the issue for me.
> 
> diff --git a/virtManager/details/console.py
> b/virtManager/details/console.py
> index fafb7202..f8a005ee 100644
> --- a/virtManager/details/console.py
> +++ b/virtManager/details/console.py
> @@ -913,6 +913,8 @@ class vmmConsolePages(vmmGObjectUI):
>              return
> 
>          cpage = self.widget("console-pages").get_current_page()
> +        if cpage == _CONSOLE_PAGE_GRAPHICS:
> +            self._init_viewer()
>          if cpage != _CONSOLE_PAGE_UNAVAILABLE:
>              return
> 
> 

Sorry for the long delayed response. Thanks for analyzing this. I think
I've seen some similar presenting issue with qemu but it's been hard to
reproduce.

The flow in console.py is confusing despite my previous cleanup
attempts. There's two error pages: _CONSOLE_PAGE_GRAPHICS with subpage
_GFX_PAGE_UNAVAILABLE, and _CONSOLE_PAGE_UNAVAILABLE

The former is meant to be for fatal connection errors, which should
block simple attempts to reconnect. So the error is preserved on screen
unless the dialog is closed+reopened. Otherwise an error may come in but
quickly get wiped out by a VM XML change event, which would attempt to
reconnect to the VM, possibly asking for auth, etc.

_CONSOLE_PAGE_UNAVAILABLE is lighter weight. This is the 'guest not
running' screen, and a few others. If the VM is running and a state
change event comes in, and we are on this page, we attempt to connect to
the viewer (like when you click 'run' on an open details window).

Unfortunately your patch makes the two cases act identical, which we
don't want for the current code IMO

The issue in this case is that viewer-disconnected is overloaded, since
it can trigger for error conditions (like ssh tunnel problems), or for
normal disconnects like this reboot case. The former should block
console reconnection, the latter shouldn't since we have no useful error
feedback to report.

Can you try this patch and see if it fixes your case?

diff --git a/virtManager/details/console.py b/virtManager/details/console.py
index fafb7202..9b1e5db5 100644
--- a/virtManager/details/console.py
+++ b/virtManager/details/console.py
@@ -814,14 +814,23 @@ class vmmConsolePages(vmmGObjectUI):
             return
 
         msg = _("Viewer was disconnected.")
+        errmsg = ""
         if errdetails:
-            msg += "\n" + errdetails
+            errmsg += "\n" + errdetails
         if ssherr:
             log.debug("SSH tunnel error output: %s", ssherr)
-            msg += "\n\n"
-            msg += _("SSH tunnel error output: %s") % ssherr
+            errmsg += "\n\n"
+            errmsg += _("SSH tunnel error output: %s") % ssherr
 
-        self._activate_gfx_unavailable_page(msg)
+        if errmsg:
+            self._activate_gfx_unavailable_page(msg + errmsg)
+            return
+
+        # If no error message was reported, this isn't a clear graphics
+        # error that should block reconnecting. So use the top level
+        # 'VM unavailable' page which makes it easier for the user to
+        # reconnect.
+        self._activate_vm_unavailable_page(msg)
 
     def _viewer_disconnected(self, ignore, errdetails, ssherr):
         self._activate_gfx_unavailable_page(_("Viewer disconnected."))



Thanks,
Cole



Re: [PATCH virt-manager v2 2/2] virtManager: change MDEV display names

2022-01-19 Thread Cole Robinson
On 11/12/21 5:51 AM, Shalini Chellathurai Saroja wrote:
> The MDEV devices listed in the "Add New Virtual Hardware" page, are a
> concatenation of parent device name and MDEV device name, eg:
> css_0_0_0014 mdev_b204c698_6731_4f25_b5f4_894614a05ec0_0_0_0014. The
> parent name is duplicated in here, as the MDEV device name itself includes
> a part of the parent name in libvirt version 7.8.0 and later. So, this patch
> changes the MDEVs listed in "Add New Virtual Hardware" page to only display
> the MDEV device
> name(eg:mdev_b204c698_6731_4f25_b5f4_894614a05ec0_0_0_0014), when the
> new naming convention is used.
> 
> Signed-off-by: Shalini Chellathurai Saroja 
> ---
>  virtManager/addhardware.py | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/virtManager/addhardware.py b/virtManager/addhardware.py
> index 13b899c3..132ba4e0 100644
> --- a/virtManager/addhardware.py
> +++ b/virtManager/addhardware.py
> @@ -782,7 +782,9 @@ class vmmAddHardware(vmmGObjectUI):
>  if dev.xmlobj.name == subdev.xmlobj.parent:
>  prettyname += " (%s)" % subdev.pretty_name()
>  
> -if devtype == "mdev":
> +# parent device names are appended with mdev names in
> +# libvirt 7.8.0
> +if devtype == "mdev" and len(prettyname) <= 41:
>  for parentdev in self.conn.list_nodedevs():
>  if dev.xmlobj.parent == parentdev.xmlobj.name:
>  prettyname = "%s %s" % (

Sorry for the delay! I've pushed this now

- Cole



Re: [RFC virt-manager] virtmanager: list active node devices only

2022-01-17 Thread Cole Robinson
On 12/22/21 8:40 AM, Shalini Chellathurai Saroja wrote:
> With virt-manager application, it is possible to add inactive node
> devices(eg: mediated devices) in host system to guest system. But it
> is impossible to start a guest system with inactive node devices.
> Also, it is not yet possible to start a node device with virt-manager
> application. So, the user cannot use the inactive node device.
> 
> This patch filters out all inactive node devices and lists only active
> node devices in virt-manager application, so that it is only possible
> to add active node devices to the guest system.
> 

I agree with the idea to consider nodedev active state in the UI. The
problem with this implementation is that now node device listing will
fail on any libvirt older than 1.5 years since they won't support this
polling flag.

I think we should use the (even newer) virNodeDeviceIsActive support for
this so we don't need to touch the polling path. This is how we track
active vs inactive status for other objects like domain, network, storage.

I just pushed a commit that will cache isActive if it is available,
otherwise just assume the devices are active. Now in virt-manager code
can check nodedev.is_active() to get the state.
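
In code terms the caching is roughly this (sketch against the libvirt
python bindings; the fallback matters because virNodeDeviceIsActive only
exists in newer libvirt):

import libvirt

conn = libvirt.open("qemu:///system")
for nodedev in conn.listAllDevices():
    try:
        # virNodeDeviceIsActive, only available in newer libvirt
        active = bool(nodedev.isActive())
    except (AttributeError, libvirt.libvirtError):
        # Older libvirt: assume active rather than breaking the listing
        active = True
    print(nodedev.name(), "active" if active else "inactive")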

Rather than just hiding inactive devices, I'd rather we list them in the
UI but set the tree row as insensitive, with a tooltip explaining the
device is inactive, and disable the 'Finish' button. Sound OK?

Thanks,
Cole



Re: [virt-manager PATCH v2] cli: add 'ioapic.driver' to --features

2022-01-11 Thread Cole Robinson
On 1/11/22 12:21 PM, Jonathon Jongsma wrote:
> Add the ability to set the ioapic driver using the --features argument:
> 
> $ virt-install --features ioapic.driver=qemu ...
> 
> This results in the following xml:
> 
>   <features>
>     ...
>     <ioapic driver='qemu'/>
>   </features>
> 
> This is required in order to install a guest with >255 cpus. Such a
> configuration requires an iommu with extended interrupt mode enabled,
> which in turn requires IOMMU interrupt remapping to be enabled, which in
> turn requires a split I/O APIC.
> 
> Signed-off-by: Jonathon Jongsma 
> ---
> 
> Not sure how I missed the cli test.
> 
> I wasn't quite sure where to add the option, but just added it to an existing
> test that was already excercising most of the features. Hopefully that's
> sufficient? It silences the test failure.
> 

Yup, that's typically what I do too

Thanks, pushed now

- Cole



Re: [virt-manager PATCH] cli: add 'ioapic.driver' to --features

2022-01-11 Thread Cole Robinson
On 1/7/22 3:33 PM, Jonathon Jongsma wrote:
> Add the ability to set the ioapic driver using the --features argument:
> 
> $ virt-install --features ioapic.driver=qemu ...
> 
> This results in the following xml:
> 
>   <features>
>     ...
>     <ioapic driver='qemu'/>
>   </features>
> 
> This is required in order to install a guest with >255 cpus. Such a
> configuration requires an iommu with extended interrupt mode enabled,
> which in turn requires IOMMU interrupt remapping to be enabled, which in
> turn requires a split I/O APIC.
> 

Code looks fine, but this needs a test_cli.py addition:

...
tests/test_checkprops.py .F
 [100%]

= FAILURES
=
__ testCheckCLISuboptions
__
Traceback (most recent call last):
  File "/home/crobinso/src/virt-manager/tests/test_checkprops.py", line
63, in testCheckCLISuboptions
pytest.fail(msg)
  File "/usr/lib/python3.10/site-packages/_pytest/outcomes.py", line
153, in fail
raise Failed(msg=msg, pytrace=pytrace)
Failed:

--features ioapic.driver


Thanks,
Cole



Re: [virt-tools-list] [virt-manager 0/8] filesystem: Add support for virtiofs

2021-07-30 Thread Cole Robinson
On 7/29/21 6:37 AM, lma wrote:
> On 2021-07-29 02:46, Cole Robinson wrote:
>> On 7/15/21 9:17 AM, Michal Prívozník wrote:
>>> On 7/7/21 3:31 PM, Cole Robinson wrote:
>>>> On 7/7/21 4:50 AM, Michal Prívozník wrote:
>>>>> On 7/5/21 5:20 AM, lma wrote:
>>>>>> On 2021-07-01 08:39, Michal Privoznik wrote:
>>>>>>> On 7/1/21 12:52 AM, Cole Robinson wrote:
>>>>>>>> (ccing mprivozn with a domaincapabilities design question below)
>>>
>>> 
>>>
>>>>
>>>> IMO just source type= is enough.
>>>>
>>>
>>> FYI, I just merged patches that implement this in libvirt:
>>>
>>> 133d05a15e qemu: capabilities: fill in domcaps <memoryBacking/>
>>> e27e22585a conf: domcaps: Report <memoryBacking/>
>>>
>>> # virsh domcapabilities | xmllint --xpath
>>> /domainCapabilities/memoryBacking -
>>> <memoryBacking supported='yes'>
>>>   <enum name='sourceType'>
>>>     <value>file</value>
>>>     <value>anonymous</value>
>>>     <value>memfd</value>
>>>   </enum>
>>> </memoryBacking>
>>>
>>>
>>
>> Thanks Michal!
>>
>> We had some internal discussion about using memfd by default, but
>> there's some caveats: memfd implies shared memory, and shared memory
>> does things like prevent KSM from working, add some hiccups to memory
>> ballooning, and causes VM to appear to own all its memory after
>> save/restore. So I'm not sure we should set this as the default any time
>> soon.
>>
>> So this puts us back at the beginning. Do we make addhardware try to
>> transparently set up shared memory when virtiofs is requested? That
>> still makes me leery TBH.
> 
> IMO transparently setting up shared memory in this case is not a good idea.
> 
>> I'm thinking we should add a UI checkbox in the memory screen for
>> enabling/disabling shared memory. This way the logic is testable for an
>> existing VM outside of the addhardware wizard. If our logic is incorrect
>> and screws up their VM like making it unbootable, there's an explicit
>> way to back out that change. We should prefer memfd over file shm memory
>> if memfd is available in domcaps.
>>
>> Then when adding virtiofs device in addhardware, we try to make the
>> default behavior work, but let the user opt out. So in the standard
>> case, where VM has no shared memory config, when selecting virtio-fs, we
>> show a new checkbox 'Enable shared memory for the whole VM' with an info
>> icon + tooltip explaining this is necessary for virtio-fs. We still show
>> it even when the host already has shared memory configured, but disable
>> the checkbox and add a tooltip saying 'VM is already configured with
>> shared memory'. I think that should be enough to make this safe to
>> maintain, and more informative to the user about what is happening
> 
> Based on the above valuable information, What I intend to do are:
> - If virtiofs is available in domcaps: Add a UI checkbox in the memory
> screen
>   of details for enabling/disabling shared memory, meanwhile set 'memfd' as
>   backend if memfd is available in domcaps, otherwise set 'file' as
> backend.
> 

Sounds good

> - This UI checkbox will be invisible if virtiofs is unavailable in domcaps.
> 

I prefer not to hide things, that can confuse users wondering why they
see the UI on one setup but not the other. So for this case, show the
checkbox, but disable the entire thing and add a tooltip explaining
'Libvirt may not be new enough to support shared memory'

> - This UI checkbox will be disabled in the memory screen if existing
> virtio-fs
>   device(s) are detected in VM config.

IMO we don't need to go that far. If the user disables the checkbox,
their VM won't start on next boot, which IMO is enough to inform users
'don't do that'. And it's easy enough for them to undo that change. This
will save us some code and reduce the test matrix.

> 
> - Don't add such UI checkbox in new VM wizard.
> 

Agreed.

> - Add a checkbox "Enable shared memory for the whole VM" with an info
> icon +
>   tooltip when selecting virtio-fs in the addhardware wizard or in the
> details.
> 

Sounds good

> - Still show this checkbox even when the VM already has shared memory
> configured,
>   but disable the checkbox and add a tooltip saying 'VM is already
> configured with shared memory'
> 

Yes I think so, same principle I mentioned above for not hiding the
checkbox.

> BTW, About the "Are there qemu versions that support virtiofs but aren't
> new
> enough for libvirt's defaultRAMid determination?":
> We know that qemu exposes default_ram_id through 'query-machines' since
> v5.2.
> Libvirt uses memory-backend-* for reg

Re: [virt-tools-list] [virt-manager 0/8] filesystem: Add support for virtiofs

2021-07-28 Thread Cole Robinson
On 7/15/21 9:17 AM, Michal Prívozník wrote:
> On 7/7/21 3:31 PM, Cole Robinson wrote:
>> On 7/7/21 4:50 AM, Michal Prívozník wrote:
>>> On 7/5/21 5:20 AM, lma wrote:
>>>> On 2021-07-01 08:39, Michal Privoznik wrote:
>>>>> On 7/1/21 12:52 AM, Cole Robinson wrote:
>>>>>> (ccing mprivozn with a domaincapabilities design question below)
> 
> 
> 
>>
>> IMO just source type= is enough.
>>
> 
> FYI, I just merged patches that implement this in libvirt:
> 
> 133d05a15e qemu: capabilities: fill in domcaps <memoryBacking/>
> e27e22585a conf: domcaps: Report <memoryBacking/>
> 
> # virsh domcapabilities | xmllint --xpath
> /domainCapabilities/memoryBacking -
> <memoryBacking supported='yes'>
>   <enum name='sourceType'>
>     <value>file</value>
>     <value>anonymous</value>
>     <value>memfd</value>
>   </enum>
> </memoryBacking>
> 
> 

Thanks Michal!

We had some internal discussion about using memfd by default, but
there's some caveats: memfd implies shared memory, and shared memory
does things like prevent KSM from working, add some hiccups to memory
ballooning, and causes VM to appear to own all its memory after
save/restore. So I'm not sure we should set this as the default any time
soon.

So this puts us back at the beginning. Do we make addhardware try to
transparently set up shared memory when virtiofs is requested? That
still makes me leery TBH.

I'm thinking we should add a UI checkbox in the memory screen for
enabling/disabling shared memory. This way the logic is testable for an
existing VM outside of the addhardware wizard. If our logic is incorrect
and screws up their VM like making it unbootable, there's an explicit
way to back out that change. We should prefer memfd over file shm memory
if memfd is available in domcaps.
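
Concretely, checking that box would boil down to writing something like
this into the domain XML (sketch, memfd variant):

  <memoryBacking>
    <source type='memfd'/>
    <access mode='shared'/>
  </memoryBacking>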

Then when adding virtiofs device in addhardware, we try to make the
default behavior work, but let the user opt out. So in the standard
case, where VM has no shared memory config, when selecting virtio-fs, we
show a new checkbox 'Enable shared memory for the whole VM' with an info
icon + tooltip explaining this is necessary for virtio-fs. We still show
it even when the host already has shared memory configured, but disable
the checkbox and add a tooltip saying 'VM is already configured with
shared memory'. I think that should be enough to make this safe to
maintain, and more informative to the user about what is happening

Thanks,
Cole



Re: [virt-tools-list] [virt-manager 0/8] filesystem: Add support for virtiofs

2021-07-07 Thread Cole Robinson
On 7/7/21 4:50 AM, Michal Prívozník wrote:
> On 7/5/21 5:20 AM, lma wrote:
>> On 2021-07-01 08:39, Michal Privoznik wrote:
>>> On 7/1/21 12:52 AM, Cole Robinson wrote:
>>>> (ccing mprivozn with a domaincapabilities design question below)
>>>>
>>>> On 6/30/21 8:10 AM, Lin Ma wrote:
>>>>> So far, virt-manager only supports virtio-9p. The patchset adds
>>>>> virtiofs, which offers better performance.
>>>>>
>>>>> We know that the virtiofs needs 'shared' access mode of memory backing
>>>>> or 'shared' access mode of virtual numa node, But virt-manager doesn't
>>>>> provide UI to configure memory backing or virtual numa node because
>>>>> they
>>>>> are advanced features and can be configured by raw XML editor.
>>>>>
>>>>> This patchset introduces basic virtiofs support and offers an easier
>>>>> way
>>>>> to configure virtiofs by adjusting access mode to 'shared' if
>>>>> necessary.
>>>>>
>>>>> I don't intend to introduce memory backing UI or numa UI. That means I
>>>>> need to modify the access mode attribute, which belongs to memorybacking
>>>>> or numa, in the filesystem code. This perhaps does not look good; any
>>>>> comments are appreciated.
>>>>>
>>>>
>>>> Thanks for the patches. Regarding virtio-fs I've recorded my thoughts in
>>>> this issue: https://github.com/virt-manager/virt-manager/issues/127
>>>>
>>>> Basically I don't want to add this to virt-manager until we can make it
>>>> closer to 'just work' without pitfalls. IMO that means adjusting libvirt
>>>> to report via domcapabilities when it is safe and supported to
>>>> unconditionally specify shared memory, without hugepages or numa config.
>>>> Then we set that by default for new VMs, and _maybe_ do something like
>>>> what your patches do (set it automatically when user requests virtiofs
>>>> via addhw).
>>>>
>>>> Until that's done, it's a pain in the ass to try and figure out, outside
>>>> of libvirt, whether the domain XML has suitable setup to make virtio-fs
>>>> work, and what is the simplest memory XML adjustment to make virtiofs
>>>> work. We basically have to reimplement the libvirt
>>>> qemuValidateDomainDefVhostUserRequireSharedMemory function from here
>>>> https://gitlab.com/libvirt/libvirt/-/blob/master/src/qemu/qemu_validate.c#L1427
>>>>
>>>>
>>>> Your code attempts to implement the numa_nodes check, but it doesn't
>>>> account for the defaultRAMID bit.
>>>
>>> Right. IIRC the shared memory is needed for DAX. I wonder if there's a
>>> way to turn off DAX in virtiofsd. Then the <filesystem> could be added
>>> just like any other device.
>>
>> Because vhost-user needs shared memory, virtio-fs can't be enabled in
>> isolation.
>>
>>>>
>>>> The specific <enum> would be:
>>>>
>>>> <domainCapabilities>
>>>>   <memoryBacking supported='yes'>
>>>>     <enum name='sourceType'>
>>>>       <value>file</value>
>>>>       <value>memfd</value>
>>>>       ...
>>>>
>>>
>>> Yes, this looks sane and could be valuable for other use cases too.
>>>
>>>> The 7.1.0 check, when access mode=shared can be used without numa or
>>>> hugepages, we probably need some arbitrary boolean to report. It could
>>>> be a new element under <memoryBacking>, or maybe something under
>>>> <features>. There isn't a clear precedent for exposing something like
>>>> this in the XML. CCing mprivozn, any suggestions?
>>>
>>> I think we can rely on <filesystem> from domcaps AND newly added
>>> <memoryBacking> as described above. Yes, this will leave behind some
>>> versions where virtiofs would work and yet virt-manager won't be able to
>>> configure it, but I think that's acceptable.
>>>
>>>>
>>>> Lin if you get those into libvirt I will be happy to help you land
>>>> virtio-fs support in virt-manager, writing code coverage tests etc.
>>>
>>> And I can help with domcaps, let me know if you want to post patches
>>> yourself or whether I should do it.
>>
>> You know much about domcaps, I lean towards to you, So, please.
> 
> Will do. But just to double check: we need domcaps to report:
> 
>   <memoryBacking supported='yes'>
>     <enum name='sourceType'/>
>   </memoryBacking>
> 
> Do we need something else too? I mean, for virtiofs we will need to set
> <access mode='shared'/> in the domain XML, but that doesn't seem to be
> conditional. I mean, nothing in libvirt checks whether qemu/kernel
> supports "shared" or not.
> 

IMO just source type= is enough.

Thanks,
Cole



Re: [virt-tools-list] [virt-manager 0/8] filesystem: Add support for virtiofs

2021-07-01 Thread Cole Robinson
On 7/1/21 4:39 AM, Michal Prívozník wrote:
> On 7/1/21 12:52 AM, Cole Robinson wrote:
>> (ccing mprivozn with a domaincapabilities design question below)
>>
>> On 6/30/21 8:10 AM, Lin Ma wrote:
>>> So far, virt-manager only supports virtio-9p. The patchset adds virtiofs,
>>> which offers better performance.
>>>
>>> We know that the virtiofs needs 'shared' access mode of memory backing
>>> or 'shared' access mode of virtual numa node, But virt-manager doesn't
>>> provide UI to configure memory backing or virtual numa node because they
>>> are advanced features and can be configured by raw XML editor.
>>>
>>> This patchset introduces basic virtiofs support and offers an easier way
>>> to configure virtiofs by adjusting access mode to 'shared' if necessary.
>>>
>>> I don't intend to introduce memory backing UI or numa UI. That means I
>>> need to modify the access mode attribute, which belongs to memorybacking
>>> or numa, in the filesystem code. This perhaps does not look good; any
>>> comments are appreciated.
>>>
>>
>> Thanks for the patches. Regarding virtio-fs I've recorded my thoughts in
>> this issue: https://github.com/virt-manager/virt-manager/issues/127
>>
>> Basically I don't want to add this to virt-manager until we can make it
>> closer to 'just work' without pitfalls. IMO that means adjusting libvirt
>> to report via domcapabilities when it is safe and supported to
>> unconditionally specify shared memory, without hugepages or numa config.
>> Then we set that by default for new VMs, and _maybe_ do something like
>> what your patches do (set it automatically when user requests virtiofs
>> via addhw).
>>
>> Until that's done, it's a pain in the ass to try and figure out, outside
>> of libvirt, whether the domain XML has suitable setup to make virtio-fs
>> work, and what is the simplest memory XML adjustment to make virtiofs
>> work. We basically have to reimplement the libvirt
>> qemuValidateDomainDefVhostUserRequireSharedMemory function from here
>> https://gitlab.com/libvirt/libvirt/-/blob/master/src/qemu/qemu_validate.c#L1427
>>
>> Your code attempts to implement the numa_nodes check, but it doesn't
>> account for the defaultRAMID bit.
> 
> Right. IIRC the shared memory is needed for DAX. I wonder if there's a
> way to turn off DAX in virtiofsd. Then the <filesystem> could be added
> just like any other device.
> 

IIRC it's a generic vhost-user requirement. Libvirt enforces shared
memory for vhost-user-blk as well. And this blog post suggests it's a
requirement for the older vhostuser net support too, though libvirt
doesn't seem to enforce it:
https://www.redhat.com/en/blog/hands-vhost-user-warm-welcome-dpdk

>>
>> The specific <enum> would be:
>>
>> <domainCapabilities>
>>   <memoryBacking supported='yes'>
>>     <enum name='sourceType'>
>>       <value>file</value>
>>       <value>memfd</value>
>>       ...
>>
> 
> Yes, this looks sane and could be valuable for other use cases too.
> 
>> The 7.1.0 check, when access mode=shared can be used without numa or
>> hugepages, we probably need some arbitrary boolean to report. It could be
>> a new element under <memoryBacking>, or maybe something under <features>.
>> There isn't a clear precedent for exposing something like this in the XML.
>> CCing mprivozn, any suggestions?
> 
> I think we can rely on <filesystem> from domcaps AND newly added
> <memoryBacking> as described above. Yes, this will leave behind some
> versions where virtiofs would work and yet virt-manager won't be able to
> configure it, but I think that's acceptable.
> 

Good point, we can use domcaps virtiofs reporting as a proxy for
'libvirt is new enough to allow bare access mode=shared'. Is it
sufficient for qemu support? Are there qemu versions that support
virtiofs but aren't new enough for libvirt's defaultRAMid determination?
If we want to set this by default for new VMs I want to be as safe as
possible here.

If we want to start setting shared memory, possibly memfd, by default
though we will still want the memoryBacking sourceType reporting as well.

Thanks,
Cole



Re: [virt-tools-list] [virt-manager 0/8] filesystem: Add support for virtiofs

2021-06-30 Thread Cole Robinson
(ccing mprivozn with a domaincapabilities design question below)

On 6/30/21 8:10 AM, Lin Ma wrote:
> So far, virt-manager only supports virtio-9p. The patchset adds virtiofs,
> which offers better performance.
> 
> We know that the virtiofs needs 'shared' access mode of memory backing
> or 'shared' access mode of virtual numa node, But virt-manager doesn't
> provide UI to configure memory backing or virtual numa node because they
> are advanced features and can be configured by raw XML editor.
> 
> This patchset introduces basic virtiofs support and offers an easier way
> to configure virtiofs by adjusting access mode to 'shared' if necessary.
> 
> I don't intend to introduce memory backing UI or numa UI. That means I
> need to modify the access mode attribute, which belongs to memorybacking
> or numa, in the filesystem code. This perhaps does not look good; any
> comments are appreciated.
>

Thanks for the patches. Regarding virtio-fs I've recorded my thoughts in
this issue: https://github.com/virt-manager/virt-manager/issues/127

Basically I don't want to add this to virt-manager until we can make it
closer to 'just work' without pitfalls. IMO that means adjusting libvirt
to report via domcapabilities when it is safe and supported to
unconditionally specify shared memory, without hugepages or numa config.
Then we set that by default for new VMs, and _maybe_ do something like
what your patches do (set it automatically when user requests virtiofs
via addhw).

Until that's done, it's a pain in the ass to try and figure out, outside
of libvirt, whether the domain XML has suitable setup to make virtio-fs
work, and what is the simplest memory XML adjustment to make virtiofs
work. We basically have to reimplement the libvirt
qemuValidateDomainDefVhostUserRequireSharedMemory function from here
https://gitlab.com/libvirt/libvirt/-/blob/master/src/qemu/qemu_validate.c#L1427

Your code attempts to implement the numa_nodes check, but it doesn't
account for the defaultRAMID bit.
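
For illustration, the reimplementation on our side ends up looking
roughly like this (sketch only; virtinst property names approximate):

def vm_shared_memory_ok(guest):
    # Mirror libvirt's vhost-user check: all guest RAM must be
    # shared with the host for virtiofsd to be able to see it.
    if guest.memorybacking.access_mode == "shared":
        return True
    cells = guest.cpu.cells
    # Otherwise every numa cell needs memAccess='shared'. No numa
    # config at all is only acceptable on libvirt >= 7.1.0, which
    # is exactly the defaultRAMid problem mentioned above.
    return bool(cells) and all(
        cell.memAccess == "shared" for cell in cells)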

The specific <enum> would be:

<domainCapabilities>
  <memoryBacking supported='yes'>
    <enum name='sourceType'>
      <value>file</value>
      <value>memfd</value>
      ...

The 7.1.0 check, when access mode=shared can be used without numa or
hugepages, we probably need some arbitrary boolean to report. It could be
a new element under <memoryBacking>, or maybe something under <features>.
There isn't a clear precedent for exposing something like this in the XML.
CCing mprivozn, any suggestions?

Lin if you get those into libvirt I will be happy to help you land
virtio-fs support in virt-manager, writing code coverage tests etc.

Thanks,
Cole



Re: [virt-manager][PATCH v2] virtinst: Support missing NVDIMM knobs

2021-06-22 Thread Cole Robinson
On 6/21/21 12:31 PM, Michal Privoznik wrote:
> There are two domain XML knobs specific to NVDIMMs that
> virt-install doesn't allow to set: <uuid/> and <readonly/>.
> Implement them.
> 
> Closes: https://github.com/virt-manager/virt-manager/issues/267
> Signed-off-by: Michal Privoznik 
> ---
> 
> v2 of:
> 
> https://listman.redhat.com/archives/virt-tools-list/2021-June/msg00023.html
> 
> diff to v1:
> - Added discard for DIMMs
> - Added target.readonly for NVDIMMs
> 
>  tests/data/cli/compare/virt-install-singleton-config-3.xml | 5 -
>  tests/test_cli.py  | 4 ++--
>  virtinst/cli.py| 5 +
>  virtinst/devices/memory.py | 4 
>  4 files changed, 15 insertions(+), 3 deletions(-)
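
For the archives, the new knobs end up on the --memdev nvdimm command
line, roughly like this (hypothetical values, sketch only):

  virt-install ... \
    --memdev nvdimm,source.path=/dev/pmem0,target.size=512,target.node=0,uuid=$UUID,target.readonly=on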

Thanks, pushed now

- Cole



Re: [virt-manager PATCH] spec: use %autosetup

2021-06-22 Thread Cole Robinson
On 6/21/21 5:53 PM, Jonathon Jongsma wrote:
> Make sure potential patches get applied by using %autosetup instead of
> plain %setup
> 
> Signed-off-by: Jonathon Jongsma 
> ---
>  virt-manager.spec | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/virt-manager.spec b/virt-manager.spec
> index 7ad70044..08e4d443 100644
> --- a/virt-manager.spec
> +++ b/virt-manager.spec
> @@ -99,7 +99,7 @@ machine).
>  
>  
>  %prep
> -%setup -q
> +%autosetup -p1
>  
>  
>  %build
> 

Thanks, pushed now

- Cole



Re: [PATCH] cloner: mark transient_shareBacking disk as shareable

2021-06-22 Thread Cole Robinson
On 6/17/21 4:40 AM, Liu Yiding wrote:
> libvirt 7.4.0 added support for sharing the base image of <transient/> disks,
> multiple VMs can share the same image
> https://libvirt.org/news.html#v7-4-0-2021-06-01
> 
> Signed-off-by: Liu Yiding 
> ---
>  virtinst/cloner.py | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/virtinst/cloner.py b/virtinst/cloner.py
> index 622a2921..1fdffa2a 100644
> --- a/virtinst/cloner.py
> +++ b/virtinst/cloner.py
> @@ -195,7 +195,7 @@ def _get_shareable_msg(disk):
>  return _("No storage to clone.")
>  if disk.read_only:
>  return _("Read Only")
> -if disk.shareable:
> +if disk.shareable or disk.transient_shareBacking:
>  return _("Marked as shareable")
>  
>  
> 
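
For reference, the disk XML shape this now recognizes looks roughly like
(sketch):

  <disk type='file' device='disk'>
    ...
    <transient shareBacking='yes'/>
  </disk>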

Thanks, pushed now

- Cole



Re: ERROR internal error: Unexpected enum value 0 for virDomainDeviceAddressType

2021-06-08 Thread Cole Robinson
On 6/8/21 4:43 AM, Mario Marietto wrote:
> root@zi-desktop:~/Desktop/zi/Work/I9/qemu-rpi-kernel# virsh --version
> 7.4.0
> 
> virt-install \
>     --debug \
>     --name pi \
>     --machine raspi3 \
>     --cpu arm1176 \
>     --memory 1024 \
>     --import \
>     --disk
> /var/www/html/2019-09-26-raspbian-buster-lite.img,format=raw,bus=virtio \
>     --network user,model=virtio \
>     --video vga \
>     --graphics spice \
>     --rng device=/dev/urandom,model=virtio \
>     --boot
> 'dtb=bcm2710-rpi-3-b-plus.dtb,kernel=kernel8.img,kernel_args=root=/dev/vda2
> rootwait panic=1 dwc_otg.fiq_fsm_enable=0' \
>     --events on_reboot=destroy
> 
> [mar, 08 giu 2021 10:42:22 virt-install 26236] DEBUG (cli:265) Launched
> with command line: /usr/share/virt-manager/virt-install --debug --name
> pi --machine raspi3 --cpu arm1176 --memory 1024 --import --disk
> /var/www/html/2019-09-26-raspbian-buster-lite.img,format=raw,bus=virtio
> --network user,model=virtio --video vga --graphics spice --rng
> device=/dev/urandom,model=virtio --boot
> dtb=bcm2710-rpi-3-b-plus.dtb,kernel=kernel8.img,kernel_args=root=/dev/vda2
> rootwait panic=1 dwc_otg.fiq_fsm_enable=0 --events on_reboot=destroy
> [mar, 08 giu 2021 10:42:22 virt-install 26236] DEBUG (cli:279)
> Requesting libvirt URI default
> [mar, 08 giu 2021 10:42:22 virt-install 26236] DEBUG (cli:282) Received
> libvirt URI qemu:///system
> [mar, 08 giu 2021 10:42:22 virt-install 26236] DEBUG (virt-install:358)
> Requesting virt method 'default', hv type 'default'.
> [mar, 08 giu 2021 10:42:24 virt-install 26236] DEBUG (virt-install:583)
> Received virt method 'kvm'
> [mar, 08 giu 2021 10:42:24 virt-install 26236] DEBUG (virt-install:584)
> Hypervisor name is 'hvm'
> [mar, 08 giu 2021 10:42:24 virt-install 26236] DEBUG (virt-install:270)
> Distilled --network options: ['user,model=virtio']
> [mar, 08 giu 2021 10:42:24 virt-install 26236] DEBUG (virt-install:183)
> Distilled --disk options:
> ['/var/www/html/2019-09-26-raspbian-buster-lite.img,format=raw,bus=virtio']
> [mar, 08 giu 2021 10:42:25 virt-install 26236] DEBUG (installer:222)
> distro detection not available for this installer.
> [mar, 08 giu 2021 10:42:25 virt-install 26236] DEBUG (guest:251) Setting
> Guest.os_variant to 'None'
> [mar, 08 giu 2021 10:42:26 virt-install 26236] WARNING
> (virt-install:545) No operating system detected, VM performance may
> suffer. Specify an OS with --os-variant for optimal results.
> [mar, 08 giu 2021 10:42:27 virt-install 26236] DEBUG (virt-install:697)
> Guest.has_install_phase: False
> 
> Starting install...
> [mar, 08 giu 2021 10:42:27 virt-install 26236] DEBUG (guest:1169) Local
> connection, disabling spice image compression.
> [mar, 08 giu 2021 10:42:27 virt-install 26236] DEBUG (guest:384)
> Generated install XML: None required
> [mar, 08 giu 2021 10:42:27 virt-install 26236] DEBUG (guest:385)
> Generated boot XML:
> <domain type="kvm">
>   <name>pi</name>
>   <uuid>c0e51f74-e4e0-4eb0-a9b7-2a6743da9b57</uuid>
>   <memory>1048576</memory>
>   <currentMemory>1048576</currentMemory>
>   <vcpu>1</vcpu>
>   <os>
>     <type arch="aarch64" machine="raspi3">hvm</type>
>     <kernel>/home/zi/Desktop/Work/I9/qemu-rpi-kernel/kernel8.img</kernel>
>     <cmdline>root=/dev/vda2 rootwait panic=1 dwc_otg.fiq_fsm_enable=0</cmdline>
>     <dtb>/home/zi/Desktop/Work/I9/qemu-rpi-kernel/bcm2710-rpi-3-b-plus.dtb</dtb>
>   </os>
>   <features>
>     <acpi/>
>   </features>
>   <cpu mode="custom" match="exact">
>     <model>arm1176</model>
>   </cpu>
>   <on_reboot>destroy</on_reboot>
>   <devices>
>     <emulator>/usr/local/bin/qemu-system-aarch64</emulator>
>     <disk type="file" device="disk">
>       ...
>     </disk>
>     <interface type="user">
>       ...
>     </interface>
>     <graphics type="spice">
>       ...
>     </graphics>
>     <video>
>       ...
>     </video>
>     <rng model="virtio">
>       <backend model="random">/dev/urandom</backend>
>     </rng>
>   </devices>
> </domain>
> 
> [mar, 08 giu 2021 10:42:29 virt-install 26236] DEBUG (cli:317)   File
> "/usr/share/virt-manager/virt-install", line 1008, in 
>     sys.exit(main())
>   File "/usr/share/virt-manager/virt-install", line 1002, in main
>     start_install(guest, options)
>   File "/usr/share/virt-manager/virt-install", line 728, in start_install
>     fail(e, do_exit=False)
>   File "/usr/share/virt-manager/virtinst/cli.py", line 317, in fail
>     logging.debug("".join(traceback.format_stack()))
> 
> [mar, 08 giu 2021 10:42:29 virt-install 26236] ERROR (cli:318)
> unsupported configuration: ACPI requires UEFI on this architecture
> [mar, 08 giu 2021 10:42:29 virt-install 26236] DEBUG (cli:320)
> Traceback (most recent call last):
>   File "/usr/share/virt-manager/virt-install", line 707, in start_install
>     transient=options.transient)
>   File "/usr/share/virt-manager/virtinst/guest.py", line 498, in
> start_install
>     doboot, transient)
>   File "/usr/share/virt-manager/virtinst/guest.py", line 434, in
> _create_guest
>     domain = self.conn.createXML(install_xml or final_xml, 0)
>   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 3603, in
> createXML
>     if ret is None:raise libvirtError('virDomainCreateXML() failed',
> conn=self)
> libvirtError: unsupported configuration: ACPI requires UEFI on this
> architecture
> [mar, 08 giu 2021 10:42:29 virt-install 26236] DEBUG (cli:331) Domain
> installation does not appear to have been successful.
> If it was, you can 

Re: [PATCH virt-manager v3] virt-manager: enable MDEV support

2021-06-08 Thread Cole Robinson
On 6/7/21 5:34 AM, Shalini Chellathurai Saroja wrote:
> Enable virt-manager GUI to support add, edit, remove, hot-plug and
> hot-unplug of mediated devices (like DASDs, APQNs and PCIs) in virtual
> server.
> 
> It is not possible to edit MDEV when a virtual server is in
> running state, as this is not supported by libvirt.
> 
> Signed-off-by: Shalini Chellathurai Saroja 

Thanks, pushed now

- Cole



Re: ERROR internal error: Unexpected enum value 0 for virDomainDeviceAddressType

2021-06-07 Thread Cole Robinson
On 6/4/21 5:40 AM, Mario Marietto wrote:
> Hello.
> 
> I want to emulate the raspberry pi 3 on my Jetson nano (aarch64) using
> virt-install. I'm learning how to do that by reading heavily here :
> 
> https://github.com/dhruvvyas90/qemu-rpi-kernel
> 
> 
> I tried to do something like this :
> 
> Code:
> 
virt-install \
> --name pi \
> --machine raspi3 \
> --cpu arm1176 \
> --memory 1024 \
> --import \ 
> --disk 
> /root/Desktop/zi/Work/Android/Raspy/Debian/2019-09-26-raspbian-buster-lite.img,format=raw,bus=virtio
>  \
> --network user,model=virtio \
> --video vga \
> --graphics spice \
> --rng device=/dev/urandom,model=virtio \
> --boot 
> 'uefi=RPI_EFI.fd,dtb=bcm2710-rpi-3-b-plus.dtb,kernel=kernel8.img,kernel_args=root=/dev/vda2
>  rootwait panic=1 dwc_otg.fiq_fsm_enable=0'\
> --events on_reboot=destroy
> 
> 
> But I get this error :
> 
> *ERROR internal error: Unexpected enum value 0 for
> virDomainDeviceAddressType *
> 
> I've got the UEFI BIOS file from here :
> 
> 
> https://github.com/andreiw/Raspberry...ASE/RPI_EFI.fd
> 
> 
> 
> I'm not sure if it isn't the right efi file to use in this specific
> configuration or if it is,but I'm passing the parameter badly. What do
> you suggest me to do ?

That error is coming from libvirt so probably not related to EFI in any
way. What libvirt version are you using? Those docs suggest libvirt
5.0.0 or later. Pastebin full virt-install `--debug` output might help

- Cole



Re: [PATCH virt-manager v2 0/4] Enable MDEV support

2021-06-02 Thread Cole Robinson
On 6/2/21 9:34 AM, Shalini Chellathurai Saroja wrote:
> 
> On 6/1/21 11:13 PM, Cole Robinson wrote:
>> On 5/31/21 3:54 PM, Shalini Chellathurai Saroja wrote:
>>> Enable support for mediated devices in virt-xml tool and virt-manager
>>> GUI tool.
>>>
>> Thanks! I pushed 1-3 with a small tweak to patch #1 to keep code
>> coverage at 100%.
> 
> Good morning Cole,
> 
> Thank you for reviewing and pushing the virt-xml patch.
> 
>> In order to get the parent naming stuff in addhardware.py to trigger I
>> needed to change things a bit:
>>
>> diff --git a/virtManager/addhardware.py b/virtManager/addhardware.py
>> index 2df84bf5..13b899c3 100644
>> --- a/virtManager/addhardware.py
>> +++ b/virtManager/addhardware.py
>> @@ -783,9 +783,10 @@ class vmmAddHardware(vmmGObjectUI):
>>                  prettyname += " (%s)" % subdev.pretty_name()
>>
>>          if devtype == "mdev":
>> -            for subdev in self.conn.filter_nodedevs("mdev"):
>> -                if dev.xmlobj.name == subdev.xmlobj.parent:
>> -                    prettyname += " (%s)" % subdev.pretty_name()
>> +            for parentdev in self.conn.list_nodedevs():
>> +                if dev.xmlobj.parent == parentdev.xmlobj.name:
>> +                    prettyname = "%s %s" % (
>> +                        parentdev.pretty_name(), prettyname)
>>
>>          model.append([dev.xmlobj, prettyname])
>>
>>
>> Was it working in your code before that, like on a real system with mdev
>> devices? Or was the Add Hardware->MDEV only listing devices with name
>> `mdev_$UUID` ?
> MDEV devices are listed as mdev_$UUID in my system with MDEV devices.
> I have provided a screenshot with my code below.
> 
>> The point of this code is to give more descriptive names to what we list
>> in the UI, if we can help it. The names for 'mdev' devices are not
>> descriptive. 
> Yes, your code provides a more descriptive name and helps to
> identify the device type (like CCW/PCI/AP device) . We can use it.
> I have provided a screenshot with your code below.
> 
>> Their type id=XXX values are a bit better:
>> vfio_ap-passthrough, nvidia-11, etc. Maybe we can just use that? The
>> parent devices have a bit more info under  but
>> it's not clear if that's always available. Maybe we can get away with
>> just using the  value in the mdev pretty_name() definition
> More than one MDEV device can have the same type id as shown below.
> Maybe we can use the type id in combination with mdev_$uuid.
> I feel like your suggested code is better, as we could identify the parent
> device and MDEV device type with this code.
> 
> # virsh nodedev-dumpxml mdev_8e782fea_e5f4_45fa_a0f9_024cf66e5009
> <device>
>   <name>mdev_8e782fea_e5f4_45fa_a0f9_024cf66e5009</name>
>   <path>/sys/devices/css0/0.0.0024/8e782fea-e5f4-45fa-a0f9-024cf66e5009</path>
>   <parent>css_0_0_0024</parent>
>   <driver>
>     <name>vfio_mdev</name>
>   </driver>
>   <capability type='mdev'>
>     <type id='vfio_ccw-io'/>
>     <uuid>8e782fea-e5f4-45fa-a0f9-024cf66e5009</uuid>
>     <iommuGroup number='...'/>
>   </capability>
> </device>
> 
> # virsh nodedev-dumpxml mdev_92cc4b96_95d8_47ce_923c_c688d061bc41
> <device>
>   <name>mdev_92cc4b96_95d8_47ce_923c_c688d061bc41</name>
>   <path>/sys/devices/css0/0.0.0023/92cc4b96-95d8-47ce-923c-c688d061bc41</path>
>   <parent>css_0_0_0023</parent>
>   <driver>
>     <name>vfio_mdev</name>
>   </driver>
>   <capability type='mdev'>
>     <type id='vfio_ccw-io'/>
>     <uuid>92cc4b96-95d8-47ce-923c-c688d061bc41</uuid>
>     <iommuGroup number='...'/>
>   </capability>
> </device>
> 
> Similarly, the <type> info in the MDEV parent device
> can also be the same for more than one MDEV device.
> 
> # virsh nodedev-dumpxml css_0_0_0023
> <device>
>   <name>css_0_0_0023</name>
>   <path>/sys/devices/css0/0.0.0023</path>
>   <parent>computer</parent>
>   <driver>
>     <name>vfio_ccw</name>
>   </driver>
>   <capability type='css'>
>     <cssid>0x0</cssid>
>     <ssid>0x0</ssid>
>     <devno>0x0023</devno>
>     <capability type='mdev_types'>
>       <type id='vfio_ccw-io'>
>         <name>I/O subchannel (Non-QDIO)</name>
>         <deviceAPI>vfio-ccw</deviceAPI>
>         <availableInstances>0</availableInstances>
>       </type>
>     </capability>
>   </capability>
> </device>
> 
> # virsh nodedev-dumpxml css_0_0_0024
> <device>
>   <name>css_0_0_0024</name>
>   <path>/sys/devices/css0/0.0.0024</path>
>   <parent>computer</parent>
>   <driver>
>     <name>vfio_ccw</name>
>   </driver>
>   <capability type='css'>
>     <cssid>0x0</cssid>
>     <ssid>0x0</ssid>
>     <devno>0x0024</devno>
>     <capability type='mdev_types'>
>       <type id='vfio_ccw-io'>
>         <name>I/O subchannel (Non-QDIO)</name>
>         <deviceAPI>vfio-ccw</deviceAPI>
>         <availableInstances>0</availableInstances>
>       </type>
>     </capability>
>   </capability>
> </device>
> 
> Are you ok with using your suggested code (parent name mdev_$uuid)
> for prettyname? Please let me know your feedback.

Yes if you think that's looks good on a real machine, then that works
for me too.

Thanks,
Cole



Re: [PATCH virt-manager v2 0/4] Enable MDEV support

2021-06-01 Thread Cole Robinson
On 5/31/21 3:54 PM, Shalini Chellathurai Saroja wrote:
> Enable support for mediated devices in virt-xml tool and virt-manager
> GUI tool.
> 

Thanks! I pushed 1-3 with a small tweak to patch #1 to keep code
coverage at 100%.

In order to get the parent naming stuff in addhardware.py to trigger I
needed to change things a bit:

diff --git a/virtManager/addhardware.py b/virtManager/addhardware.py
index 2df84bf5..13b899c3 100644
--- a/virtManager/addhardware.py
+++ b/virtManager/addhardware.py
@@ -783,9 +783,10 @@ class vmmAddHardware(vmmGObjectUI):
                 prettyname += " (%s)" % subdev.pretty_name()
 
         if devtype == "mdev":
-            for subdev in self.conn.filter_nodedevs("mdev"):
-                if dev.xmlobj.name == subdev.xmlobj.parent:
-                    prettyname += " (%s)" % subdev.pretty_name()
+            for parentdev in self.conn.list_nodedevs():
+                if dev.xmlobj.parent == parentdev.xmlobj.name:
+                    prettyname = "%s %s" % (
+                        parentdev.pretty_name(), prettyname)
 
         model.append([dev.xmlobj, prettyname])


Was it working in your code before that, like on a real system with mdev
devices? Or was the Add Hardware->MDEV only listing devices with name
`mdev_$UUID` ?

The point of this code is to give more descriptive names to what we list
in the UI, if we can help it. The names for 'mdev' devices are not
descriptive. Their type id=XXX values are a bit better:
vfio_ap-passthrough, nvidia-11, etc. Maybe we can just use that? The
parent devices have a bit more info under <mdev_types> but
it's not clear if that's always available. Maybe we can get away with
just using the <type id> value in the mdev pretty_name() definition
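
That would be something shaped like this (a sketch, not actual virtinst
code; type_id here stands in for the ./capability/type/@id value):

```
def mdev_pretty_name(name, type_id):
    # name:    e.g. "mdev_8e782fea_e5f4_45fa_a0f9_024cf66e5009"
    # type_id: e.g. "vfio_ap-passthrough" or "nvidia-11"
    ret = name
    if type_id:
        ret += " (%s)" % type_id
    return ret
```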

Thanks,
Cole



Re: Fwd: Re: [PATCH virt-install v1 3/3] virtxml: prevent defining same hostdev again

2021-05-28 Thread Cole Robinson
On 5/28/21 8:06 AM, Shalini Chellathurai Saroja wrote:
> 
> On 5/19/21 12:49 AM, Cole Robinson wrote:
>> On 4/14/21 11:18 AM, Shalini Chellathurai Saroja wrote:
>>> Currently, it is possible to add same device multiple times, when the
>>> guest domain is in shut-off state. This patch prevents this unexpected
>>> behavior for all hostdev devices, including mdev devices.
>>>
>> If this is an invalid config, these types of validation checks should
>> live in libvirt so all clients benefit from them, and libvirt is the one
>> source of truth for what's supported or not.
> 
> Hello Cole,
> 
> Thank you for the review.
> 
> This validation check is available in libvirt, for this to work, the
> flag "VIR_DOMAIN_AFFECT_CONFIG" has to be set, when the virtual server
> is not in running state.
> 

It's kind of complicated.

AFFECT_CONFIG for an offline VM (or the offline XML of a running VM)
should in theory be identical to the process we currently use: dumpxml,
modify XML, redefine. Anything else is a bug IMO. My reasoning is that
if AFFECT_CONFIG does anything special besides hitting the XML define
code path, then there's logic locked into the AFFECT_CONFIG code path
which isn't available to the many other ways of editing the XML.
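
To illustrate with plain libvirt-python (a minimal sketch assuming a
qemu:///system URI and a VM named 'demo'; this is not virt-xml code):

```
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("demo")
devxml = "<sound model='ich9'/>"

# Path 1: coldplug into the persistent config via the API flag
dom.attachDeviceFlags(devxml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

# Path 2: dumpxml, modify, redefine -- what virt-xml does today.
# Per the argument above, both paths should leave the persistent
# XML in the same state.
xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE)
conn.defineXML(xml.replace("</devices>", devxml + "\n</devices>"))
```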

So for example, if there is a validation check on coldplugging mdev
devices in libvirt, and it's only accessible through the attachDevice
API, that's a libvirt bug IMO. If the check is worth doing at coldplug
time, it's also worth doing at XML define time, and should be part of
the generic virDomainDefValidate infrastructure. If it's not worth doing
at XML validate time for some reasons, there should be no argument for
why it makes sense at coldplug time.

That's my theory anyways. I think other libvirt devs will agree but I'm
not sure.

AFFECT_CONFIG in general is not a good fit for virt-install, for the
reasons you show below: we can't rely on it being implemented
consistently. It's not implemented in the test driver, and it's not
implemented for all devices even in the qemu driver. So we would have to
either punt the option to the user by explicitly exposing the flag, or
try to keep track internally which cases it works for and what cases it
doesn't, neither of which ideas I like.

So I think if for this case, libvirt already has the validation you
want, but it's only in the attachDevice code path, that's an opportunity
to improve libvirt. Move that check to
domain_validate.c:virDomainHostdevDefValidate, or possibly to
virDomainDefValidateInternal , do something like what
virDomainDefDuplicateDiskInfoValidate does

Hopefully that makes sense

Thanks,
Cole

> With the below changes, libvirt restricts addition of same device
> multiple times.
> 
> diff --git a/virtinst/virtxml.py b/virtinst/virtxml.py
> index 0c8da37e..6de66488 100644
> --- a/virtinst/virtxml.py
> +++ b/virtinst/virtxml.py
> @@ -222,12 +222,24 @@ def setup_device(dev):
>      dev.build_storage(cli.get_meter())
>  
>  
> -def define_changes(conn, inactive_xmlobj, devs, action, confirm):
> +def define_changes(domain, conn, inactive_xmlobj, devs, action, confirm):
>      if confirm:
>          if not prompt_yes_or_no(
>                  _("Define '%s' with the changed XML?") % inactive_xmlobj.name):
>              return False
>  
> +    if action == "coldplug":
> +        msg_fail = _("Error attempting device coldplug: %(error)s")
> +
> +        for dev in devs:
> +            xml = dev.get_xml()
> +            setup_device(dev)
> +
> +            try:
> +                domain.attachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
> +            except libvirt.libvirtError as e:
> +                fail(msg_fail % {"error": e})
> +
>      if action == "hotplug":
>          for dev in devs:
>              setup_device(dev)
> @@ -324,7 +336,10 @@ def prepare_changes(xmlobj, options, parserclass):
>  
>      elif options.add_device:
>          devs = action_add_device(xmlobj, options, parserclass)
> -        action = "hotplug"
> +        if options.update:
> +            action = "hotplug"
> +        else:
> +            action = "coldplug"
>  
>      elif options.remove_device:
>          devs = action_remove_device(xmlobj, options, parserclass)
> @@ -524,7 +539,7 @@ def main(conn=None):
>                               action, options.confirm)
>      return 0
>  
> -    dom = define_changes(conn, inactive_xmlobj,
> +    dom = define_changes(domain, conn, inactive_xmlobj,
>                           devs, action, options.confirm)
>      if not dom:
>          # --confirm user said 'no'
> 
> However, some tests fail as shown below because the flag is not
> su

Re: [PATCH] cli: --feature: Support kvm.poll-control.state

2021-05-28 Thread Cole Robinson
On 5/27/21 12:01 AM, Han Han wrote:
> Implement kvm.poll-control.state to `virt-install --feature`. It requires
> libvirt >= v6.10.0.
> 
> Signed-off-by: Han Han 
> ---
>  tests/data/cli/compare/virt-install-singleton-config-3.xml | 1 +
>  tests/test_cli.py  | 2 +-
>  virtinst/cli.py| 1 +
>  virtinst/domain/features.py| 1 +
>  4 files changed, 4 insertions(+), 1 deletion(-)

Thanks! pushed now

- Cole



Re: [PATCH 0/2] Support transient disk in virtinst

2021-05-26 Thread Cole Robinson
On 5/25/21 9:22 PM, Han Han wrote:
> 
> Han Han (2):
>   virtinst: Implement support for transient disks
>   virtinst: Support transient.shareBacking option
> 
>  tests/data/cli/compare/virt-install-many-devices.xml | 2 ++
>  tests/test_cli.py| 4 ++--
>  virtinst/cli.py  | 2 ++
>  virtinst/devices/disk.py | 2 ++
>  4 files changed, 8 insertions(+), 2 deletions(-)
> 

Thanks, pushed now

- Cole



Re: [PATCH v2] virtinst: Support rotation_rate attrib in target

2021-05-23 Thread Cole Robinson
On 5/20/21 2:48 AM, Han Han wrote:
> Support the rotation_rate attrib, introduced in libvirt v7.3.
> 
> Signed-off-by: Han Han 
> ---
> Diff from v1: Add tests for the new options in virtinst
> ---
>  tests/data/cli/compare/virt-install-many-devices.xml | 4 ++--
>  tests/test_cli.py| 4 ++--
>  virtinst/cli.py  | 2 ++
>  virtinst/devices/disk.py | 1 +
>  4 files changed, 7 insertions(+), 4 deletions(-)

Thanks, pushed now

- Cole



Re: [RFC virt-manager] virt-manager: enable mdev support

2021-05-23 Thread Cole Robinson
On 5/5/21 11:42 AM, Shalini Chellathurai Saroja wrote:
> Enable virt-manager GUI to support add, edit, remove, hot-plug and
> hot-unplug of mediated devices (like DASDs, APQNs and PCIs) in virtual
> server.
> 
> It is not possible to edit mdev when a virtual server is in
> running state, as this is not supported by libvirt.
> 
> The patch-series "virt-install: Add mediated device support" is the
> base patch for this patch.
> 
> Signed-off-by: Shalini Chellathurai Saroja 
> ---
>  tests/uitests/test_addhardware.py |  6 ++
>  virtManager/addhardware.py| 18 +-
>  2 files changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/uitests/test_addhardware.py b/tests/uitests/test_addhardware.py
> index ce3da57c..56acc2fa 100644
> --- a/tests/uitests/test_addhardware.py
> +++ b/tests/uitests/test_addhardware.py
> @@ -459,6 +459,12 @@ def testAddHosts(app):
>      app.click_alert_button("device is already in use by", "Yes")
>      lib.utils.check(lambda: details.active)
>  
> +    # Add MDEV device
> +    _open_addhw(app, details)
> +    tab = _select_hw(addhw, "MDEV Host Device", "host-tab")
> +    tab.find_fuzzy("mdev_8e37ee90_2b51_45e3_9b25_bf8283c03110",
> +                   "table cell").click()
> +    _finish(addhw, check=details)
>  
>  
>  def testAddChars(app):
> diff --git a/virtManager/addhardware.py b/virtManager/addhardware.py
> index cbf19f58..2df84bf5 100644
> --- a/virtManager/addhardware.py
> +++ b/virtManager/addhardware.py
> @@ -249,6 +249,10 @@ class vmmAddHardware(vmmGObjectUI):
>          add_hw_option(_("PCI Host Device"), "system-run", PAGE_HOSTDEV,
>                        nodedev_enabled, nodedev_errstr, "pci")
>  
> +        add_hw_option(_("MDEV Host Device"), "system-run", PAGE_HOSTDEV,
> +                      self.conn.support.conn_nodedev(),
> +                      _("Connection does not support host device enumeration"),
> +                      "mdev")
>          add_hw_option(_("Video"), "video-display", PAGE_VIDEO, True,
>                        _("Libvirt version does not support video devices."))
>          add_hw_option(_("Watchdog"), "device_pci", PAGE_WATCHDOG,
> @@ -656,6 +660,9 @@ class vmmAddHardware(vmmGObjectUI):
>                   (dehex(hostdev.domain), dehex(hostdev.bus),
>                    dehex(hostdev.slot), dehex(hostdev.function)))
>  
> +        elif hostdev.uuid:
> +            label += " %s" % (str(hostdev.uuid))
> +
>          return label
>  
>  
> @@ -775,6 +782,11 @@ class vmmAddHardware(vmmGObjectUI):
>              if dev.xmlobj.name == subdev.xmlobj.parent:
>                  prettyname += " (%s)" % subdev.pretty_name()
>  
> +        if devtype == "mdev":
> +            for subdev in self.conn.filter_nodedevs("mdev"):
> +                if dev.xmlobj.name == subdev.xmlobj.parent:
> +                    prettyname += " (%s)" % subdev.pretty_name()
> +
Patch looks pretty good. This bit here isn't covered by the UI test
suite though because none of the nodedev parent devices were added to
the testdriver.xml with your previous patch. Can you send a v2 adding
the referenced parent devices too?

Thanks,
Cole



Re: [PATCH virt-install] virtinst: Support rotation_rate attrib in target

2021-05-19 Thread Cole Robinson
On 5/18/21 2:36 AM, Han Han wrote:
> Support the rotation_rate attrib, introduced in libvirt v7.3.
> 
> Signed-off-by: Han Han 

Thanks! But you need to extend the tests/test_cli.py --disk examples as
well, otherwise the test suite will fail (but I didn't confirm).
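
Something like this in the existing --disk section would do it
(illustrative only; the exact helper and values may differ):

```
c.add_valid("--disk size=1,target.rotation_rate=5000")
```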

Thanks,
Cole

> ---
>  virtinst/cli.py  | 2 ++
>  virtinst/devices/disk.py | 1 +
>  2 files changed, 3 insertions(+)
> 
> diff --git a/virtinst/cli.py b/virtinst/cli.py
> index 4ce3abf9..e4676bc9 100644
> --- a/virtinst/cli.py
> +++ b/virtinst/cli.py
> @@ -3252,6 +3252,7 @@ class ParserDisk(VirtCLIParser):
>  "snapshot": "snapshot_policy",
>  "target.dev": "target",
>  "target.removable": "removable",
> +"target.rotation_rate": "rotation_rate",
>  
>  "driver.discard": "discard",
>  "driver.detect_zeroes": "detect_zeroes",
> @@ -3443,6 +3444,7 @@ class ParserDisk(VirtCLIParser):
>  cls.add_arg("target.bus", "bus")
>  cls.add_arg("target.removable", "removable", is_onoff=True)
>  cls.add_arg("target.dev", "target")
> +cls.add_arg("target.rotation_rate", "rotation_rate")
>  
>  cls.add_arg("driver.cache", "driver_cache")
>  cls.add_arg("driver.discard", "driver_discard")
> diff --git a/virtinst/devices/disk.py b/virtinst/devices/disk.py
> index a8971581..77f10340 100644
> --- a/virtinst/devices/disk.py
> +++ b/virtinst/devices/disk.py
> @@ -471,6 +471,7 @@ class DeviceDisk(Device):
>  bus = XMLProperty("./target/@bus")
>  target = XMLProperty("./target/@dev")
>  removable = XMLProperty("./target/@removable", is_onoff=True)
> +rotation_rate = XMLProperty("./target/@rotation_rate", is_int=True)
>  
>  read_only = XMLProperty("./readonly", is_bool=True)
>  shareable = XMLProperty("./shareable", is_bool=True)
> 


- Cole



Re: [PATCH virt-install v1 3/3] virtxml: prevent defining same hostdev again

2021-05-18 Thread Cole Robinson
On 4/14/21 11:18 AM, Shalini Chellathurai Saroja wrote:
> Currently, it is possible to add same device multiple times, when the
> guest domain is in shut-off state. This patch prevents this unexpected
> behavior for all hostdev devices, including mdev devices.
> 

If this is an invalid config, these types of validation checks should
live in libvirt so all clients benefit from them, and libvirt is the one
source of truth for what's supported or not.

We make some exceptions in virt-install/virt-xml to validate common
configuration errors but I don't think this is one of them.

Thanks,
Cole



Re: [PATCH virt-install v1 2/3] virt-xml: add support for mediated devices

2021-05-18 Thread Cole Robinson
On 4/14/21 11:18 AM, Shalini Chellathurai Saroja wrote:
> Provide support to add/remove mdev in a guest domain, which is in
> shut-off or running state (hotplug/unplug). Also support update of
> already existing mdev device, when the guest domain is in shut-off
> state. Please note that libvirt does not support update of mdev
> device, when the guest domain is in running state.
> 
> Signed-off-by: Shalini Chellathurai Saroja 

Patch looks pretty good, but I have some comments that I think will
require a v2. See below

> diff --git a/tests/utils.py b/tests/utils.py
> index 802d6590..8a8a243a 100644
> --- a/tests/utils.py
> +++ b/tests/utils.py
> @@ -232,7 +232,7 @@ def diff_compare(actual_out, filename=None, expect_out=None):
>      open(filename, "w").write(actual_out)
>      expect_out = open(filename).read()
>  
> -    diff = xmlutil.diff(expect_out, actual_out,
> +    diff = xmlutil.diff(expect_out.rstrip(), actual_out.rstrip(),
>                          filename or '', "Generated output")

Is this necessary? I can't tell from test output what the point is. If
so I'd rather have it done as a separate patch so it's easier to see how
it affects the test suite output in isolation.

>      if diff:
>          raise AssertionError("Conversion outputs did not match.\n%s" % diff)
> diff --git a/virtinst/nodedev.py b/virtinst/nodedev.py
> index 97841794..873002ec 100644
> --- a/virtinst/nodedev.py
> +++ b/virtinst/nodedev.py
> @@ -5,6 +5,7 @@
>  # See the COPYING file in the top-level directory.
>  
>  import os
> +import uuid
>  
>  from .logger import log
>  from .xmlbuilder import XMLBuilder, XMLProperty, XMLChildProperty
> @@ -101,6 +102,12 @@ class NodeDevice(XMLBuilder):
>                  _compare_int(self.bus, hostdev.bus) and
>                  _compare_int(self.device, hostdev.device))
>  
> +        if self.device_type == "mdev":
> +            if hostdev.type != "mdev":
> +                return False
> +
> +            return uuid.UUID(self.name[5:].replace('_', '-')) == uuid.UUID(hostdev.uuid)
> +

These UUID conversions should be wrapped in try except. See how
_compare_int handles int conversions as an example. We need to be very
conservative here because of the way this lookup path is used in places
like virt-manager IIRC

Also the self.name[5:]... bit is repeated in several places. It would
help to add a get_mdev_uuid() function to NodeDevice IMO that
centralizes this logic
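
Something shaped like this is what I have in mind (a sketch only; the
name and exact error handling are up to you):

```
import uuid

def get_mdev_uuid(self):
    """Return the UUID string encoded in a nodedev name like
    mdev_8e37ee90_2b51_45e3_9b25_bf8283c03110, or None on bad input."""
    try:
        return str(uuid.UUID(self.name[5:].replace("_", "-")))
    except (ValueError, AttributeError):
        return None
```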

>          return False
>  
>  
> @@ -207,6 +214,10 @@ def _AddressStringToHostdev(conn, addrstr):
>              hostdev.type = "usb"
>              hostdev.bus = bus
>              hostdev.device = device
> +
> +        elif "mdev_" in addrstr:
> +            hostdev.type = "mdev"
> +            hostdev.uuid = addrstr[5:].replace('_', '-')
>          else:
>              raise RuntimeError("Unknown address type")
>      except Exception:
> 

This special syntactic sugar is intended only for --hostdev $PCI and
--hostdev $USB addresses of the well-known address forms. I don't think
it's necessary for this case, so please drop it or explain what I missed.

Thanks,
Cole



Re: [PATCH virt-install v1 1/3] virt-install: add mediated device

2021-05-18 Thread Cole Robinson
On 4/14/21 11:18 AM, Shalini Chellathurai Saroja wrote:
> Add support to install a virtual server with passed-through mediated
> device. Mediated device can be created using vGPU attached to
> vfio_pci driver or DASD attached to vfio_ccw driver or APQNs attached
> to vfio_ap driver.
> 
> Signed-off-by: Shalini Chellathurai Saroja 
> ---
>  .../cli/compare/virt-install-many-devices.xml | 19 +
>  tests/data/testdriver/testdriver.xml  | 40 +++
>  tests/test_cli.py |  3 ++
>  tests/test_nodedev.py | 30 ++
>  virtinst/devices/hostdev.py   | 34 +++-
>  virtinst/nodedev.py   |  4 ++
>  6 files changed, 129 insertions(+), 1 deletion(-)

Sorry for the massively delayed review.

This patch looks good. I pushed this patch with a tweak to the
hostdev.py error message.

I'll follow up to the other patches individually

Thanks,
Cole



Re: [PATCH 1/3] Fix forgetting password from keyring

2021-04-06 Thread Cole Robinson
On 3/21/21 1:44 PM, w...@torlan.ru wrote:
> From: WGH 
> 
> First, Item.Delete never accepted any arguments, so this code likely
> never worked.
> 
> Second, Item.Delete might return a Prompt object, which client
> is supposed to call if keyring wants to confirm deletion.
> ---
>  virtManager/lib/keyring.py | 7 ++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/virtManager/lib/keyring.py b/virtManager/lib/keyring.py
> index 203886dc..c0f50142 100644
> --- a/virtManager/lib/keyring.py
> +++ b/virtManager/lib/keyring.py
> @@ -87,7 +87,12 @@ class vmmKeyring(vmmGObject):
>              iface = Gio.DBusProxy.new_sync(self._dbus, 0, None,
>                                             "org.freedesktop.secrets", path,
>                                             "org.freedesktop.Secret.Item", None)
> -            iface.Delete("(s)", "/")
> +            prompt = iface.Delete()
> +            if prompt != "/":
> +                iface = Gio.DBusProxy.new_sync(self._dbus, 0, None,
> +                                               "org.freedesktop.secrets", prompt,
> +                                               "org.freedesktop.Secret.Prompt", None)
> +                iface.Prompt("(s)", "")
>          except Exception:
>              log.exception("Failed to delete keyring secret")
>  
> 

Thanks! I pushed this series

- Cole



Re: [PATCH v2] virtmanager: Fix legacy firmware installation error

2021-04-06 Thread Cole Robinson
On 3/10/21 4:02 PM, Charles Arnold wrote:
> When we choose /usr/share/qemu/bios.bin or /usr/share/qemu/bios-256k.bin
> in Customize configuration before install, show this error message:
> 
> Unable to complete install: 'operation failed: unable to find any master var 
> store for loader: /usr/share/qemu/bios.bin'
> 
> Traceback (most recent call last):
>   File "/usr/share/virt-manager/virtManager/asyncjob.py", line 65, in 
> cb_wrapper
> callback(asyncjob, *args, **kwargs)
>   File "/usr/share/virt-manager/virtManager/createvm.py", line 2081, in 
> _do_async_install
> installer.start_install(guest, meter=meter)
>   File "/usr/share/virt-manager/virtinst/install/installer.py", line 721, in 
> start_install
> domain = self._create_guest(
>   File "/usr/share/virt-manager/virtinst/install/installer.py", line 669, in 
> _create_guest
> domain = self.conn.createXML(install_xml or final_xml, 0)
>   File "/usr/lib64/python3.8/site-packages/libvirt.py", line 4347, in 
> createXML
> raise libvirtError('virDomainCreateXML() failed')
> libvirt.libvirtError: operation failed: unable to find any master var store 
> for loader: /usr/share/qemu/bios.bin
> 
> This patch checks the loader for bios.bin or bios-256k.bin and prevents them
> from being presented as UEFI files (type pflash) but instead as type 'rom'.
> 

Sorry for the late response. From your domcapabilities mail I guess suse
is shipping firmware.json files for bios*.bin? I guess that's acceptable
but it's a little strange for historical usage: no one should really be
manually choosing between bios.bin and bios-256k.bin, for example.
Anyways...

> Signed-off-by: Charles Arnold 
> Signed-off-by: Yu-Chen Cho 
> 
> ---
>  virtManager/object/domain.py | 18 +-
>  virtinst/guest.py| 13 +
>  2 files changed, 22 insertions(+), 9 deletions(-)
> 
> diff --git a/virtManager/object/domain.py b/virtManager/object/domain.py
> index cc2f506d..21841ada 100644
> --- a/virtManager/object/domain.py
> +++ b/virtManager/object/domain.py
> @@ -671,16 +671,16 @@ class vmmDomain(vmmLibvirtObject):
>  
>          if loader != _SENTINEL:
>              if loader is None:
> -                # Implies seabios, aka the default, so clear everything
> -                guest.os.loader = None
> -                guest.os.loader_ro = None
> -                guest.os.loader_type = None
> -                guest.os.nvram = None
> -                guest.os.nvram_template = None
> +                # Implies the default, so clear everything
> +                guest.set_legacy_path(None)
>              else:
> -                # Implies UEFI
> -                guest.set_uefi_path(loader)
> -                guest.disable_hyperv_for_uefi()
> +                if "bios.bin" in loader or "bios-256k.bin" in loader:
> +                    # Implies Custom: seabios, not UEFI
> +                    guest.set_legacy_path(loader)
> +                else:
> +                    # Implies UEFI
> +                    guest.set_uefi_path(loader)
> +                    guest.disable_hyperv_for_uefi()
>  
>          if nvram != _SENTINEL:
>              guest.os.nvram = nvram
> diff --git a/virtinst/guest.py b/virtinst/guest.py
> index 34da3e8c..52f49252 100644
> --- a/virtinst/guest.py
> +++ b/virtinst/guest.py
> @@ -566,6 +566,19 @@ class Guest(XMLBuilder):
>                  self.os.machine)
>              self.os.machine = "q35"
>  
> +    def set_legacy_path(self, path):
> +        """
> +        Configure legacy BIOS for QEMU.
> +        """
> +        self.os.loader = path
> +        if (path):
> +            self.os.loader_ro = True
> +        else:
> +            self.os.loader_ro = None
> +        self.os.loader_type = None
> +        self.os.nvram = None
> +        self.os.nvram_template = None
> +

This still has the same problem I pointed out in previous mails, that
`pytest --cov` now regresses. Anything added to virtinst/ needs unit
test coverage.

I think the simpler thing to do here is extend the 'if loader is None'
in details.py to handle bios*.bin too. Then the logic stays self
contained in virt-manager which is the only one that needs this code
path so far, and the change will be smaller.
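
i.e. roughly this shape (an untested sketch, not a real patch):

```
if loader is None or "bios.bin" in loader or "bios-256k.bin" in loader:
    # Default firmware, or an explicit legacy bios*.bin path:
    # keep the raw loader value (None means hypervisor default)
    # and clear all the UEFI-related bits
    guest.os.loader = loader
    guest.os.loader_ro = None
    guest.os.loader_type = None
    guest.os.nvram = None
    guest.os.nvram_template = None
else:
    # Implies UEFI
    guest.set_uefi_path(loader)
    guest.disable_hyperv_for_uefi()
```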

Thanks,
Cole



Re: [virt-manager PATCH 2/2] virtinst: bhyve: properly configure loader

2021-02-15 Thread Cole Robinson
On 2/11/21 11:05 AM, Roman Bogorodskiy wrote:
>   Cole Robinson wrote:
> 
>> On 2/6/21 8:18 AM, Roman Bogorodskiy wrote:
>>> Bhyve requires explicit loader configuration. So query
>>> domain capabilities, try to find the "official"
>>> firmware and configure all the necessary loader options.
>>>
>>> Signed-off-by: Roman Bogorodskiy 
>>> ---
>>>  virtinst/domain/os.py | 16 
>>>  1 file changed, 16 insertions(+)
>>>
>>> diff --git a/virtinst/domain/os.py b/virtinst/domain/os.py
>>> index 59fbc43e..4a9fbc81 100644
>>> --- a/virtinst/domain/os.py
>>> +++ b/virtinst/domain/os.py
>>> @@ -127,3 +127,19 @@ class DomainOs(XMLBuilder):
>>>                  self.init = "/sbin/init"
>>>              else:
>>>                  self.init = "/bin/sh"
>>> +        if self.conn.is_bhyve():
>>> +            dom_caps = guest.lookup_domcaps()
>>> +            firmware_files = [f.value for f in dom_caps.os.loader.values]
>>> +            if not firmware_files:
>>> +                return
>>> +            # Prefer the known BHYVE_UEFI.fd firmware
>>> +            # from the sysutils/bhyve-firmware port. If not found,
>>> +            # just pick the first one
>>> +            firmware_file = None
>>> +            for f in firmware_files:
>>> +                if 'BHYVE_UEFI.fd' in f:
>>> +                    firmware_file = f
>>> +                    break
>>> +            self.loader = firmware_file or firmware_files[0]
>>> +            self.loader_type = "pflash"
>>> +            self.loader_ro = True
>>>
>>
>> Following on from my cover letter response. I don't think anything needs
>> to be changed here in os.py. I think this should be two patches:
> 
> Oh, I've just re-read it before responding and noticed you suggested to
> do that in two patches, and I've already sent v2. Change #2 appears to
> be a one line change though, I'll look, maybe I've missed some tests
> that need to be updated.
> 

No problem your patches are fine, I pushed them now

>> 1) teaching guest.py:get_uefi_path() and
>> domcaps.find_uefi_path_for_arch() about the preferred bhyve path. You
>> can test this in test_cli.py first by `--boot uefi` generating correct XML.
>>
>> 2) extending guest.py:prefers_uefi() to handle bhyve too. That should
>> change the virt-install/virt-manager default automatically.
>>
>> question: does bhyve libvirt driver support <os firmware='efi'/>? It
>> would be nice if virt-install didn't need to add more of this firmware
>> filename scraping
> 
> IIUC, <os firmware='efi'/> instructs libvirt to find a proper firmware
> automatically, similarly to what I'm trying to implement here in
> virt-manager. I'll see if I can implement that for the bhyve driver.
> 

The main benefit for virt-install is that it lets us stop doing this
sort of filename scraping which in qemu land was a treadmill of changes
that needed to be duplicated across multiple tools. firmware='efi' is
the preferred way forward

Thanks,
Cole



Re: [virt-manager PATCH 0/2] Couple of bhyve related improvements

2021-02-15 Thread Cole Robinson
On 2/11/21 6:34 AM, Roman Bogorodskiy wrote:
>   Cole Robinson wrote:
> 
>> On 2/9/21 10:51 AM, Roman Bogorodskiy wrote:
>>>   Cole Robinson wrote:
>>>
>>>> On 2/6/21 8:18 AM, Roman Bogorodskiy wrote:
>>>>> Roman Bogorodskiy (2):
>>>>>   virtinst: prefer SATA bus for bhyve
>>>>>   virtinst: bhyve: properly configure loader
>>>>>
>>>>>  virtinst/connection.py   |  2 ++
>>>>>  virtinst/devices/disk.py |  3 +++
>>>>>  virtinst/domain/os.py| 16 
>>>>>  3 files changed, 21 insertions(+)
>>>>>
>>>>
>>>> Thanks for the patches. We need to add test coverage though, which since
>>>> these will be the first bhyve tests will take some plumbing.
>>>>
>>>> * Drop bhyve `virsh capabilities` in tests/data/capabilities/bhyve.xml
>>>> * Drop bhyve `virsh domcapabilities` in
>>>> tests/data/capabilities/bhyve-domcaps.xml
>>>> * In tests/utils.py:_URIs, add self.bhyve = _m("bhyve:///") +
>>>> _caps("bhyve.xml") + _domcaps("bhyve-domcaps.xml")
>>>> * in tests/test_cli.py add something like this:
>>>>
>>>> ```
>>>> 
>>>>
>>>> # bhyve specific tests #
>>>>
>>>> 
>>>>
>>>> c = vinst.add_category("bhyve", "--name foobhyve --noautoconsole --connect " + utils.URIs.bhyve)
>>>> c.add_compare("--os-variant fedora27", "bhyve-default-f27")
>>>> ```
>>>>
>>>> * Run the test suite, verify the generated
>>>> ./tests/data/cli/compare/virt-install-bhyve-default-f27.xml looks sane.
>>>>
>>>>
>>>> That will all be the first commit. The next patches should cause test
>>>> output changes, use `pytest --regenerate-output` to refresh the test XML
>>>> and commit the difference.
>>>>
>>>> Also check `pytest --cov` doesn't regress
>>>>
>>>> Thanks,
>>>> Cole
>>>>
>>>>
>>>>
>>>> - Cole
>>>>
>>>
>>> Thanks for the detailed instruction, will do that.
>>>
>>> BTW, I noticed a strange issue with test_cli.py which I was hoping to
>>> fix later, but apparently need to figure out first for smooth testing:
>>>
>>> Many tests are failing with the following trace:
>>>
>>> Traceback (most recent call last):
>>>   File "/usr/home/novel/code/virt-manager/tests/test_cli.py", line 228, in _launch_command
>>>     ret = virtinstall.main(conn=conn)
>>>   File "/usr/home/novel/code/virt-manager/virtinst/virtinstall.py", line 1145, in main
>>>     guest, installer = build_guest_instance(conn, options)
>>>   File "/usr/home/novel/code/virt-manager/virtinst/virtinstall.py", line 598, in build_guest_instance
>>>     cli.validate_disk(disk)
>>>   File "/usr/home/novel/code/virt-manager/virtinst/cli.py", line 377, in validate_disk
>>>     check_inuse_conflict()
>>>   File "/usr/home/novel/code/virt-manager/virtinst/cl

Re: [virt-manager PATCH v2 0/4] Couple of bhyve related improvements

2021-02-15 Thread Cole Robinson
On 2/11/21 10:41 AM, Roman Bogorodskiy wrote:
> Changes from v1:
> 
>  - Added tests/test_cli.py testing,
>  - Reworked loader commit,
>  - Added nmdm console configuration commit.
> 
> Roman Bogorodskiy (4):
>   tests: cli: start testing bhyve
>   virtinst: prefer SATA bus for bhyve
>   virtinst: bhyve: properly configure loader
>   virtinst: bhyve: configure nmdm console
> 
>  tests/data/capabilities/bhyve-domcaps.xml | 55 +++
>  tests/data/capabilities/bhyve.xml | 48 
>  .../virt-install-bhyve-default-f27.xml| 38 +
>  tests/test_cli.py |  9 +++
>  tests/utils.py|  1 +
>  virtinst/connection.py|  2 +
>  virtinst/devices/disk.py  |  3 +
>  virtinst/domcapabilities.py   | 11 
>  virtinst/guest.py | 11 +++-
>  9 files changed, 176 insertions(+), 2 deletions(-)
>  create mode 100644 tests/data/capabilities/bhyve-domcaps.xml
>  create mode 100644 tests/data/capabilities/bhyve.xml
>  create mode 100644 tests/data/cli/compare/virt-install-bhyve-default-f27.xml
> 

Thanks, pushed now, with a small change on top extending the tests a bit
and adding some coverage exclusions

- Cole



Re: [virt-manager PATCH 0/2] Couple of bhyve related improvements

2021-02-10 Thread Cole Robinson
On 2/9/21 10:51 AM, Roman Bogorodskiy wrote:
>   Cole Robinson wrote:
> 
>> On 2/6/21 8:18 AM, Roman Bogorodskiy wrote:
>>> Roman Bogorodskiy (2):
>>>   virtinst: prefer SATA bus for bhyve
>>>   virtinst: bhyve: properly configure loader
>>>
>>>  virtinst/connection.py   |  2 ++
>>>  virtinst/devices/disk.py |  3 +++
>>>  virtinst/domain/os.py| 16 
>>>  3 files changed, 21 insertions(+)
>>>
>>
>> Thanks for the patches. We need to add test coverage though, which since
>> these will be the first bhyve tests will take some plumbing.
>>
>> * Drop bhyve `virsh capabilities` in tests/data/capabilities/bhyve.xml
>> * Drop bhyve `virsh domcapabilities` in
>> tests/data/capabilities/bhyve-domcaps.xml
>> * In tests/utils.py:_URIs, add self.bhyve = _m("bhyve:///") +
>> _caps("bhyve.xml") + _domcaps("bhyve-domcaps.xml")
>> * in tests/test_cli.py add something like this:
>>
>> ```
>> 
>>
>> # bhyve specific tests #
>>
>> 
>>
>> c = vinst.add_category("bhyve", "--name foobhyve --noautoconsole --connect " + utils.URIs.bhyve)
>> c.add_compare("--os-variant fedora27", "bhyve-default-f27")
>> ```
>>
>> * Run the test suite, verify the generated
>> ./tests/data/cli/compare/virt-install-bhyve-default-f27.xml looks sane.
>>
>>
>> That will all be the first commit. The next patches should cause test
>> output changes, use `pytest --regenerate-output` to refresh the test XML
>> and commit the difference.
>>
>> Also check `pytest --cov` doesn't regress
>>
>> Thanks,
>> Cole
>>
>>
>>
>> - Cole
>>
> 
> Thanks for the detailed instruction, will do that.
> 
> BTW, I noticed a strange issue with test_cli.py which I was hoping to
> fix later, but apparently need to figure out first for smooth testing:
> 
> Many tests are failing with the following trace:
> 
> Traceback (most recent call last):
>   File "/usr/home/novel/code/virt-manager/tests/test_cli.py", line 228, in _launch_command
>     ret = virtinstall.main(conn=conn)
>   File "/usr/home/novel/code/virt-manager/virtinst/virtinstall.py", line 1145, in main
>     guest, installer = build_guest_instance(conn, options)
>   File "/usr/home/novel/code/virt-manager/virtinst/virtinstall.py", line 598, in build_guest_instance
>     cli.validate_disk(disk)
>   File "/usr/home/novel/code/virt-manager/virtinst/cli.py", line 377, in validate_disk
>     check_inuse_conflict()
>   File "/usr/home/novel/code/virt-manager/virtinst/cli.py", line 359, in check_inuse_conflict
>     names = dev.is_conflict_disk()
>   File "/usr/home/novel/code/virt-manager/virtinst/devices/disk.py", line 832, in is_conflict_disk
>     read_only=self.read_only)

Re: [virt-manager PATCH] virtinst: fix error message format string

2021-02-10 Thread Cole Robinson
On 2/10/21 8:54 AM, Roman Bogorodskiy wrote:
> Fix a regression introduced by commit 71f034d6b where
> format string expects kwarg "domain", but "vm" is passed instead.
> 
> Signed-off-by: Roman Bogorodskiy 
> ---
>  virtinst/virtxml.py | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/virtinst/virtxml.py b/virtinst/virtxml.py
> index bd2b4282..0c8da37e 100644
> --- a/virtinst/virtxml.py
> +++ b/virtinst/virtxml.py
> @@ -251,7 +251,7 @@ def start_domain_transient(conn, xmlobj, devs, action, 
> confirm):
>  dom = conn.createXML(xmlobj.get_xml())
>  except libvirt.libvirtError as e:
>  fail(_("Failed starting domain '%(domain)s': %(error)s") % {
> - "vm": xmlobj.name,
> + "domain": xmlobj.name,
>   "error": e,
>   })
>  else:
> 

Pushed, thanks. Also I pushed a follow up patch to fix the test case
that should have caught this

- Cole



Re: [virt-manager PATCH 2/2] virtinst: bhyve: properly configure loader

2021-02-08 Thread Cole Robinson
On 2/6/21 8:18 AM, Roman Bogorodskiy wrote:
> Bhyve requires explicit loader configuration. So query
> domain capabilities, try to find the "official"
> firmware and configure all the necessary loader options.
> 
> Signed-off-by: Roman Bogorodskiy 
> ---
>  virtinst/domain/os.py | 16 
>  1 file changed, 16 insertions(+)
> 
> diff --git a/virtinst/domain/os.py b/virtinst/domain/os.py
> index 59fbc43e..4a9fbc81 100644
> --- a/virtinst/domain/os.py
> +++ b/virtinst/domain/os.py
> @@ -127,3 +127,19 @@ class DomainOs(XMLBuilder):
>                  self.init = "/sbin/init"
>              else:
>                  self.init = "/bin/sh"
> +        if self.conn.is_bhyve():
> +            dom_caps = guest.lookup_domcaps()
> +            firmware_files = [f.value for f in dom_caps.os.loader.values]
> +            if not firmware_files:
> +                return
> +            # Prefer the known BHYVE_UEFI.fd firmware
> +            # from the sysutils/bhyve-firmware port. If not found,
> +            # just pick the first one
> +            firmware_file = None
> +            for f in firmware_files:
> +                if 'BHYVE_UEFI.fd' in f:
> +                    firmware_file = f
> +                    break
> +            self.loader = firmware_file or firmware_files[0]
> +            self.loader_type = "pflash"
> +            self.loader_ro = True
> 

Following on from my cover letter response. I don't think anything needs
to be changed here in os.py. I think this should be two patches:

1) teaching guest.py:get_uefi_path() and
domcaps.find_uefi_path_for_arch() about the preferred bhyve path. You
can test this in test_cli.py first by `--boot uefi` generating correct XML
(see the sketch below).

2) extending guest.py:prefers_uefi() to handle bhyve too. That should
change the virt-install/virt-manager default automatically.
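
For (1), the new test could be as small as this (sketch; the
"bhyve-uefi" compare-file name is made up):

```
c.add_compare("--boot uefi", "bhyve-uefi")
```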

question: does bhyve libvirt driver support <os firmware='efi'/>? It
would be nice if virt-install didn't need to add more of this firmware
filename scraping

Thanks,
Cole



Re: [virt-manager PATCH 0/2] Couple of bhyve related improvements

2021-02-08 Thread Cole Robinson
On 2/6/21 8:18 AM, Roman Bogorodskiy wrote:
> Roman Bogorodskiy (2):
>   virtinst: prefer SATA bus for bhyve
>   virtinst: bhyve: properly configure loader
> 
>  virtinst/connection.py   |  2 ++
>  virtinst/devices/disk.py |  3 +++
>  virtinst/domain/os.py| 16 
>  3 files changed, 21 insertions(+)
> 

Thanks for the patches. We need to add test coverage though, which since
these will be the first bhyve tests will take some plumbing.

* Drop bhyve `virsh capabilities` in tests/data/capabilities/bhyve.xml
* Drop bhyve `virsh domcapabilities` in
tests/data/capabilities/bhyve-domcaps.xml
* In tests/utils.py:_URIs, add self.bhyve = _m("bhyve:///") +
_caps("bhyve.xml") + _domcaps("bhyve-domcaps.xml")
* in tests/test_cli.py add something like this:

```


# bhyve specific tests #



c = vinst.add_category("bhyve", "--name foobhyve --noautoconsole --connect " + utils.URIs.bhyve)
c.add_compare("--os-variant fedora27", "bhyve-default-f27")
```

* Run the test suite, verify the generated
./tests/data/cli/compare/virt-install-bhyve-default-f27.xml looks sane.


That will all be the first commit. The next patches should cause test
output changes, use `pytest --regenerate-output` to refresh the test XML
and commit the difference.

Also check `pytest --cov` doesn't regress

Thanks,
Cole



- Cole



Re: [PATCH] virtmanager: Fixed the legacy firmware installation error

2021-01-13 Thread Cole Robinson
On 1/13/21 1:40 AM, Al Cho wrote:
> Hi,
> 
> That's before the last step during install new VM,
> and there is a checkbutton "Customize Configure before install"
> and Apply after choose Frimware bios.bin or bios-256k.bin, Click "Begin
> Installation"
> show up the error message, list virt-manager --debug output below:
> 
> [Wed, 13 Jan 2021 14:35:14 virt-manager 3983] DEBUG (vmwindow:184)
> Showing VM details: <... id=0x7f3b5449b080>
> [Wed, 13 Jan 2021 14:35:14 virt-manager 3983] DEBUG (engine:316) window
> counter incremented to 3
> [Wed, 13 Jan 2021 14:35:20 virt-manager 3983] DEBUG (libvirtobject:48)
> Redefining <vmmDomain ...> with XML diff:
> --- Original XML
> +++ New XML
> @@ -11,6 +11,7 @@
>    <vcpu>2</vcpu>
>    <os>
>      <type ...>hvm</type>
> +    <loader>/usr/share/qemu/bios.bin</loader>
>      <boot .../>
>    </os>
>    <features>
> 
> [Wed, 13 Jan 2021 14:35:25 virt-manager 3983] DEBUG (createvm:1903) User
> finished customize dialog, starting install
> [Wed, 13 Jan 2021 14:35:25 virt-manager 3983] DEBUG (createvm:2000)
> Starting background install process
> [Wed, 13 Jan 2021 14:35:25 virt-manager 3983] DEBUG (installer:599)
> Generated install XML:
> <domain ...>
>   <name>opensuse15.2-2</name>
>   <uuid>e533dfcd-6767-400b-af6a-01de2716c88c</uuid>
>   <metadata>
>     <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
>       <libosinfo:os id="http://opensuse.org/opensuse/15.2"/>
>     </libosinfo:libosinfo>
>   </metadata>
>   <memory>2097152</memory>
>   <currentMemory>2097152</currentMemory>
>   <vcpu>2</vcpu>
>   <os>
>     <type ...>hvm</type>
>     <loader>/usr/share/qemu/bios.bin</loader>
>     <boot .../>
>   </os>
>   ... [features, cpu, clock etc. elided] ...
>   <on_reboot>destroy</on_reboot>
>   <devices>
>     <emulator>/usr/bin/qemu-system-x86_64</emulator>
>     ... [disk, network, graphics etc. elided] ...
>     <rng ...>
>       <backend model='random'>/dev/urandom</backend>
>     </rng>
>   </devices>
> </domain>
> 
> [Wed, 13 Jan 2021 14:35:25 virt-manager 3983] DEBUG (installer:601)
> Generated boot XML:
> <domain ...>
>   <name>opensuse15.2-2</name>
>   <uuid>e533dfcd-6767-400b-af6a-01de2716c88c</uuid>
>   ... [same definition as the install XML above, still carrying the
>   /usr/share/qemu/bios.bin loader, minus the install-phase settings] ...
> </domain>
> 
> [Wed, 13 Jan 2021 14:35:25 virt-manager 3983] DEBUG (error:101) error
> dialog message:
> summary=Unable to complete install: 'operation failed: unable to find
> any master var store for loader: /usr/share/qemu/bios.bin'
> details=Traceback (most recent call last):
>   File "./virtManager/asyncjob.py", line 65, in cb_wrapper
>     callback(asyncjob, *args, **kwargs)
>   File "./virtManager/createvm.py", line 2001, in _do_async_install
>     installer.start_install(guest, meter=meter)
>   File "./virtinst/install/installer.py", line 701, in start_install
>     domain = self._create_guest(
>   File "./virtinst/install/installer.py", line 649, in _create_guest
>     domain = self.conn.createXML(install_xml or final_xml, 0)
>   File "/usr/lib64/python3.8/site-packages/libvirt.py", line 4366, in
> createXML
>     raise libvirtError('virDomainCreateXML() failed')
> libvirt.libvirtError: operation failed: unable to find any master var
> store for loader: /usr/share/qemu/bios.bin

Okay thanks for clarifying. We populate that firmware box from
domcapabilities <loader> values. Can you attach `sudo virsh
domcapabilities` output?

- Cole



Re: [PATCH] virtmanager: Fixed the legacy firmware installation error

2021-01-11 Thread Cole Robinson
On 1/5/21 2:34 AM, Cho, Yu-Chen wrote:
> When we choose /usr/share/qemu/bios.bin or /usr/share/qemu/bios-256k.bin
> in Customize configuration before install, show this error message:
> 
> Unable to complete install: 'operation failed: unable to find any master var 
> store for loader: /usr/share/qemu/bios.bin'
> 

This still seems to have tabs vs spaces issues, just running
virt-manager from git after applying this gives me a backtrace.

How are you selecting bios.bin/bios-256k.bin in the virt-manager UI
exactly? There's a 'Firmware' option for selecting OVMF or BIOS
(meaning, let qemu choose the default) but on my machine there is no
explicit bios*.bin option

- Cole



Re: [PATCH] virtmanager: Fixed the legacy firmware installation error

2021-01-04 Thread Cole Robinson
On 12/22/20 3:24 AM, Cho, Yu-Chen wrote:
> When we choose /usr/share/qemu/bios.bin or /usr/share/qemu/bios-256k.bin
> in Customize configuration before install, show this error message:
> 
> Unable to complete install: 'operation failed: unable to find any master var 
> store for loader: /usr/share/qemu/bios.bin'
> 
> Traceback (most recent call last):
>   File "/usr/share/virt-manager/virtManager/asyncjob.py", line 65, in 
> cb_wrapper
> callback(asyncjob, *args, **kwargs)
>   File "/usr/share/virt-manager/virtManager/createvm.py", line 2081, in 
> _do_async_install
> installer.start_install(guest, meter=meter)
>   File "/usr/share/virt-manager/virtinst/install/installer.py", line 721, in 
> start_install
> domain = self._create_guest(
>   File "/usr/share/virt-manager/virtinst/install/installer.py", line 669, in 
> _create_guest
> domain = self.conn.createXML(install_xml or final_xml, 0)
>   File "/usr/lib64/python3.8/site-packages/libvirt.py", line 4347, in 
> createXML
> raise libvirtError('virDomainCreateXML() failed')
> libvirt.libvirtError: operation failed: unable to find any master var store 
> for loader: /usr/share/qemu/bios.bin
> 
> This patch checks whether the loader includes bios.bin or bios-256k.bin.
> 
> Signed-off-by: Cho, Yu-Chen 

Thanks for the patch. Please run pytest --cov and make sure there is not
any test or coverage regressions after this patch is applied.

> ---
>  virtManager/object/domain.py | 18 +-
>  virtinst/guest.py| 13 +
>  2 files changed, 22 insertions(+), 9 deletions(-)
> 
> diff --git a/virtManager/object/domain.py b/virtManager/object/domain.py
> index cc2f506d..77590375 100644
> --- a/virtManager/object/domain.py
> +++ b/virtManager/object/domain.py
> @@ -671,16 +671,16 @@ class vmmDomain(vmmLibvirtObject):
>  
>          if loader != _SENTINEL:
>              if loader is None:
> -                # Implies seabios, aka the default, so clear everything
> -                guest.os.loader = None
> -                guest.os.loader_ro = None
> -                guest.os.loader_type = None
> -                guest.os.nvram = None
> -                guest.os.nvram_template = None
> +                # Implies the default, so clear everything
> +                guest.set_legacy_path(None)
>              else:
> -                # Implies UEFI
> -                guest.set_uefi_path(loader)
> -                guest.disable_hyperv_for_uefi()
> +                if ("bios.bin" or "bios-256k.bin" in loader):
> +                    # Imlies Custom: seabios, not UEFI

Mixing tabs and spaces here which breaks running pytest on my machine.
Also Imlies -> Implies

- Cole



Re: [virt-manager PATCH 0/3] Create pool: show only available types

2020-12-09 Thread Cole Robinson
On 12/8/20 9:09 AM, Pino Toscano wrote:
> Hi Cole,
> 
> I'm reordering the paragraphs of your answer to ease mine.
> 
> On Monday, 7 December 2020 20:39:29 CET Cole Robinson wrote:
>> Can you explain your motivation a bit: Has this caused you issues
>> before?
> 
> My main motivation was to fix the fact that the pool creation dialog
> offers all the known types to virt-manager, even those that are not
> supported by the underlying connection. So I thought that, rather than
> letting the users select any of those unavailable ones to only get a
> "not supported" error afterwards, it seems better to offer only what
> can actually be created/setup.
> 

Yes I agree that it would be an improvement over the current UI to
report when we know a pool type is not going to work.

> Another thing came to my mind by reading this old Debian bug:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699701
> IIRC the "new volume" dialog used to offer a lot more formats in the
> past than only raw and qcow2. Still, that dialog has an editable field
> with the format, so you can change it to anything uncommon or even
> non-existing. So a possible idea was to validate that, both in the
> "new volume" dialog and when creating a new volume as result of
> virt-install --disk parameters, and warn on invalid formats.
> 

If libvirt throws an error message quickly enough I'm not really a fan
of duplicating validation in the app, unless there's a really good
reason. It's more code that needs testing. Worse is its an opportunity
for us to screw up and falsely reject something, so what started out as
a minor validation improvement can turn into a major blocking issue
(this has happened many times before).

Valid cases IMO are things like the new VM wizard where there are
multiple phases and we want to catch errors long before libvirt
interaction, or certain ambiguous situations where we want to point the
user in a better direction. There's other cases too but I don't think
volume formats apply.

Volume formats are problematic for other reasons too. Libvirt reports
'dmg' is supported but actually it's not available for volume creation
via qemu-img. reporting 'qcow' which basically no one wants could be
confusing for users looking for qcow2. RHEL compiles out support for
multiple formats but that isn't reflected in the capabilities list.
There's security concerns about some of the less common formats too IIRC

We used to have a hardcoded list of volume types in UI but I removed it
in favor of the current approach. I suspect 99% of users want qcow2 or
raw, and if they know they want something different then typing into the
field is not a big burden. That's my thinking

Using the storage capabilities to determine which storage pool types are
even _capable_ of creating file based volumes would replace a hardcoded
list we have in the code, so that's another net win, but still IMO not
worth the storagecapabilities call either.

>> Or is there something more useful in the storage XML that you
>> want to add support for afterwards?
> 
> One addition I wanted to do is supporting iscsi-direct pools, including
> their creation. Then I run into "hmm, that will make the combo in the
> new pool dialog even longer", so that got me into this series.
> 

That's fair.

Do you use iscsi-direct, or just notice it was missing? I've only heard
about Red Hat QE using iscsi-direct but it's fairly new in the scheme of
things so I could be missing something.

If you use it, do you use <auth> with it? Because there's presently no
auth UI, but I know the XML supports it.

>> The code looks fine but I am conflicted about this. I'm not sure it's
>> worth adding code to read and process storage capabilities XML at all.
>> I'd rather see the storage dialogs become smaller, not more
>> featureful/smarter at the cost of increased maintenance and potential
>> for future feature creep.
> 
> Considering what I wrote, I can see your POV, however on my defense
> I don't consider my suggestion that expensive (in terms of complexity
> & maintenance).
>

I agree, the code is straight forward. But it opens the door to more
code which I'm not sold on the net value of either.

Adding full coverage for this would raise the code amount too. For
example this API is not supported by libvirt's test driver. If we want
to test it in the uitest suite we will have to mock the API. We do that
already for capabilities and domcapabilities so there's an existing
pattern but it takes more work and consideration.

>> For example sheepdog and mpath can be dropped
>> entirely IMO. rbd pool creation should probably be dropped because the
>> UI is never going to be comprehensive enough to handle the common case
>> which requires specifying an auth secret. Same may apply to gluster but

Re: [virt-manager PATCH 0/3] Create pool: show only available types

2020-12-09 Thread Cole Robinson
On 12/7/20 5:18 PM, Peter Crowther wrote:
> Cole, what would you recommend as an alternative to virt-manager these
> days?  It's slowly becoming apparent that the tool is moving in a
> direction opposite to our interests: low-maintenance rather than
> high-functionality, and where even third-party contributions aren't
> accepted into the codebase because of potential future maintenance
> effort.  That's not a problem if there's something else we can deploy to
> folks who aren't necessarily handy with XML and a command line; it *is*
> a problem if there's nothing else out there.
> 
> Or, as noted before, do we simply fork to a potentially (and
> deliberately) more leading-edge tool that has a different trade-off
> between functionality, maintenance, and reliability?
> 

I can't tell if this is a legitimate question or you were trying to make
a point, but the other desktop UI tools in libvirt space I know of are
gnome-boxes and qt-virt-manager (no relation). I'm sure there's others
but those are the only two that have crossed my radar in the past five
years that are actively maintained.

I'm curious do any of the bits I mentioned dropping actually matter for
your usecase? Creating storage pools outside of the simple dir/fs
usecases is pretty niche in my experience. For the case like ceph to
even get it working as a storage pool you either need to have pretty
strong understanding of libvirt XML, or you are copy+pasting XML from an
online source, in which case having clicky (and limited) UI doesn't add
much over the XML editor or the command line.

- Cole

> Cheers,
> 
> - Peter
> 
> On Mon, 7 Dec 2020 at 19:39, Cole Robinson <crobi...@redhat.com> wrote:
> 
> On 11/24/20 9:21 AM, Pino Toscano wrote:
> > Hi,
> >
> > this series adds a minimal StoragePoolCapabilities handling using the
> > virConnect.getStoragePoolCapabilities libvirt API; this is used to
> > filter the available pool types in the "Create pool" dialog, so it does
> > not offer anymore pool types that cannot be created (as unsupported by
> > the libvirt connection).
> >
> > Pino Toscano (3):
> >   support: check for virConnect.getStoragePoolCapabilities
> >   virtinst: add a basic StoragePoolCapabilities
> >   createpool: show only available pool types
> >
> >  tests/data/capabilities/poolcaps-fs.xml | 207
> >  tests/test_capabilities.py              |  25 +++
> >  virtManager/createpool.py               |   7 +-
> >  virtinst/__init__.py                    |   1 +
> >  virtinst/storagepoolcapabilities.py     |  61 +++
> >  virtinst/support.py                     |   2 +
> >  6 files changed, 302 insertions(+), 1 deletion(-)
> >  create mode 100644 tests/data/capabilities/poolcaps-fs.xml
> >  create mode 100644 virtinst/storagepoolcapabilities.py
> >
> 
> The code looks fine but I am conflicted about this. I'm not sure it's
> worth adding code to read and process storage capabilities XML at all.
> I'd rather see the storage dialogs become smaller, not more
> featureful/smarter at the cost of increased maintenance and potential
> for future feature creep. For example sheepdog and mpath can be dropped
> entirely IMO. rbd pool creation should probably be dropped because the
> UI is never going to be comprehensive enough to handle the common case
> which requires specifying an auth secret. Same may apply to gluster but
> I'd need to double check.
> 
> As implemented I'm a little iffy on the UI too. Just hiding options
> without giving the user a hint can cause confusion, like why is FOO
> available as a pool option for one connection but not the other. There's
> ways to fix it but at the cost of more code which goes back to my point
> above.
> 
> Can you explain your motivation a bit: Has this caused you issues
> before? Or is there something more useful in the storage XML that you
> want to add support for afterwards?
> 
> Thanks,
> Cole
> 


- Cole



Re: [virt-manager PATCH 0/3] Create pool: show only available types

2020-12-07 Thread Cole Robinson
On 11/24/20 9:21 AM, Pino Toscano wrote:
> Hi,
> 
> this series adds a minimal StoragePoolCapabilities handling using the
> virConnect.getStoragePoolCapabilities libvirt API; this is used to
> filter the available pool types in the "Create pool" dialog, so it does
> not offer anymore pool types that cannot be created (as unsupported by
> the libvirt connection).
> 
> Pino Toscano (3):
>   support: check for virConnect.getStoragePoolCapabilities
>   virtinst: add a basic StoragePoolCapabilities
>   createpool: show only available pool types
> 
>  tests/data/capabilities/poolcaps-fs.xml | 207 
>  tests/test_capabilities.py  |  25 +++
>  virtManager/createpool.py   |   7 +-
>  virtinst/__init__.py|   1 +
>  virtinst/storagepoolcapabilities.py |  61 +++
>  virtinst/support.py |   2 +
>  6 files changed, 302 insertions(+), 1 deletion(-)
>  create mode 100644 tests/data/capabilities/poolcaps-fs.xml
>  create mode 100644 virtinst/storagepoolcapabilities.py
> 

The code looks fine but I am conflicted about this. I'm not sure it's
worth adding code to read and process storage capabilities XML at all.
I'd rather see the storage dialogs become smaller, not more
featureful/smarter at the cost of increased maintenance and potential
for future feature creep. For example sheepdog and mpath can be dropped
entirely IMO. rbd pool creation should probably be dropped because the
UI is never going to be comprehensive enough to handle the common case
which requires specifying an auth secret. Same may apply to gluster but
I'd need to double check.

As implemented I'm a little iffy on the UI too. Just hiding options
without giving the user a hint can cause confusion, like why is FOO
available as a pool option for one connection but not the other. There's
ways to fix it but at the cost of more code which goes back to my point
above.

Can you explain your motivation a bit: Has this caused you issues
before? Or is there something more useful in the storage XML that you
want to add support for afterwards?

Thanks,
Cole



ANNOUNCE: virt-manager 3.2.0 released

2020-11-16 Thread Cole Robinson
ANNOUNCE: virt-manager 3.2.0 released

I'm happy to announce the release of virt-manager 3.2.0!

virt-manager is a desktop application for managing KVM, Xen, and LXC
virtualization via libvirt.

The release can be downloaded from:

http://virt-manager.org/download/

This release includes:

- Slim down filesystem device editor UI
- Fix TOCTTOU virt-install bugs (Martin Pitt)
- Several other bug fixes

Thanks to everyone who has contributed to this release through testing,
bug reporting, submitting patches, and otherwise sending in feedback!

Thanks,
Cole



Re: [virt-manager PATCH] Add /usr/sbin to search path for non-root user

2020-11-03 Thread Cole Robinson
On 10/23/20 5:20 PM, Charles Arnold wrote:
> On a fresh install where no previous connection has been made a
> non-root user starting virt-manager won't find the installed libvirtd
> because the location is not in the user path.
> 
> Signed-off-by: Charles Arnold 
> ---
>  virtManager/lib/connectauth.py | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/virtManager/lib/connectauth.py b/virtManager/lib/connectauth.py
> index cc83e469..71e1b21f 100644
> --- a/virtManager/lib/connectauth.py
> +++ b/virtManager/lib/connectauth.py
> @@ -204,7 +204,8 @@ def connect_error(conn, errmsg, tb, warnconsole):
>  ##
>  
>  def setup_first_uri(config, tryuri):
> -    libvirtd_installed = bool(shutil.which("libvirtd"))
> +    # Add /usr/sbin to the path in case non-root user launches virt-manager
> +    libvirtd_installed = bool(shutil.which("libvirtd", path=os.environ['PATH'] + os.pathsep + "/usr/sbin"))
>      if config.CLITestOptions.fake_no_libvirtd:
>          libvirtd_installed = False
>  
> 

Thanks Charles! Pushed now

- Cole



Re: virt-install: changing default --os-variant behavior

2020-10-21 Thread Cole Robinson
Following on from this I submitted a linux2020 entry to osinfo-db:

https://gitlab.com/libosinfo/osinfo-db/-/merge_requests/237

Thanks,
Cole

On 9/20/20 4:09 PM, Cole Robinson wrote:
> Hi virt-tools-list + CCd libosinfo developers too
> 
> Use of virt-install without a detected guest OS, and without manually
> specified --os-variant, is a never ending source of bugs and user
> confusion and poorly performing VM configs. This mail is intended to
> start a discussion about how to improve that situation.
> 
> In virt-install 3.0.0 I added some extra options to --os-variant. Examples:
> 
> * --os-variant detect=on,require=on: detect from media but fail if
> detection doesn't work
> * --os-variant detect=on,name=NAME: detect from media but fall back to
> os NAME if detection fails
> 
> The default didn't change though, which is `--os-variant
> detect=on,name=generic`, with a warning printed if detection fails.
> 'generic' OS entry is a virt-install creation that basically means 'use
> libvirts defaults', which are not performant to say the least.
> 
> (Caveat this is mostly about x86 qemu-using hypervisors. We largely
> assume the presence of virtio for all arm32/aarch64/s390x/ppc64/riscv
> KVM guests even though that's not entirely correct.)
> 
> 
> For virt-install, the two possible paths forward I see are
> 
> 1) Default to --os-variant detect=on. Detection failure is fatal. Couple
> this with a big descriptive error when it fails explaining why this
> changed, and explaining how to specify a fallback name, with some
> suggestions how to choose one. Probably wouldn't want to make this
> change until the new --os-variant options have been around for a release
> or two
> 
> 2) Default to --os-variant detect=on,name=<...>. 'give me
> virtio' is representative of what most virt-install users want. But this
> adds some new corner cases, ex if anyone is using virt-install with
> windows up until now they could get away without specifying a
> --os-variant and things would generally work, but now if we default to
> virtio windows out of the box is not going to install. I kinda doubt
> many people are using virt-install with windows though.
> 
> 
> I think #1 is the way to go but either case would benefit from some
> coordination with libosinfo.
> 
> IMO we don't really have a good story for when users are installing a
> distro that isn't represented in libosinfo. This applies to virt-manager
> too. If users are installing 'spanking new exotic distro FOO' and
> libosinfo can't detect their install media, what OS value should we
> recommend they use? If they are installing linux there's a 99.9%
> certainty that the distro is new enough to support all the major virtio
> devices, so mostly it doesn't matter what distro they choose as long as
> it's from the past decade. But conveying that is difficult, it's not
> easily actionable, and it would feel weird to choose ubuntu* when you
> are installing slackware or something.
> 
> IMO it would be useful for osinfo-db to have some 'meta' OS entries.
> Something like linux2018, linux2020, etc. I figure we can probably peg
> them to roughly match ubuntu LTS device support content.
> 
> If those existed, we could show them in virt-manager's UI when we don't
> find any hits for whatever distro name the user starts typing into the
> search field, like we do for the 'generic' entry. We could even consider
> pre-selecting them in the virt-manager UI.
> 
> Similarly we could recommend them in the virt-install error message if
> detection fails. If they follow a consistent naming pattern, users could
> probably guess what value they want based on the age of their distro.
> 
> Thoughts? Other ideas?
> 
> Thanks,
> Cole
> 


- Cole



Re: virt-install: changing default --os-variant behavior

2020-09-23 Thread Cole Robinson
On 9/21/20 4:01 PM, Charles Arnold wrote:
> On Sun, 2020-09-20 at 16:09 -0400, Cole Robinson wrote:
>> Hi virt-tools-list + CCd libosinfo developers too
>>
>> Use of virt-install without a detected guest OS, and without manually
>> specified --os-variant, is a never ending source of bugs and user
>> confusion and poorly performing VM configs. This mail is intended to
>> start a discussion about how to improve that situation.
>>
>> In virt-install 3.0.0 I added some extra options to --os-variant.
>> Examples:
>>
>> * --os-variant detect=on,require=on: detect from media but fail if
>> detection doesn't work
>> * --os-variant detect=on,name=NAME: detect from media but fall back
>> to
>> os NAME if detection fails
>>
>> The default didn't change though, which is `--os-variant
>> detect=on,name=generic`, with a warning printed if detection fails.
>> 'generic' OS entry is a virt-install creation that basically means
>> 'use
>> libvirt's defaults', which are not performant to say the least.
>>
>> (Caveat this is mostly about x86 qemu-using hypervisors. We largely
>> assume the presence of virtio for all arm32/aarch64/s390x/ppc64/riscv
>> KVM guests even though that's not entirely correct.)
>>
>>
>> For virt-install, the two possible paths forward I see are
>>
>> 1) Default to --os-variant detect=on. Detection failure is fatal.
>> Couple
>> this with a big descriptive error when it fails explaining why this
>> changed, and explaining how to specify a fallback name, with some
>> suggestions how to choose one. Probably wouldn't want to make this
>> change until the new --os-variant options have been around for a
>> release
>> or two
> 
> +1
> 
>>
>> 2) Default to --os-variant detect=on,name=. 'give
>> me
>> virtio' is representative of what most virt-install users want. But
>> this
>> adds some new corner cases, ex if anyone is using virt-install with
>> windows up until now they could get away without specifying a
>> --os-variant and things would generally work, but now if we default
>> to
>> virtio windows out of the box is not going to install. I kinda doubt
>> many people are using virt-install with windows though.
> 
> It is hard to know how many customers use windows VMs but I know for a
> fact (from bug reports) that there are some.
> 
>>
>>
>> I think #1 is the way to go but either case would benefit from some
>> coordination with libosinfo.
>>
>> IMO we don't really have a good story for when users are installing a
>> distro that isn't represented in libosinfo. This applies to virt-
>> manager
>> too. If users are installing 'spanking new exotic distro FOO' and
>> libosinfo can't detect their install media, what OS value should we
>> recommend they use? If they are installing linux there's a 99.9%
>> certainty that the distro is new enough to support all the major
>> virtio
>> devices, so mostly it doesn't matter what distro they choose as long
>> as
>> it's from the past decade. But conveying that is difficult, it's not
>> easily actionable, and it would feel weird to choose ubuntu* when you
>> are installing slackware or something.
> 
> There is (or was) some code in virt-manager that when a distro is
> identified by name but the particular version is not found in libosinfo
> it falls back to using the newest definition of that distro in the
> database.
> 

Yes we have some heuristics for this. In time they should go away though
because the -unknown pattern in libosinfo should handle it, at least for
.treeinfo using distros.

>>
>> IMO it would be useful for osinfo-db to have some 'meta' OS entries.
>> Something like linux2018, linux2020, etc. I figure we can probably
>> peg
>> them to roughly match ubuntu LTS device support content.
> 
> This is much better than 'generic' which never works

Great, thanks for the feedback

- Cole



Re: virt-install: changing default --os-variant behavior

2020-09-22 Thread Cole Robinson
On 9/22/20 11:53 AM, Pino Toscano wrote:
> On Sunday, 20 September 2020 22:09:54 CEST Cole Robinson wrote:
>> Use of virt-install without a detected guest OS, and without manually
>> specified --os-variant, is a never ending source of bugs and user
>> confusion and poorly performing VM configs.
> 
> First of all: how often does this case happen? My guess is that this
> is common on older distros, when your osinfo-db is not up-to-date, and
> you are trying to install a very recent guest. Are there other notable
> or frequently occurring cases?
> 

It was worse in the past for sure, osinfo-db was much more sparse 2
years ago. Still, the main places I see it historically are people
installing pre-release distros, or new distro variants that aren't yet
represented in osinfo-db: special RHEL variants, more recently
Fedora/RHEL CoreOS.

>> In virt-install 3.0.0 I added some extra options to --os-variant. Examples:
>>
>> * --os-variant detect=on,require=on: detect from media but fail if
>> detection doesn't work
>> * --os-variant detect=on,name=NAME: detect from media but fall back to
>> os NAME if detection fails
> 
> So either of
>   --os-variant generic
>   --os-variant detect=off,name=generic
> will disable the detection and use "generic" (i.e. no specific OS),
> right?
> 

`--os-variant generic` and `--os-variant name=generic` are identical,
and both will skip detection. detect=on isn't the unconditional default,
it's more that if the user doesn't specify anything, the default is
`--os-variant detect=on,name=generic`, otherwise those options are all
empty/false.
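
To make the resolution concrete, a rough sketch of those defaulting
rules (illustrative only, not the real virtinst option parser):

def resolve_os_variant(user_value):
    # No --os-variant at all: detection on, with 'generic' fallback.
    if user_value is None:
        return {"detect": True, "require": False, "name": "generic"}
    opts = {"detect": False, "require": False, "name": None}
    for part in user_value.split(","):
        key, sep, val = part.partition("=")
        if not sep:
            # Bare values like 'generic' are shorthand for name=...
            key, val = "name", key
        if key in ("detect", "require"):
            opts[key] = (val == "on")
        elif key == "name":
            opts["name"] = val
    return opts

print(resolve_os_variant(None))        # detect on, generic fallback
print(resolve_os_variant("generic"))   # detection skipped entirely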

>> (Caveat this is mostly about x86 qemu-using hypervisors. We largely
>> assume the presence of virtio for all arm32/aarch64/s390x/ppc64/riscv
>> KVM guests even though that's not entirely correct.)
> 
> Right: IMHO we need to provide more accurate devices for different
> architectures in osinfo-db, and slowly move virt-manager to rely on
> them rather than hardcoding architecture checks. This is a different
> discussion though :)
> 

Agreed

>> For virt-install, the two possible paths forward I see are
>>
>> 1) Default to --os-variant detect=on. Detection failure is fatal. Couple
>> this with a big descriptive error when it fails explaining why this
>> changed, and explaining how to specify a fallback name, with some
>> suggestions how to choose one. Probably wouldn't want to make this
>> change until the new --os-variant options have been around for a release
>> or two
> 
> Makes sense, maybe setting 2022 as the year to perform this switch?
> 

We could base it on RHEL/Centos release. If/when virt-install 3.0.0 is
in a centos8 release, then the next upstream release can make the
change. That may well be 2022

>> 2) Default to --os-variant detect=on,name=. 'give me
>> virtio' is representative of what most virt-install users want. But this
>> adds some new corner cases, ex if anyone is using virt-install with
>> windows up until now they could get away without specifying a
>> --os-variant and things would generally work, but now if we default to
>> virtio windows out of the box is not going to install. I kinda doubt
>> many people are using virt-install with windows though.
> 
> There are certainly people who install Windows like this, and even the
> unattended installation works (registration included). The thing is,
> defaulting to virtio only works for Linux guests:
> - Windows guests: you really want to use the latest available version
>   of the server/desktop Windows you have, i.e. currently win10 for
>   desktop and win2k19 for server; sadly there is no automated way to
>   know that
> - other OSes: virtio may not be implemented at all, so you really want
>   generic for them
> 
>> IMO we don't really have a good story for when users are installing a
>> distro that isn't represented in libosinfo. This applies to virt-manager
>> too.
> 
> Yes, I agree with this. Sadly I don't have a good answer either,
> otherwise I'd have proposed it already.
> 
>> If users are installing 'spanking new exotic distro FOO' and
>> libosinfo can't detect their install media, what OS value should we
>> recommend they use? If they are installing linux there's a 99.9%
>> certainty that the distro is new enough to support all the major virtio
>> devices, so mostly it doesn't matter what distro they choose as long as
>> it's from the past decade. But conveying that is difficult, it's not
>> easily actionable, and it would feel weird to choose ubuntu* when you
>> are installing slackware or something.
> 
> This is IMHO split between "there is no latest version of the distro
> I want" and "there

Re: virt-install: changing default --os-variant behavior

2020-09-22 Thread Cole Robinson
On 9/22/20 11:59 AM, Peter Crowther wrote:
> On Tue, 22 Sep 2020 at 16:48, Cole Robinson wrote:
> 
> On 9/20/20 4:46 PM, Peter Crowther wrote:
> > On Sun, 20 Sep 2020 at 21:10, Cole Robinson wrote:
> > [...]
> >
> >     2) Default to --os-variant detect=on,name=. 'give me
> >     virtio' is representative of what most virt-install users want.
> >     But this adds some new corner cases, ex if anyone is using
> >     virt-install with windows up until now they could get away
> >     without specifying a --os-variant and things would generally
> >     work, but now if we default to virtio windows out of the box is
> >     not going to install. I kinda doubt many people are using
> >     virt-install with windows though.
> >
> >
> > As feedback, this is the single largest use case in the main
> > virtualisation cluster I manage.  CentOS hosts, 90% Windows 7(!) and
> > 8.1 guests.  We have our virt-install scripted to add a floppy drive
> > with autoinstall file, virtio drivers, and a few other bits and
> > pieces like a minimal puppet client install (surprisingly
> > non-trivial in Windows 7 gold), so we could live with such a change;
> > but please don't assume that if the hosts are Linux then the guests
> > are also likely to be Linux.
> >
> 
> But presumably you are already specifying an --os-variant though? In
> which case you would be fine even if the default changes.
> 
> 
> Yes, we are.
> 
> On the principle of least surprise, I would suggest "Option 3" of your
> two suggestions: virt-install warns but succeeds for such an install.  I
> don't think it's unreasonable to default to slower emulated hardware
> rather than virtio as long as the user gets a reasonably helpful notice
> that virt-install doesn't know what else to do so has gone for a slow
> fail-safe.
> 

We already do this but it's still problematic in my experience. Most
people aren't intentionally opting into old hardware; they are getting
it by accident or ignorance.

Is #1 palatable if you consider we hard error, but print a suggestion
that includes: 'if you want the old behavior, add --os-variant
detect=on,name=generic to your command line'. Some people implicitly
depending on the old behavior will have to one time update their
scripts. New people writing scripts will add that change after their
first attempt. The only problem I see is if people are running the same
script across drastically different versions of virt-install.

Actually now that I think about it, that's not a problem either.
`--os-variant auto` has been available for a decade, which explicitly
opts in to the current default behavior. So users will have a cross
version way to get the old behavior back.

Thanks,
Cole



Re: virt-install: changing default --os-variant behavior

2020-09-22 Thread Cole Robinson
On 9/20/20 4:46 PM, Peter Crowther wrote:
> On Sun, 20 Sep 2020 at 21:10, Cole Robinson wrote:
> [...]
> 
> 2) Default to --os-variant detect=on,name=. 'give me
> virtio' is representative of what most virt-install users want. But this
> adds some new corner cases, ex if anyone is using virt-install with
> windows up until now they could get away without specifying a
> --os-variant and things would generally work, but now if we default to
> virtio windows out of the box is not going to install. I kinda doubt
> many people are using virt-install with windows though.
> 
> 
> As feedback, this is the single largest use case in the main
> virtualisation cluster I manage.  CentOS hosts, 90% Windows 7(!) and 8.1
> guests.  We have our virt-install scripted to add a floppy drive with
> autoinstall file, virtio drivers, and a few other bits and pieces like a
> minimal puppet client install (surprisingly non-trivial in Windows 7
> gold), so we could live with such a change; but please don't assume that
> if the hosts are Linux then the guests are also likely to be Linux.
> 

Thanks for the info. I don't expect we will go this route.

But presumably you are already specifying an --os-variant though? In
which case you would be fine even if the default changes.

If you aren't using --os-variant there's a chance that media OS
detection could fail (if you started using a new/updated windows ISO
that osinfo-db doesn't detect correctly) and then virt-install would
start generating different configurations on you.

- Cole



virt-install: changing default --os-variant behavior

2020-09-20 Thread Cole Robinson
Hi virt-tools-list + CCd libosinfo developers too

Use of virt-install without a detected guest OS, and without manually
specified --os-variant, is a never ending source of bugs and user
confusion and poorly performing VM configs. This mail is intended to
start a discussion about how to improve that situation.

In virt-install 3.0.0 I added some extra options to --os-variant. Examples:

* --os-variant detect=on,require=on: detect from media but fail if
detection doesn't work
* --os-variant detect=on,name=NAME: detect from media but fall back to
os NAME if detection fails
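
For illustration, those map to command lines like these (the ISO path
and fallback OS name are just examples):

  virt-install --cdrom /path/to/media.iso --os-variant detect=on,require=on ...
  virt-install --cdrom /path/to/media.iso --os-variant detect=on,name=fedora32 ...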

The default didn't change though, which is `--os-variant
detect=on,name=generic`, with a warning printed if detection fails.
'generic' OS entry is a virt-install creation that basically means 'use
libvirt's defaults', which are not performant to say the least.

(Caveat this is mostly about x86 qemu-using hypervisors. We largely
assume the presence of virtio for all arm32/aarch64/s390x/ppc64/riscv
KVM guests even though that's not entirely correct.)


For virt-install, the two possible paths forward I see are

1) Default to --os-variant detect=on. Detection failure is fatal. Couple
this with a big descriptive error when it fails explaining why this
changed, and explaining how to specify a fallback name, with some
suggestions how to choose one. Probably wouldn't want to make this
change until the new --os-variant options have been around for a release
or two

2) Default to --os-variant detect=on,name=. 'give me
virtio' is representative of what most virt-install users want. But this
adds some new corner cases, ex if anyone is using virt-install with
windows up until now they could get away without specifying a
--os-variant and things would generally work, but now if we default to
virtio windows out of the box is not going to install. I kinda doubt
many people are using virt-install with windows though.


I think #1 is the way to go but either case would benefit from some
coordination with libosinfo.

IMO we don't really have a good story for when users are installing a
distro that isn't represented in libosinfo. This applies to virt-manager
too. If users are installing 'spanking new exotic distro FOO' and
libosinfo can't detect their install media, what OS value should we
recommend they use? If they are installing linux there's a 99.9%
certainty that the distro is new enough to support all the major virtio
devices, so mostly it doesn't matter what distro they choose as long as
it's from the past decade. But conveying that is difficult, it's not
easily actionable, and it would feel weird to choose ubuntu* when you
are installing slackware or something.

IMO it would be useful for osinfo-db to have some 'meta' OS entries.
Something like linux2018, linux2020, etc. I figure we can probably peg
them to roughly match ubuntu LTS device support content.

If those existed, we could show them in virt-manager's UI when we don't
find any hits for whatever distro name the user starts typing into the
search field, like we do for the 'generic' entry. We could even consider
pre-selecting them in the virt-manager UI.

Similarly we could recommend them in the virt-install error message if
detection fails. If they follow a consistent naming pattern, users could
probably guess what value they want based on the age of their distro.

Thoughts? Other ideas?

Thanks,
Cole



Re: [virt-manager PATCH 8/9] i18n: use a single string for error message

2020-09-17 Thread Cole Robinson
On 9/17/20 2:44 AM, Pino Toscano wrote:
> Signed-off-by: Pino Toscano 
> ---
>  virtManager/virtmanager.py | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/virtManager/virtmanager.py b/virtManager/virtmanager.py
> index ee7d4d89..2c8ccbb8 100755
> --- a/virtManager/virtmanager.py
> +++ b/virtManager/virtmanager.py
> @@ -41,8 +41,9 @@ def _show_startup_error(msg, details):
>exc_info=True)
>  from .error import vmmErrorDialog
>  err = vmmErrorDialog.get_instance()
> -title = _("Error starting Virtual Machine Manager")
> -err.show_err(title + ": " + msg,
> +errmsg = (_("Error starting Virtual Machine Manager: %(error)s") %
> +  {"error": msg})
> +err.show_err(errmsg,
>   details=details,
>   title=title,
>   modal=True,
> 

title is undefined after this, so I left in the previous 'title =' line
as well
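
I.e. the pushed variant keeps both assignments, roughly (a sketch of
the amended lines, not the exact commit):

title = _("Error starting Virtual Machine Manager")
errmsg = (_("Error starting Virtual Machine Manager: %(error)s") %
          {"error": msg})
err.show_err(errmsg,
             details=details,
             title=title,
             modal=True)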

- Cole



Re: [virt-manager PATCH 0/9] Various i18n fixes, part #4

2020-09-17 Thread Cole Robinson
On 9/17/20 2:43 AM, Pino Toscano wrote:
> This patch series removes some more of the i18n issues, mostly
> untranslatable strings, and string puzzles.
> 
> Pino Toscano (9):
>   i18n: fix string puzzle in volume creation dialog
>   i18n: improve title of connection dialog
>   i18n: fix string puzzles in virt-xml hotplug/hotunplug/update
>   cli: add fail helper for 2 conflicting cli options
>   virtinst: use fail_conflicting
>   i18n: translate UI messages
>   i18n: use plural forms where needed
>   i18n: use a single string for error message
>   i18n: fix string puzzles in capabilities lookup error msgs
> 
>  tests/test_cli.py  |  4 ++--
>  ui/createvol.ui| 45 
>  virtManager/createvol.py   |  9 
>  virtManager/host.py|  5 ++--
>  virtManager/virtmanager.py |  5 ++--
>  virtinst/capabilities.py   | 46 ++---
>  virtinst/cli.py| 10 
>  virtinst/virtinstall.py| 11 +
>  virtinst/virtxml.py| 47 +++---
>  9 files changed, 90 insertions(+), 92 deletions(-)
> 

Pushed with the mentioned changes, plus a follow up patch to get
virtinst/capabilities.py coverage back to 100%. Thanks!

- Cole



Re: [virt-manager PATCH 2/9] i18n: improve title of connection dialog

2020-09-17 Thread Cole Robinson
On 9/17/20 2:43 AM, Pino Toscano wrote:
> Instead of using the title of the dialog and prepending the connection
> label, create a new title as a single string. This way it is possible to
> translate this title as a single sentence.
> 
> Signed-off-by: Pino Toscano 
> ---
>  virtManager/host.py | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/virtManager/host.py b/virtManager/host.py
> index 59f78c7a..663b320d 100644
> --- a/virtManager/host.py
> +++ b/virtManager/host.py
> @@ -35,8 +35,6 @@ class vmmHost(vmmGObjectUI):
>  vmmGObjectUI.__init__(self, "host.ui", "vmm-host")
>  self.conn = conn
>  
> -self._orig_title = self.topwin.get_title()
> -
>  # Set default window size
>  w, h = self.conn.get_details_window_size()
>  if w <= 0:
> @@ -177,7 +175,8 @@ class vmmHost(vmmGObjectUI):
>  conn_active = self.conn.is_active()
>  
>  self.topwin.set_title(
> -self.conn.get_pretty_desc() + " " + self._orig_title)
> +_("%(connection)s - Connection Details") %
> +{"connection": self.conn.get_pretty_desc()})
>  if not self.widget("overview-name").has_focus():
>  self.widget("overview-name").set_text(self.conn.get_pretty_desc())
>  
> 

I squashed in tests/uitests/ fixes to match, plus unmarked the ui/host.ui
title as translatable

- Cole



ANNOUNCE: virt-manager 3.0.0 released

2020-09-15 Thread Cole Robinson
I'm happy to announce the release of virt-manager 3.0.0!

virt-manager is a desktop application for managing KVM, Xen, and LXC
virtualization via libvirt.

The release can be downloaded from:

http://virt-manager.org/download/

This release includes:

- virt-install --cloud-init support (Athina Plaskasoviti, Cole
  Robinson)
- The virt-convert tool has been removed. Please use virt-v2v instead
- A handful of UI XML configuration options have been removed. The XML
  editor can be used instead. For a larger discussion see this thread:
  https://www.redhat.com/archives/virt-tools-list/2019-June/msg00117.html
- The 'New VM' UI now has a 'Manual Install' option which creates a VM
  without any required install media
- In the 'New VM' UI, the network/pxe install option has been removed.
  If you need network boot, choose 'Manual Install' and set the boot
  device after initial VM creation
- 'Clone VM' UI has been reworked and simplified
- 'Migrate VM' UI now has an XML editor for the destination VM
- Global and per-vm option to disable graphical console autoconnect.
  This makes it easier to use virt-manager alongside another client like
  virt-viewer
- virt-manager: set guest time after VM restore (Michael Weiser)
- virt-manager: option to delete storage when removing disk device
  (Lily Nie)
- virt-manager: show warnings if snapshot operation is unsafe (Michael
  Weiser)
- Unattended install improvements (Fabiano Fidêncio)
- cli: new --xml XPATH=VAL option for making direct XML changes
- virt-install: new --reinstall=DOMAIN option
- virt-install: new --autoconsole text|graphical|none option
- virt-install: new --os-variant detect=on,require=on suboptions
- cli: --clock, --keywrap, --blkiotune, --cputune additions (Athina
  Plaskasoviti)
- cli: add --features kvm.hint-dedicated.state= (Menno Lageman)
- cli: --iommu option (Menno Lageman)
- cli: Add --graphics websocket= support (Petr Benes)
- cli: Add --disk type=nvme source.* suboptions
- cli: Fill in all --filesystem suboptions
- Translation string improvements (Pino Toscano)
- Convert from .pod to .rst for man pages
- Switch to pytest as our test runner
- Massively improved unittest and uitest code coverage
- Now using github issues as our bug tracker
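
As a small illustration of the new --xml option (the XPath target here
is just an example):

  virt-install ... --xml ./cpu/@mode=host-passthrough

It applies direct XML changes on top of the config virt-install
generates, for anything the dedicated CLI options don't cover.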

Thanks to everyone who has contributed to this release through testing,
bug reporting, submitting patches, and otherwise sending in feedback!

Thanks,
Cole



Re: [virt-manager PATCH 0/3] Improve Mageia tree detection

2020-08-31 Thread Cole Robinson
On 8/26/20 6:44 AM, Pino Toscano wrote:
> Detect the version, and slightly improve the tests for it.
> 
> Pino Toscano (3):
>   tests: tweak mageia detection
>   urldetect: add version detection for Mageia
>   tests: add Mageia 5 detection test
> 
>  tests/data/urldetect/mageia/5/VERSION |  1 +
>  .../urldetect/{magaeia => mageia/8}/VERSION   |  0
>  tests/test_urldetect.py   |  6 +-
>  virtinst/install/urldetect.py | 20 ++-
>  4 files changed, 25 insertions(+), 2 deletions(-)
>  create mode 100644 tests/data/urldetect/mageia/5/VERSION
>  rename tests/data/urldetect/{magaeia => mageia/8}/VERSION (100%)
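
As a rough idea of what the version detection does (the VERSION
contents below are made up for the example, not the exact Mageia
format):

import re

def detect_mageia_version(version_text):
    # Pull the release number out of a 'Mageia N ...' style VERSION file.
    m = re.search(r"Mageia\s+(\d+)", version_text)
    return m.group(1) if m else None

print(detect_mageia_version("Mageia 5 for x86_64"))  # -> '5'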


Thanks, pushed now with a pylint and coverage tweak squashed into patch 3

- Cole



Re: [virt-manager PATCH] osdict: extend non-EOL distros range to 10 years

2020-08-18 Thread Cole Robinson
On 8/18/20 12:47 PM, Pino Toscano wrote:
> Commit d52d9885c85623b8d924dbf0aceecb08b33e9122 added logic to
> consider as EOL a distribution with no EOL date set and release date
> earlier than 5 years from the current day. This was done because there
> were (and still are, even if fewer now) many old OSes in osinfo-db with
> no EOL date set, which were thus considered "supported". Sadly, OSes
> that are still supported, like Windows 10, Windows Server 2012, or
> earlier versions of RHEL/CentOS/OL 6/7, are now considered "EOL".
> 
> As a hack on top of the initial hack, extend the range from 5 years to
> 10 years: this will consider some of the aforementioned OSes as
> supported, without adding too many other OSes.
> 
> Of course the long term solution is to make sure all the OSes in
> osinfo-db that are EOL upstream have an EOL date set, so there is no more
> need for arbitrary exclusion logic.
> 
> Signed-off-by: Pino Toscano 
> ---
>  virtinst/osdict.py | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/virtinst/osdict.py b/virtinst/osdict.py
> index 93d457e1..3be1d102 100644
> --- a/virtinst/osdict.py
> +++ b/virtinst/osdict.py
> @@ -474,9 +474,9 @@ class _OsVariant(object):
>  if release_status == "rolling":
>  return False
>  
> -# If no EOL is present, assume EOL if release was > 5 years ago
> +# If no EOL is present, assume EOL if release was > 10 years ago
>  if rel is not None:
> -rel5 = _glib_to_datetime(rel) + datetime.timedelta(days=365 * 5)
> +rel5 = _glib_to_datetime(rel) + datetime.timedelta(days=365 * 10)
>  return now > rel5
>  return False
>  
> 
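
For reference, the heuristic boils down to something like this
(standalone sketch, not the real osdict code, which works on GLib
date types):

import datetime

def assume_eol(release_date, eol_date=None, years=10):
    # An explicit EOL date from osinfo-db always wins; otherwise treat
    # anything released more than `years` ago as end-of-life.
    now = datetime.datetime.now()
    if eol_date is not None:
        return now > eol_date
    if release_date is not None:
        cutoff = release_date + datetime.timedelta(days=365 * years)
        return now > cutoff
    return False

# With the old 5-year window a 2014 release with no EOL date already
# counted as EOL in 2020; the 10-year window keeps it supported longer.
print(assume_eol(datetime.datetime(2014, 1, 1)))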

Thanks, pushed now

- Cole


