[PATCH] vfio-helpers: Free QEMUVFIOState in qemu_vfio_close()

2020-04-22 Thread Michal Privoznik
qemu_vfio_open_pci() allocates this QEMUVFIOState structure, but the
freeing counterpart is missing. Since we already have qemu_vfio_close(),
which cleans up the state, it looks like the perfect place to free the
structure too. However, to avoid confusion, rename the function to make
it explicit that the passed structure is also freed.

==167296== 528 (360 direct, 168 indirect) bytes in 1 blocks are definitely lost in loss record 8,504 of 8,908
==167296==    at 0x4837B86: calloc (vg_replace_malloc.c:762)
==167296==    by 0x4B8F6A0: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.6000.7)
==167296==    by 0xA7F532: qemu_vfio_open_pci (vfio-helpers.c:428)
==167296==    by 0x989595: nvme_init (nvme.c:606)
==167296==    by 0x989EB0: nvme_file_open (nvme.c:795)
==167296==    by 0x8F9D04: bdrv_open_driver (block.c:1466)
==167296==    by 0x8FA6E1: bdrv_open_common (block.c:1744)
==167296==    by 0x8FDC73: bdrv_open_inherit (block.c:3291)
==167296==    by 0x8FE1B5: bdrv_open (block.c:3384)
==167296==    by 0x5EE828: bds_tree_init (blockdev.c:663)
==167296==    by 0x5F57F8: qmp_blockdev_add (blockdev.c:3746)
==167296==    by 0x5666DC: configure_blockdev (vl.c:1047)

Signed-off-by: Michal Privoznik 
---
 block/nvme.c                | 2 +-
 include/qemu/vfio-helpers.h | 2 +-
 util/vfio-helpers.c         | 3 ++-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index 7b7c0cc5d6..7e00c4f1a7 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -766,7 +766,7 @@ static void nvme_close(BlockDriverState *bs)
                            false, NULL, NULL);
     event_notifier_cleanup(&s->irq_notifier);
     qemu_vfio_pci_unmap_bar(s->vfio, 0, (void *)s->regs, 0, NVME_BAR_SIZE);
-    qemu_vfio_close(s->vfio);
+    qemu_vfio_close_and_free(s->vfio);
 
     g_free(s->device);
 }
diff --git a/include/qemu/vfio-helpers.h b/include/qemu/vfio-helpers.h
index 1f057c2b9e..c96a0b1963 100644
--- a/include/qemu/vfio-helpers.h
+++ b/include/qemu/vfio-helpers.h
@@ -16,7 +16,7 @@
 typedef struct QEMUVFIOState QEMUVFIOState;
 
 QEMUVFIOState *qemu_vfio_open_pci(const char *device, Error **errp);
-void qemu_vfio_close(QEMUVFIOState *s);
+void qemu_vfio_close_and_free(QEMUVFIOState *s);
 int qemu_vfio_dma_map(QEMUVFIOState *s, void *host, size_t size,
   bool temporary, uint64_t *iova_list);
 int qemu_vfio_dma_reset_temporary(QEMUVFIOState *s);
diff --git a/util/vfio-helpers.c b/util/vfio-helpers.c
index ddd9a96e76..4c525d245b 100644
--- a/util/vfio-helpers.c
+++ b/util/vfio-helpers.c
@@ -706,7 +706,7 @@ static void qemu_vfio_reset(QEMUVFIOState *s)
 }
 
 /* Close and free the VFIO resources. */
-void qemu_vfio_close(QEMUVFIOState *s)
+void qemu_vfio_close_and_free(QEMUVFIOState *s)
 {
 int i;
 
@@ -721,4 +721,5 @@ void qemu_vfio_close(QEMUVFIOState *s)
 close(s->device);
 close(s->group);
 close(s->container);
+g_free(s);
 }
-- 
2.25.3




Re: [PATCH] Report stringified errno in VFIO related errors

2020-02-14 Thread Michal Privoznik

On 2/14/20 4:49 PM, Alex Williamson wrote:

On Fri, 14 Feb 2020 10:19:50 +0100
Michal Privoznik  wrote:



Do you guys want me to file a bug?


Probably a good idea.  Thanks,


Since I'm reproducing this against upstream, I've reported it here:

https://bugs.launchpad.net/qemu/+bug/186

Michal




Re: [PATCH] Report stringified errno in VFIO related errors

2020-02-14 Thread Michal Privoznik

On 2/14/20 9:47 AM, Michal Privoznik wrote:

In a few places we report errno formatted as a negative integer.
This is not as user friendly as it can be. Use strerror() and/or
error_setg_errno() instead.

Signed-off-by: Michal Privoznik 
---
  hw/vfio/common.c    | 2 +-
  util/vfio-helpers.c | 6 +++---
  2 files changed, 4 insertions(+), 4 deletions(-)
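
To illustrate the difference at a call site - a standalone sketch, not the
actual hunks from this patch (in QEMU proper the preferred helper is
error_setg_errno(), which appends the strerror() text for you):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    errno = ENOMEM;

    /* Before: raw negative errno, e.g. "VFIO_MAP_DMA failed: -12". */
    fprintf(stderr, "VFIO_MAP_DMA failed: %d\n", -errno);

    /* After: human readable, e.g.
     * "VFIO_MAP_DMA failed: Cannot allocate memory". */
    fprintf(stderr, "VFIO_MAP_DMA failed: %s\n", strerror(errno));

    return 0;
}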



BTW the reason I noticed these is that I'm seeing some errors when
assigning my NVMe disk to qemu. This is the full command line:



/home/zippy/work/qemu/qemu.git/x86_64-softmmu/qemu-system-x86_64 \
-name guest=fedora,debug-threads=on \
-S \
-object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-fedora/master-key.aes 
\

-machine pc-i440fx-4.1,accel=kvm,usb=off,dump-guest-core=off \
-cpu host \
-m size=4194304k,slots=16,maxmem=1099511627776k \
-overcommit mem-lock=off \
-smp 4,sockets=1,dies=1,cores=2,threads=2 \
-object iothread,id=iothread1 \
-object iothread,id=iothread2 \
-object iothread,id=iothread3 \
-object iothread,id=iothread4 \
-mem-prealloc \
-mem-path /hugepages2M/libvirt/qemu/2-fedora \
-numa node,nodeid=0,cpus=0,mem=4096 \
-uuid 63840878-0deb-4095-97e6-fc444d9bc9fa \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=31,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc \
-no-shutdown \
-global PIIX4_PM.disable_s3=0 \
-global PIIX4_PM.disable_s4=0 \
-boot menu=on,strict=on \
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 \
-blockdev 
'{"driver":"file","filename":"/var/lib/libvirt/images/fedora.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' 
\
-blockdev 
'{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","driver":"qcow2","file":"libvirt-2-storage","backing":null}' 
\
-device 
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=libvirt-2-format,id=scsi0-0-0-0,bootindex=1 
\
-blockdev 
'{"driver":"nvme","device":":02:00.0","namespace":1,"node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' 
\
-blockdev 
'{"node-name":"libvirt-1-format","read-only":false,"driver":"raw","file":"libvirt-1-storage"}' 
\
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=libvirt-1-format,id=virtio-disk0 
\

-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 \
-device 
virtio-net-pci,host_mtu=9000,netdev=hostnet0,id=net0,mac=52:54:00:a4:6f:91,bus=pci.0,addr=0x3 
\

-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=35,server,nowait \
-device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 
\

-spice port=5900,addr=0.0.0.0,disable-ticketing,seamless-migration=on \
-device virtio-vga,id=video0,virgl=on,max_outputs=1,bus=pci.0,addr=0x2 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 \
-sandbox 
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \

-msg timestamp=on

And these are the errors I see:

2020-02-14T09:06:18.183167Z qemu-system-x86_64: VFIO_MAP_DMA failed: Invalid argument
2020-02-14T09:10:49.753767Z qemu-system-x86_64: VFIO_MAP_DMA failed: Cannot allocate memory
2020-02-14T09:11:04.530344Z qemu-system-x86_64: VFIO_MAP_DMA failed: No space left on device
2020-02-14T09:11:04.531087Z qemu-system-x86_64: VFIO_MAP_DMA failed: No space left on device
2020-02-14T09:11:04.531230Z qemu-system-x86_64: VFIO_MAP_DMA failed: No space left on device



I'm not doing anything with the disk inside the guest other than:

  # dd if=/dev/vda of=/dev/null status=progress

(the disk appears as /dev/vda in the guest). Surprisingly, I do not see 
these errors when I use the traditional PCI assignment (-device 
vfio-pci). My versions of kernel and qemu:


moe ~ # uname -r
5.4.15-gentoo
moe ~ # /home/zippy/work/qemu/qemu.git/x86_64-softmmu/qemu-system-x86_64 
--version

QEMU emulator version 4.2.50 (v4.2.0-1439-g5d6542bea7-dirty)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers

Do you guys want me to file a bug?

Michal




[Qemu-block] [PATCH] nvme: Set number of queues later in nvme_init()

2019-07-10 Thread Michal Privoznik
When creating the admin queue in nvme_init(), the variable that
holds the number of created queues is modified before the actual
queue creation. This is a problem because if creating the queue
fails, the variable is left in an inconsistent state. This was
actually observed when I tried to hotplug an NVMe disk: control
got to nvme_file_open(), which called nvme_init(), which failed,
and thus nvme_close() was called, which in turn called
nvme_free_queue_pair() with a NULL queue. This led to an instant
crash:

  #0  0x55d9507ec211 in nvme_free_queue_pair (bs=0x55d952ddb880, q=0x0) at 
block/nvme.c:164
  #1  0x55d9507ee180 in nvme_close (bs=0x55d952ddb880) at block/nvme.c:729
  #2  0x55d9507ee3d5 in nvme_file_open (bs=0x55d952ddb880, 
options=0x55d952bb1410, flags=147456, errp=0x7ffd8e19e200) at block/nvme.c:781
  #3  0x55d9507629f3 in bdrv_open_driver (bs=0x55d952ddb880, 
drv=0x55d95109c1e0 , node_name=0x0, options=0x55d952bb1410, 
open_flags=147456, errp=0x7ffd8e19e310) at block.c:1291
  #4  0x55d9507633d6 in bdrv_open_common (bs=0x55d952ddb880, file=0x0, 
options=0x55d952bb1410, errp=0x7ffd8e19e310) at block.c:1551
  #5  0x55d950766881 in bdrv_open_inherit (filename=0x0, reference=0x0, 
options=0x55d952bb1410, flags=32768, parent=0x55d9538ce420, 
child_role=0x55d950eaade0 , errp=0x7ffd8e19e510) at block.c:3063
  #6  0x55d950765ae4 in bdrv_open_child_bs (filename=0x0, 
options=0x55d9541cdff0, bdref_key=0x55d950af33aa "file", parent=0x55d9538ce420, 
child_role=0x55d950eaade0 , allow_none=true, errp=0x7ffd8e19e510) 
at block.c:2712
  #7  0x55d950766633 in bdrv_open_inherit (filename=0x0, reference=0x0, 
options=0x55d9541cdff0, flags=0, parent=0x0, child_role=0x0, 
errp=0x7ffd8e19e908) at block.c:3011
  #8  0x55d950766dba in bdrv_open (filename=0x0, reference=0x0, 
options=0x55d953d00390, flags=0, errp=0x7ffd8e19e908) at block.c:3156
  #9  0x55d9507cb635 in blk_new_open (filename=0x0, reference=0x0, 
options=0x55d953d00390, flags=0, errp=0x7ffd8e19e908) at 
block/block-backend.c:389
  #10 0x55d950465ec5 in blockdev_init (file=0x0, bs_opts=0x55d953d00390, 
errp=0x7ffd8e19e908) at blockdev.c:602
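
For illustration, the rule the patch enforces, as a standalone sketch (plain
C with made-up names, not QEMU code): the element count is bumped only after
the element was actually created, so a cleanup loop over [0, count) never
dereferences NULL.

#include <stdlib.h>

typedef struct { int id; } Queue;

static Queue *queues[4];
static int nr_queues;

static Queue *create_queue(int id, int fail)
{
    Queue *q;

    if (fail) {
        return NULL;                   /* simulated creation failure */
    }
    q = malloc(sizeof(*q));
    q->id = id;
    return q;
}

static void cleanup(void)
{
    int i;

    for (i = 0; i < nr_queues; i++) {
        free(queues[i]);               /* safe: every counted slot is valid */
    }
}

int main(void)
{
    Queue *q = create_queue(0, 1);     /* creation fails... */

    if (!q) {
        cleanup();                     /* ...but nr_queues is still 0 */
        return 1;
    }
    queues[nr_queues++] = q;           /* count updated only on success */
    cleanup();
    return 0;
}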

Signed-off-by: Michal Privoznik 
---
 block/nvme.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/nvme.c b/block/nvme.c
index 73ed5fa75f..9896b7f7c6 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -613,12 +613,12 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
 
 /* Set up admin queue. */
 s->queues = g_new(NVMeQueuePair *, 1);
-s->nr_queues = 1;
 s->queues[0] = nvme_create_queue_pair(bs, 0, NVME_QUEUE_SIZE, errp);
 if (!s->queues[0]) {
 ret = -EINVAL;
 goto out;
 }
+s->nr_queues = 1;
 QEMU_BUILD_BUG_ON(NVME_QUEUE_SIZE & 0xF000);
 s->regs->aqa = cpu_to_le32((NVME_QUEUE_SIZE << 16) | NVME_QUEUE_SIZE);
 s->regs->asq = cpu_to_le64(s->queues[0]->sq.iova);
-- 
2.21.0




Re: [Qemu-block] [PATCH] pr-manager-helper: fix pr process being killed when reconnecting

2019-05-30 Thread Michal Privoznik

On 5/29/19 11:34 AM, Paolo Bonzini wrote:

On 29/05/19 09:33, Michal Privoznik wrote:

On 5/28/19 7:45 PM, Paolo Bonzini wrote:

On 28/05/19 15:06, Jie Wang wrote:

if pr-helper is killed and qemu sends a disconnect event to libvirt,
and libvirt starts a new pr-helper process, but the new pr-helper is
killed again while qemu is connecting to it, then qemu will never
send another disconnect to libvirt, and libvirt will never receive a
connected event.


I think this would let a guest "spam" events just by sending a lot of PR
commands.  Also, as you said, in this case QEMU has never sent a
"connected" event, so I'm not sure why it should send a disconnection
event.


So pr manager is initialized on the first PR command and not when qemu
is starting?


It is initialized when QEMU is started, but if it dies, that's not
detected until the first PR command.  The command is retried for 5
seconds, which should give libvirt ample time to restart the PR manager
(and it's an exceptional situation anyway).


Ah yes, I vaguely recall testing this whilst writing patches for libvirt.




If a user inside the guest could somehow kill the pr-helper process in the
host then yes, they could spam libvirt/qemu. But if a user from inside a
guest can kill a process in the host, that is a much bigger problem than
spamming libvirt.


This is true.


Does libvirt monitor at all the pr-helper to check if it dies?  Or does
it rely exclusively on QEMU's events?


Libvirt relies solely on QEMU's events. Just like with the qemu process
itself, libvirt can't rely on SIGCHLD because the daemon might be
restarted, which would reparent all qemu and pr-helper processes,
rendering libvirt's wait for SIGCHLD useless.

But there is an exception to this: when libvirt is spawning pr-helper it
does so by following these steps (sketched in code below):

1) Try to acquire (lock) the pidfile
2) unlink(socket)
3) spawn the pr-helper process (this yields the child's PID)
4) wait some time until the socket is created
5) some follow-up work (move the child's PID into the same cgroup as qemu's
main thread, relabel the socket so that qemu can access it)

If any of these steps fails then the child is killed. However, the PID is
not recorded anywhere and thus is forgotten once control jumps out of
the function.
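
A rough, self-contained sketch of that sequence in plain POSIX C (this is
NOT libvirt's actual code; the pidfile/socket paths are made up and
qemu-pr-helper is assumed to be in $PATH):

#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/file.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *pidfile = "/tmp/pr-helper-demo.pid";    /* made up */
    const char *sockpath = "/tmp/pr-helper-demo.sock";  /* made up */
    int pidfd, i;
    pid_t child;

    /* 1) Try to acquire (lock) the pidfile. */
    pidfd = open(pidfile, O_RDWR | O_CREAT, 0600);
    if (pidfd < 0 || flock(pidfd, LOCK_EX | LOCK_NB) < 0) {
        perror("pidfile");
        return 1;
    }

    /* 2) Remove any stale socket. */
    unlink(sockpath);

    /* 3) Spawn the pr-helper process (this yields the child's PID). */
    child = fork();
    if (child == 0) {
        execlp("qemu-pr-helper", "qemu-pr-helper", "-k", sockpath, (char *)NULL);
        _exit(127);
    }

    /* 4) Wait some time until the socket is created. */
    for (i = 0; i < 50; i++) {
        struct stat sb;
        if (stat(sockpath, &sb) == 0) {
            break;
        }
        usleep(100 * 1000);
    }

    /* 5) Follow-up work (cgroup placement, socket relabeling) would go
     * here; if anything failed, the child is killed and its PID is
     * simply forgotten. */
    if (i == 50) {
        kill(child, SIGKILL);
        waitpid(child, NULL, 0);
        return 1;
    }

    printf("helper running as pid %ld\n", (long)child);
    return 0;
}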


Note that qemu-pr-helper supports the systemd socket activation
protocol.  Would it help if libvirt used it?


Thing is, libvirt creates a mount namespace for domains (one namespace
per domain). In this namespace a dummy /dev is mounted and only the
nodes that qemu is configured to have are created. For instance, you
won't see /dev/sda there unless your domain has it as a disk. Then,
libvirt moves the pr-helper process into the same cgroups as qemu's main
thread. This is all done so that pr-helper has the same view of the
system as qemu. I don't think the same result can be achieved using
socket activation.
Also, libvirt spawns one pr-helper per domain (so that the socket can be
private and share a seclabel with the qemu process it's attached to).


Michal



Re: [Qemu-block] [PATCH] pr-manager-helper: fix pr process being killed when reconnecting

2019-05-29 Thread Michal Privoznik

On 5/28/19 7:45 PM, Paolo Bonzini wrote:

On 28/05/19 15:06, Jie Wang wrote:

if pr-helper is killed and qemu sends a disconnect event to libvirt,
and libvirt starts a new pr-helper process, but the new pr-helper is
killed again while qemu is connecting to it, then qemu will never
send another disconnect to libvirt, and libvirt will never receive a
connected event.


I think this would let a guest "spam" events just by sending a lot of PR
commands.  Also, as you said, in this case QEMU has never sent a
"connected" event, so I'm not sure why it should send a disconnection event.


So pr manager is initialized on the first PR command and not when qemu 
is starting?


If a user inside the guest could somehow kill the pr-helper process in the
host then yes, they could spam libvirt/qemu. But if a user from inside a
guest can kill a process in the host, that is a much bigger problem than
spamming libvirt.




Does libvirt monitor at all the pr-helper to check if it dies?  Or does
it rely exclusively on QEMU's events?


Libvirt relies solely on QEMU's events. Just like with the qemu process
itself, libvirt can't rely on SIGCHLD because the daemon might be
restarted, which would reparent all qemu and pr-helper processes,
rendering libvirt's wait for SIGCHLD useless.


But there is an exception to this: when libvirt is spawning pr-helper it 
does so by following these steps:


1) Try to acquire (lock) pidfile
2) unlink(socket)
3) spawn pr-helper process (this yields child's PID)
4) wait some time until socket is created
5) some follow up work (move child's PID into same cgroup as qemu's main 
thread, relabel the socket so that qemu can access it)


If any of these steps fails then the child is killed. However, the PID is
not recorded anywhere and thus is forgotten once control jumps out of 
the function.


Michal



Re: [Qemu-block] [Qemu-devel] [PATCH v3 12/12] docs/interop/firmware.json: Prefer -machine to if=pflash

2019-03-15 Thread Michal Privoznik

On 3/14/19 8:01 PM, Markus Armbruster wrote:

Michal Privoznik  writes:


On 3/8/19 2:14 PM, Markus Armbruster wrote:

The previous commit added a way to configure firmware with -blockdev
rather than -drive if=pflash.  Document it as the preferred way.

Signed-off-by: Markus Armbruster 
---
   docs/interop/firmware.json | 20 ++--
   1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/docs/interop/firmware.json b/docs/interop/firmware.json
index 28f9bc1591..ff8c2ce5f2 100644
--- a/docs/interop/firmware.json
+++ b/docs/interop/firmware.json
@@ -212,9 +212,13 @@
   #
   # @executable: Identifies the firmware executable. The firmware
   #  executable may be shared by multiple virtual machine
-#  definitions. The corresponding QEMU command line option
-#  is "-drive
-#  
if=pflash,unit=0,readonly=on,file=@executable.@filename,format=@executable.@format".
+#  definitions. The preferred corresponding QEMU command
+#  line options are
+#  -drive 
if=none,id=pflash0,readonly=on,file=@executable.@filename,format=@executable.@format
+#  -machine pflash0=pflash0


I have a question. How is libvirt supposed to query for this? How can
it learn it can use this new, preferred command line?


You can use qom-list-properties to find out whether the machine has
property pflash0.

---> {"execute": "qom-list-properties", "arguments": {"typename": 
"pc-q35-4.0-machine"}}
<--- {"return": [... {"name": "pflash0", ...} ...]}
---> {"execute": "qom-list-properties", "arguments": {"typename": 
"isapc-machine"}}
<--- {"return": [... no such property ...]}



Ah, very well, thank you.

Michal



Re: [Qemu-block] [Qemu-devel] [PATCH v3 12/12] docs/interop/firmware.json: Prefer -machine to if=pflash

2019-03-14 Thread Michal Privoznik

On 3/8/19 2:14 PM, Markus Armbruster wrote:

The previous commit added a way to configure firmware with -blockdev
rather than -drive if=pflash.  Document it as the preferred way.

Signed-off-by: Markus Armbruster 
---
  docs/interop/firmware.json | 20 ++--
  1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/docs/interop/firmware.json b/docs/interop/firmware.json
index 28f9bc1591..ff8c2ce5f2 100644
--- a/docs/interop/firmware.json
+++ b/docs/interop/firmware.json
@@ -212,9 +212,13 @@
  #
  # @executable: Identifies the firmware executable. The firmware
  #  executable may be shared by multiple virtual machine
-#  definitions. The corresponding QEMU command line option
-#  is "-drive
-#  
if=pflash,unit=0,readonly=on,file=@executable.@filename,format=@executable.@format".
+#  definitions. The preferred corresponding QEMU command
+#  line options are
+#  -drive 
if=none,id=pflash0,readonly=on,file=@executable.@filename,format=@executable.@format
+#  -machine pflash0=pflash0


I have a question. How is libvirt supposed to query for this? How can it 
learn it can use this new, preferred command line?


Thanks,
Michal



[Qemu-block] [PATCH] pr-helper: Rework socket path handling

2018-07-03 Thread Michal Privoznik
When reviewing Paolo's pr-helper patches I noticed a couple of
problems:

1) socket_path needs to be calculated in two different places
(one for printing out help, the other if socket activation is NOT
used),

2) even though the default socket_path is allocated in
compute_default_paths(), it is the only default path the function
handles. For instance, pidfile is allocated outside of this
function - and, again, in different places than in 1).

Signed-off-by: Michal Privoznik 
---
 scsi/qemu-pr-helper.c | 36 ++--
 1 file changed, 10 insertions(+), 26 deletions(-)

diff --git a/scsi/qemu-pr-helper.c b/scsi/qemu-pr-helper.c
index 0218d65bbf..59df3fd608 100644
--- a/scsi/qemu-pr-helper.c
+++ b/scsi/qemu-pr-helper.c
@@ -76,14 +76,12 @@ static int gid = -1;
 
 static void compute_default_paths(void)
 {
-if (!socket_path) {
-socket_path = qemu_get_local_state_pathname("run/qemu-pr-helper.sock");
-}
+socket_path = qemu_get_local_state_pathname("run/qemu-pr-helper.sock");
+pidfile = qemu_get_local_state_pathname("run/qemu-pr-helper.pid");
 }
 
 static void usage(const char *name)
 {
-compute_default_paths();
 (printf) (
 "Usage: %s [OPTIONS] FILE\n"
 "Persistent Reservation helper program for QEMU\n"
@@ -844,19 +842,6 @@ static gboolean accept_client(QIOChannel *ioc, 
GIOCondition cond, gpointer opaqu
 return TRUE;
 }
 
-
-/*
- * Check socket parameters compatibility when socket activation is used.
- */
-static const char *socket_activation_validate_opts(void)
-{
-if (socket_path != NULL) {
-return "Unix socket can't be set when using socket activation";
-}
-
-return NULL;
-}
-
 static void termsig_handler(int signum)
 {
 atomic_cmpxchg(&state, RUNNING, TERMINATE);
@@ -930,6 +915,7 @@ int main(int argc, char **argv)
 char *trace_file = NULL;
 bool daemonize = false;
 bool pidfile_specified = false;
+bool socket_path_specified = false;
 unsigned socket_activation;
 
 struct sigaction sa_sigterm;
@@ -946,12 +932,14 @@ int main(int argc, char **argv)
 qemu_add_opts(_trace_opts);
 qemu_init_exec_dir(argv[0]);
 
-pidfile = qemu_get_local_state_pathname("run/qemu-pr-helper.pid");
+compute_default_paths();
 
 while ((ch = getopt_long(argc, argv, sopt, lopt, &opt_ind)) != -1) {
 switch (ch) {
 case 'k':
-socket_path = optarg;
+g_free(socket_path);
+socket_path = g_strdup(optarg);
+socket_path_specified = true;
 if (socket_path[0] != '/') {
 error_report("socket path must be absolute");
 exit(EXIT_FAILURE);
@@ -1042,10 +1030,9 @@ int main(int argc, char **argv)
 socket_activation = check_socket_activation();
 if (socket_activation == 0) {
 SocketAddress saddr;
-compute_default_paths();
 saddr = (SocketAddress){
 .type = SOCKET_ADDRESS_TYPE_UNIX,
-.u.q_unix.path = g_strdup(socket_path)
+.u.q_unix.path = socket_path,
 };
 server_ioc = qio_channel_socket_new();
 if (qio_channel_socket_listen_sync(server_ioc, &saddr, &local_err) < 0) {
@@ -1053,12 +1040,10 @@ int main(int argc, char **argv)
 error_report_err(local_err);
 return 1;
 }
-g_free(saddr.u.q_unix.path);
 } else {
 /* Using socket activation - check user didn't use -p etc. */
-const char *err_msg = socket_activation_validate_opts();
-if (err_msg != NULL) {
-error_report("%s", err_msg);
+if (socket_path_specified) {
+error_report("Unix socket can't be set when using socket 
activation");
 exit(EXIT_FAILURE);
 }
 
@@ -1075,7 +1060,6 @@ int main(int argc, char **argv)
  error_get_pretty(local_err));
 exit(EXIT_FAILURE);
 }
-socket_path = NULL;
 }
 
 if (qemu_init_main_loop(_err)) {
-- 
2.16.4




[Qemu-block] [PATCH] qemu-pr-helper: Actually allow users to specify pidfile

2018-03-23 Thread Michal Privoznik
Due to a wrong specification of arguments to getopt_long(), any
attempt to set the pidfile resulted in:

1) the default being leaked,
2) the @pidfile variable being set to NULL (because optarg is
NULL without this patch).

Signed-off-by: Michal Privoznik <mpriv...@redhat.com>
---
 scsi/qemu-pr-helper.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)
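
A standalone demonstration of the two failure modes described above (plain
C, not QEMU code; the default path is made up):

#include <getopt.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    static const struct option lopt[] = {
        { "pidfile", no_argument, NULL, 'f' },  /* bug: should be required_argument */
        { NULL, 0, NULL, 0 },
    };
    char *pidfile = strdup("/var/run/demo.pid"); /* made-up default */
    int ch;

    /* bug: "f" should be "f:" so that -f consumes its argument */
    while ((ch = getopt_long(argc, argv, "f", lopt, NULL)) != -1) {
        if (ch == 'f') {
            pidfile = optarg;   /* optarg is NULL here; the default leaks */
        }
    }

    /* "./demo -f foo" prints "(null)" instead of "foo" */
    printf("pidfile = %s\n", pidfile ? pidfile : "(null)");
    return 0;
}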

diff --git a/scsi/qemu-pr-helper.c b/scsi/qemu-pr-helper.c
index 3facbba170..21e1b8ea60 100644
--- a/scsi/qemu-pr-helper.c
+++ b/scsi/qemu-pr-helper.c
@@ -903,12 +903,12 @@ static int drop_privileges(void)
 
 int main(int argc, char **argv)
 {
-const char *sopt = "hVk:fdT:u:g:vq";
+const char *sopt = "hVk:f:dT:u:g:vq";
 struct option lopt[] = {
 { "help", no_argument, NULL, 'h' },
 { "version", no_argument, NULL, 'V' },
 { "socket", required_argument, NULL, 'k' },
-{ "pidfile", no_argument, NULL, 'f' },
+{ "pidfile", required_argument, NULL, 'f' },
 { "daemon", no_argument, NULL, 'd' },
 { "trace", required_argument, NULL, 'T' },
 { "user", required_argument, NULL, 'u' },
@@ -952,7 +952,8 @@ int main(int argc, char **argv)
 }
 break;
 case 'f':
-pidfile = optarg;
+g_free(pidfile);
+pidfile = g_strdup(optarg);
 break;
 #ifdef CONFIG_LIBCAP
 case 'u': {
-- 
2.16.1




Re: [Qemu-block] [Qemu-devel] QEMU not honouring bootorder

2017-12-14 Thread Michal Privoznik
On 12/13/2017 10:21 PM, Paolo Bonzini wrote:
> On 11/12/2017 13:14, Michal Privoznik wrote:
>>>>   qemu-system-x86_64 \
>>>>   -boot menu=on,strict=on \
>>>>   -device lsi,id=scsi0,bus=pci.0 \
>>>>   -drive 
>>>> file=/var/lib/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0
>>>>  \
>>>>   -device scsi-hd,bus=scsi0.0,drive=drive-scsi0,bootindex=1 \
>>>>   -drive file=/dev/sde,format=raw,if=none,id=drive-scsi1 \
>>>>   -device scsi-block,bus=scsi0.0,drive=drive-scsi1,bootindex=2
>>>>
>>>> It was found that if 'drive-scsi1' is scsi-hd instead of scsi-block
>>>> everything works as expected and I can boot my guest successfully.
>>> Does it help if you add SCSI level ordering with 
>>> "lun={0,1},channel=0,scsi-id=0"
>>> for both devices?
>> Setting lun helps. On the other hand, I had to change from LSI
controller to virtio-scsi as LSI doesn't support more than 1 LUN.
> 
> Does it help to use virtio-scsi and keep the default assignment?

Yes. If I s/lsi/virtio-scsi-pci/ on the cmd line it does help too.
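
For the record, the device lines corresponding to Fam's suggestion look
roughly like this (reconstructed from the command line quoted earlier, not
copied verbatim from a working setup):

  -device virtio-scsi-pci,id=scsi0,bus=pci.0 \
  -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,bootindex=1 \
  -device scsi-block,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,bootindex=2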

Michal



Re: [Qemu-block] [Qemu-devel] QEMU not honouring bootorder

2017-12-11 Thread Michal Privoznik
On 12/11/2017 12:41 PM, Fam Zheng wrote:
> On Thu, 12/07 13:10, Michal Privoznik wrote:
>> Dear list,
>>
>> I've encountered the following problem. I have two disks:
>>
>>   /var/lib/libvirt/images/fedora.qcow2  (which contains OS)
>>   /dev/sde (iSCSI dummy disk just for testing)
>>
>> Now, when I configure QEMU to start with both of them, QEMU/Seabios
>> tries to boot from /dev/sde which fails obviously. Even setting
>> bootorder does not help. Here's my command line:
>>
>>   qemu-system-x86_64 \
>>   -boot menu=on,strict=on \
>>   -device lsi,id=scsi0,bus=pci.0 \
>>   -drive 
>> file=/var/lib/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0
>>  \
>>   -device scsi-hd,bus=scsi0.0,drive=drive-scsi0,bootindex=1 \
>>   -drive file=/dev/sde,format=raw,if=none,id=drive-scsi1 \
>>   -device scsi-block,bus=scsi0.0,drive=drive-scsi1,bootindex=2
>>
>> It was found that if 'drive-scsi1' is scsi-hd instead of scsi-block
>> everything works as expected and I can boot my guest successfully.
> 
> Does it help if you add SCSI level ordering with 
> "lun={0,1},channel=0,scsi-id=0"
> for both devices?

Setting lun helps. On the other hand, I had to change from LSI
controller to virtio-scsi as LSI doesn't support more than 1 LUN.

Michal



Re: [Qemu-block] QEMU not honouring bootorder

2017-12-11 Thread Michal Privoznik
On 12/07/2017 01:10 PM, Michal Privoznik wrote:
> Dear list,
> 

Ping. Is there anything I can help you with to have this sorted out?

Michal




[Qemu-block] QEMU not honouring bootorder

2017-12-07 Thread Michal Privoznik
Dear list,

I've encountered the following problem. I have two disks:

  /var/lib/libvirt/images/fedora.qcow2  (which contains OS)
  /dev/sde (iSCSI dummy disk just for testing)

Now, when I configure QEMU to start with both of them, QEMU/Seabios
tries to boot from /dev/sde which fails obviously. Even setting
bootorder does not help. Here's my command line:

  qemu-system-x86_64 \
  -boot menu=on,strict=on \
  -device lsi,id=scsi0,bus=pci.0 \
  -drive 
file=/var/lib/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0 \
  -device scsi-hd,bus=scsi0.0,drive=drive-scsi0,bootindex=1 \
  -drive file=/dev/sde,format=raw,if=none,id=drive-scsi1 \
  -device scsi-block,bus=scsi0.0,drive=drive-scsi1,bootindex=2

It was found that if 'drive-scsi1' is scsi-hd instead of scsi-block
everything works as expected and I can boot my guest successfully.

Michal



Re: [Qemu-block] [PATCH] block: Add bdrv_runtime_opts to query-command-line-options

2016-10-06 Thread Michal Privoznik
On 06.10.2016 11:40, Kevin Wolf wrote:
> Recently we moved a few options from QemuOptsLists in blockdev.c to
> bdrv_runtime_opts in block.c in order to make them accissble using
> blockdev-add. However, this has the side effect that these options are
> missing from query-command-line-options now, and libvirt consequently
> disables the corresponding feature.
> 
> This problem was reported as a regression for the 'discard' option,
> introduced in commit 818584a4. However, it is more general than that.
> 
> Fix it by adding bdrv_runtime_opts to the list of QemuOptsLists that are
> returned in query-command-line-options. For the future, libvirt is
> advised to use QMP schema introspection for block device options.
> 
> Reported-by: Michal Privoznik <mpriv...@redhat.com>
> Signed-off-by: Kevin Wolf <kw...@redhat.com>

Tested-by: Michal Privoznik <mpriv...@redhat.com>

Thank you for such a prompt fix!
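
For reference, this is roughly how the presence of such an option can be
checked once the fix is in - a sketch of the QMP exchange with the reply
heavily abridged (the exact option group the runtime options end up under
may differ):

---> {"execute": "query-command-line-options"}
<--- {"return": [..., {"option": "...", "parameters": [..., {"name": "discard", ...}, ...]}, ...]}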

Michal