[Qemu-devel] [PATCH] qtest/bios-tables: Add DMAR unit test on intel_iommu for q35

2014-11-22 Thread Vasilis Liaskovitis
The test enables intel_iommu on q35, reads the DMAR table and its only
DRHD structure (for now), and checks only the header and checksum.
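
For reference, the layout being parsed follows the ACPI spec's DMAR table
and its DRHD (DMA Remapping Hardware Unit Definition) sub-structure. A rough
C sketch of the two structures (field names match the test below and the
ACPI "DMAR" definition; treat the exact QEMU declarations as an assumption,
see include/hw/acpi/ for the real ones):

    /* sketch only -- not the verbatim QEMU declarations */
    struct AcpiTableDmar {
        ACPI_TABLE_HEADER_DEF        /* standard 36-byte header, "DMAR" */
        uint8_t host_address_width;  /* maximum DMA address width - 1 */
        uint8_t flags;
        uint8_t reserved[10];
        /* remapping structures (DRHD, ...) follow */
    } QEMU_PACKED;

    struct AcpiDmarHardwareUnit {    /* DRHD, type 0 */
        uint16_t type;
        uint16_t length;
        uint8_t flags;
        uint8_t reserved;            /* per spec, between flags and segment */
        uint16_t pci_segment;
        uint64_t address;            /* register base address */
    } QEMU_PACKED;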

Signed-off-by: Vasilis Liaskovitis 
---
 tests/bios-tables-test.c | 34 +++++++++++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/tests/bios-tables-test.c b/tests/bios-tables-test.c
index 9e4d205..f09b0cb 100644
--- a/tests/bios-tables-test.c
+++ b/tests/bios-tables-test.c
@@ -45,6 +45,8 @@ typedef struct {
     AcpiRsdtDescriptorRev1 rsdt_table;
     AcpiFadtDescriptorRev1 fadt_table;
     AcpiFacsDescriptorRev1 facs_table;
+    AcpiTableDmar dmar_table;
+    AcpiDmarHardwareUnit drhd;
     uint32_t *rsdt_tables_addr;
     int rsdt_tables_nr;
     GArray *tables;
@@ -371,6 +373,33 @@ static void test_acpi_dsdt_table(test_data *data)
     g_array_append_val(data->tables, dsdt_table);
 }
 
+static void test_acpi_dmar_table(test_data *data)
+{
+    AcpiTableDmar *dmar_table = &data->dmar_table;
+    AcpiDmarHardwareUnit *drhd = &data->drhd;
+    struct AcpiTableHeader *header = (struct AcpiTableHeader *) dmar_table;
+    int tables_nr = data->rsdt_tables_nr - 1;
+    uint32_t addr = data->rsdt_tables_addr[tables_nr]; /* dmar is last */
+
+    memset(dmar_table, 0, sizeof(*dmar_table));
+    ACPI_READ_TABLE_HEADER(dmar_table, addr);
+    ACPI_ASSERT_CMP(header->signature, "DMAR");
+
+    ACPI_READ_FIELD(dmar_table->host_address_width, addr);
+    ACPI_READ_FIELD(dmar_table->flags, addr);
+    ACPI_READ_ARRAY_PTR(dmar_table->reserved, 10, addr);
+
+    memset(drhd, 0, sizeof(*drhd));
+    ACPI_READ_FIELD(drhd->type, addr);
+    ACPI_READ_FIELD(drhd->length, addr);
+    ACPI_READ_FIELD(drhd->flags, addr);
+    ACPI_READ_FIELD(drhd->pci_segment, addr);
+    ACPI_READ_FIELD(drhd->address, addr);
+
+    g_assert(!acpi_checksum((uint8_t *)dmar_table,
+                            sizeof(AcpiTableDmar) + drhd->length));
+}
+
 static void test_acpi_tables(test_data *data)
 {
     int tables_nr = data->rsdt_tables_nr - 1; /* fadt is first */
@@ -747,6 +776,9 @@ static void test_acpi_one(const char *params, test_data *data)
     test_acpi_fadt_table(data);
     test_acpi_facs_table(data);
     test_acpi_dsdt_table(data);
+    if (strstr(params, "iommu=on")) {
+        test_acpi_dmar_table(data);
+    }
     test_acpi_tables(data);
 
     if (iasl) {
@@ -779,7 +811,7 @@ static void test_acpi_tcg(void)
 
     memset(&data, 0, sizeof(data));
     data.machine = MACHINE_Q35;
-    test_acpi_one("-machine q35,accel=tcg", &data);
+    test_acpi_one("-machine q35,accel=tcg,iommu=on", &data);
     free_test_data(&data);
 }
 
-- 
1.9.1



[Qemu-devel] [Bug 1392504] Re: USB Passthrough is not working anymore

2014-11-22 Thread Joe Hickey
I just wanted to add another data point -- I migrated my old WinXP VM
from my 14.04 install to my new 14.10 install and found that USB
passthrough is not working, with the same libvirt XML definition file.
Tried removing and re-adding the devices, with no luck.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1392504

Title:
  USB Passthrough is not working anymore

Status in QEMU:
  New
Status in “qemu” package in Ubuntu:
  Incomplete

Bug description:
  After upgrading from Ubuntu 14.04 to Ubuntu 14.10 USB passthrough in
  QEMU (version is now 2.1.0 - Debian2.1+dfsg-4ubuntu6.1) is not working
  any more. Already tried to remove and re-attach the USB devices. I use 1
  USB2 HDD + 1 USB3 HDD with a virtual Linux machine, 1 USB2 HDD with a
  virtual FNAS machine, and a USB2 camera with a virtual Win7 machine. All
  these devices are not working any more.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1392504/+subscriptions



Re: [Qemu-devel] [dpdk-dev] [PATCH v4 00/10] VM Power Management

2014-11-22 Thread Vincent JARDIN

Tim,

cc-ing Paolo and qemu-devel@ again in order to get their take on it.


Did you make any progress in Qemu/KVM community?
We need to be sync'ed up with them to be sure we share the same goal.
I also want to avoid using a solution which doesn't fit with their plan.
Remember that we already had this problem with ivshmem which was
planned to be dropped.




Unfortunately, I have not yet received any feedback:
http://lists.nongnu.org/archive/html/qemu-devel/2014-11/msg01103.html


Just to add to what Alan said above, this capability does not exist in qemu at 
the moment, and based on there having been no feedback on the qemu mailing list 
so far, I think it's reasonable to assume that it will not be implemented in 
the immediate future. The VM Power Management feature has also been designed to 
allow easy migration to a qemu-based solution when this is supported in future. 
Therefore, I'd be in favour of accepting this feature into DPDK now.

It's true that the implementation is a work-around, but there have been similar 
cases in DPDK in the past. One recent example that comes to mind is userspace 
vhost. The original implementation could also be considered a work-around, but 
it met the needs of many in the community. Now, with support for vhost-user in 
qemu 2.1, that implementation is being improved. I'd see VM Power Management 
following a similar path when this capability is supported in qemu.


Best regards,
  Vincent



Re: [Qemu-devel] "File too large" error from "qemu-img snapshot" (was Re: AW: Bug Reporting Directions Request)

2014-11-22 Thread Prof. Dr. Michael Schefczyk
Dear All,

after some experimenting, my impression is that the following steps do work
with plain CentOS 7:

virsh snapshot-create-as VM backsnap

qemu-img convert -f qcow2 -s backsnap -O qcow2 VM.img backup.img

virsh snapshot-delete VM backsnap

Am I on the right track with these commands?


The further features seem to be reserved for RHEL (and potentially other
distributions) but are not included in CentOS:

virsh snapshot-create-as issues "Live Disk Snapshot not supported in this
version of QEMU" (retranslated from German).

virsh blockcommit issues "Online Transfer not supported ..."

Thus, with CentOS 7 it should be possible to back up VMs without shutting
them down. They are paused, however, while the snapshot is being created.
The qemu-guest-agent does not help in this context, as the features that
depend on it are not available in CentOS.

Regards,

Michael




-----Original Message-----
From: Eric Blake [mailto:ebl...@redhat.com]
Sent: Wednesday, 19 November 2014, 19:13
To: Prof. Dr. Michael Schefczyk; Paolo Bonzini; qemu-devel
Subject: Re: AW: [Qemu-devel] "File too large" error from "qemu-img snapshot"
(was Re: AW: Bug Reporting Directions Request)

On 11/19/2014 10:32 AM, Prof. Dr. Michael Schefczyk wrote:
> Dear Eric, dear all,
> 
> Again, thank you very much. I now gather that I took the wrong path towards
> nightly backups of running VMs. I remain surprised that it did work for a
> relatively long time.

[can you convince your mailer to wrap long lines?  also, we tend to frown on 
top-posting on technical lists]

Yeah, you were just getting lucky that two different processes weren't both 
trying to allocate a cluster for different purposes at the same time.  When the 
collision finally did happen, it had catastrophic results on your image.

> 
> A major book on KVM in German by Kofler & Spenneberg recommends the
> following approach for online backups (with and without "--disk-only"):
> 
> virsh snapshot-create-as vm XXX --disk-only
> qemu-img convert -f qcow2 -s XXX -O qcow2 XXX.img /backup/YYY.img
> virsh snapshot-delete vm XXX

Yes, virsh is using QMP commands under the hood, so this method is safe.
One slight issue is that this sequence is incomplete (it does not shrink the
backing file chain after the copy is complete), so if you keep repeating it,
you will eventually see reduced performance once you have a really long
chain of qcow2 overlays, or even cause qemu to run into fd exhaustion.
Also, these commands do not make it obvious that, unless you clean things
up, the second time you run them you do not want to copy XXX.img into the
backup, but rather the qcow2 wrapper file that was created by the first
snapshot (and which is itself wrapped by the second snapshot).

With new enough libvirt and qemu, you can shrink the chain back down with an 
active commit, as in:

virsh blockcommit vm XXX --verbose --active --pivot

Also, the use of --disk-only means that your disks have a snapshot taken as if 
at a point in time when you yank the power cord; reverting to such a backup may 
require fsck, and may suffer from other problems from anything that was 
depending on I/O that had not yet been flushed to disk.  If you add the 
--quiesce option (which implies that you set up your guest to use 
qemu-guest-agent, and told libvirt to manage the agent), then you can guarantee 
that your guest has flushed and frozen all filesystems prior to the point in 
time where the snapshot is created; and you can even install hooks in the guest 
to extend that stability to things like databases.  Then your backups are much 
easier to use.  If you omit --disk-only, and take a full snapshot, then you 
have the guest memory state that is necessary to restore all pending I/O, and 
don't need to worry about freezing the guest file systems; but then you have to 
remember to back up the memory state in addition to your disk state.
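
Putting those pieces together, a quiesced disk-only backup cycle might look
roughly like this (the 'vda' target and the paths are placeholders; the
--metadata flag on the final delete is needed because libvirt cannot merge
away an external snapshot simply by deleting it):

  virsh snapshot-create-as vm backsnap --disk-only --quiesce
  qemu-img convert -f qcow2 -O qcow2 disk.img /backup/nightly.img
  virsh blockcommit vm vda --verbose --active --pivot
  virsh snapshot-delete vm backsnap --metadata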

> 
> Would this be any better than my script, because it uses virsh
> snapshot-create-as instead of qemu-img snapshot? The second command is
> still qemu-img convert, which may be problematic.

No, remember what I said.  qemu-img may not be used on any image that is
opened read-write by qemu, but it is perfectly safe to do read-only
operations on any image that is opened read-only by qemu.  That sequence of
commands goes from the initial:

disk.img [read-write]

then the snapshot-create command causes libvirt to issue a QMP command to 
switch qemu to:

disk.img [read-only] <- overlay.qcow2 [read-write]

at which point you can do anything read-only to disk.img (whether 'qemu-img 
convert' or 'cp' or any other command that doesn't alter the contents of the 
file)

finally, the 'virsh blockcommit' command would take you back to:

disk.img [read-write]

> 
> The problem I am facing is that the documentation is not easy to understand 
> for the average user/administrator who is not among the kvm developers and 
> experts. I have of course tried to read section 14.2.

[Qemu-devel] [Bug 1392504] Re: USB Passthrough is not working anymore

2014-11-22 Thread Leen Keus
The same issue appeared after upgrading from Ubuntu 13.04 to 13.10; see bug
1245251 (AppArmor blocks USB devices in libvirt in Saucy). Could it be
related to this issue? Maybe a new or different line is needed in
/etc/apparmor.d/abstractions/libvirt-qemu?

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1392504




Re: [Qemu-devel] [PATCH 00/13] linux-aio/virtio-scsi: support AioContext wide IO submission as batch

2014-11-22 Thread Ming Lei
On Tue, Nov 18, 2014 at 9:57 PM, Paolo Bonzini  wrote:
>
>
> On 09/11/2014 08:42, Ming Lei wrote:
>> This patch implements AioContext wide IO submission as batch, and
>> the idea behind is very simple:
>>
>>   - linux native aio (io_submit) supports enqueuing read/write requests
>>   to different files
>>
>>   - in one AioContext, I/O requests from VM can be submitted to different
>>   backend in host, one typical example is multi-lun scsi
>>
>> This patch changes 'struct qemu_laio_state' to be per AioContext, and
>> multiple 'bs' can be associated with one single instance of
>> 'struct qemu_laio_state'; AioContext wide IO submission as batch
>> then becomes easy to implement.
>>
>> One simple test on my laptop shows ~20% throughput improvement on
>> randread from the VM (AioContext wide IO batching vs. no batching)
>> with the following config:
>>
>>   -drive id=drive_scsi1-0-0-0,if=none,format=raw,cache=none,aio=native,file=/dev/nullb2 \
>>   -drive id=drive_scsi1-0-0-1,if=none,format=raw,cache=none,aio=native,file=/dev/nullb3 \
>>   -device virtio-scsi-pci,num_queues=4,id=scsi1,addr=07,iothread=iothread0 \
>>   -device scsi-disk,bus=scsi1.0,channel=0,scsi-id=1,lun=0,drive=drive_scsi1-0-0-0,id=scsi1-0-0-0 \
>>   -device scsi-disk,bus=scsi1.0,channel=0,scsi-id=1,lun=1,drive=drive_scsi1-0-0-1,id=scsi1-0-0-1
>>
>> BTW, maybe more boost can be obtained, since ~33K/sec write() system
>> calls can be observed while this test case is running, and that might
>> be a recent regression (BH?).
>
> Ming,
>
> these patches are interesting.  I would like to compare them with the
> opposite approach (and, I think, more similar to your old work) where
> the qemu_laio_state API is moved entirely into AioContext, with lazy
> allocation (reference-counted too, probably).

Yes, it can be done that way, but the feature is specific to linux native
aio, so it might not be good to put it into AioContext.

Basically most of the implementation should be the same; the difference is
where the io queue lives.
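
As a rough illustration of the AioContext-wide idea (invented names, error
handling elided -- this is a sketch, not the actual patch): keep one
pending-iocb array shared by all BDSes in the context, and flush it with a
single io_submit() even when the iocbs target different fds:

    #include <errno.h>
    #include <string.h>
    #include <libaio.h>

    #define MAX_QUEUED 128

    typedef struct {
        io_context_t ctx;             /* one kernel AIO ctx per AioContext */
        struct iocb *pending[MAX_QUEUED];
        int nr;                       /* number of queued iocbs */
    } LaioState;

    /* queue an iocb; requests may belong to any fd/BDS in the context */
    static int laio_enqueue(LaioState *s, struct iocb *iocb)
    {
        if (s->nr == MAX_QUEUED) {
            return -EAGAIN;           /* full: flush before queueing more */
        }
        s->pending[s->nr++] = iocb;
        return 0;
    }

    /* flush the whole batch with one io_submit() system call */
    static int laio_flush(LaioState *s)
    {
        int ret = io_submit(s->ctx, s->nr, s->pending);
        if (ret > 0) {                /* possibly a partial submission */
            memmove(s->pending, s->pending + ret,
                    (s->nr - ret) * sizeof(struct iocb *));
            s->nr -= ret;             /* keep the unsubmitted remainder */
        }
        return ret;
    }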

>
> Most of the patches would be the same, but you would replace
> aio_attach_aio_bs/aio_detach_aio_bs with something like
> aio_native_get/aio_native_unref.  Ultimately block/{linux,win32}-aio.c
> could be merged into block/aio-{posix,win32}.c, but you do not have to
> do that now.
>
> Could you try that?  This way we can see which API turns out to be nicer.

I can try that. Could you share which APIs you would prefer?

IMO, the APIs can be defined flexibly in this patchset, and only the
AioContext parameter is needed.

Thanks,



Re: [Qemu-devel] [PATCH v3 2/3] linux-aio: handling -EAGAIN for !s->io_q.plugged case

2014-11-22 Thread Ming Lei
On Tue, Nov 18, 2014 at 10:06 PM, Paolo Bonzini  wrote:
>
>
> On 06/11/2014 16:10, Ming Lei wrote:
>> +/* don't submit until next completion for -EAGAIN of non plug case */
>> +if (unlikely(!s->io_q.plugged)) {
>> +return 0;
>> +}
>> +
>
> Is this an optimization or a fix for something?

It is for avoiding an unnecessary submission which would just return
another -EAGAIN.
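
Roughly, the intended non-plugged flow is (a sketch using the patch's
names, not the exact code):

    ret = io_submit(s->ctx, 1, &iocbs);
    if (ret == -EAGAIN) {
        /* park the iocb; the completion callback will call io_submit()
         * again once some in-flight requests have finished */
        ioq_enqueue(s, iocbs[0]);
        ret = 0;
    }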

>
>> +/*
>> + * Switch to queue mode until -EAGAIN is handled, we suppose
>> + * there is always uncompleted I/O, so try to enqueue it first,
>> + * and will be submitted again in following aio completion cb.
>> + */
>> +if (ret == -EAGAIN) {
>> +goto enqueue;
>> +} else if (ret < 0) {
>>  goto out_free_aiocb;
>>  }
>
> Better:
>
>  if (!s->io_q.plugged && !s->io_q.idx) {
> ret = io_submit(s->ctx, 1, &iocbs);
> if (ret >= 0) {
> return &laiocb->common;
> }
> if (ret != -EAGAIN) {
> goto out_free_aiocb;
> }
> }

Right.

Thanks,
Ming Lei



Re: [Qemu-devel] [PATCH v3 1/3] linux-aio: fix submit aio as a batch

2014-11-22 Thread Ming Lei
On Tue, Nov 18, 2014 at 10:18 PM, Paolo Bonzini  wrote:
>
>
>> @@ -137,6 +145,12 @@ static void qemu_laio_completion_bh(void *opaque)
>>  }
>>  }
>>
>> +static void qemu_laio_start_retry(struct qemu_laio_state *s)
>> +{
>> +if (s->io_q.idx)
>> +qemu_bh_schedule(s->io_q.retry);
>> +}
>> +
>>  static void qemu_laio_completion_cb(EventNotifier *e)
>>  {
>>  struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
>> @@ -144,6 +158,7 @@ static void qemu_laio_completion_cb(EventNotifier *e)
>>  if (event_notifier_test_and_clear(&s->e)) {
>>  qemu_bh_schedule(s->completion_bh);
>>  }
>> +qemu_laio_start_retry(s);
>
> I think you do not even need two bottom halves.  Just call ioq_submit
> from completion_bh instead, after the call to io_getevents.

Yes, that can save one BH; actually the patch was written when
there wasn't a completion BH yet. :-)
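
In other words, something like this (a sketch, not the actual code):

    static void qemu_laio_completion_bh(void *opaque)
    {
        struct qemu_laio_state *s = opaque;

        /* ... existing io_getevents() loop processing completions ... */

        /* kernel slots have been freed: retry any queued iocbs now */
        if (s->io_q.idx) {
            ioq_submit(s, false);
        }
    }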

>
>>  }
>>
>>  static void laio_cancel(BlockAIOCB *blockacb)
>> @@ -163,6 +178,9 @@ static void laio_cancel(BlockAIOCB *blockacb)
>>  }
>>
>>  laiocb->common.cb(laiocb->common.opaque, laiocb->ret);
>> +
>> +/* check if there are requests in io queue */
>> +qemu_laio_start_retry(laiocb->ctx);
>>  }
>>
>>  static const AIOCBInfo laio_aiocb_info = {
>> @@ -177,45 +195,80 @@ static void ioq_init(LaioQueue *io_q)
>>  io_q->plugged = 0;
>>  }
>>
>> -static int ioq_submit(struct qemu_laio_state *s)
>> +static void abort_queue(struct qemu_laio_state *s)
>> +{
>> +int i;
>> +for (i = 0; i < s->io_q.idx; i++) {
>> +struct qemu_laiocb *laiocb = container_of(s->io_q.iocbs[i],
>> +  struct qemu_laiocb,
>> +  iocb);
>> +laiocb->ret = -EIO;
>> +qemu_laio_process_completion(s, laiocb);
>> +}
>> +}
>> +
>> +static int ioq_submit(struct qemu_laio_state *s, bool enqueue)
>>  {
>>  int ret, i = 0;
>>  int len = s->io_q.idx;
>> +int j = 0;
>>
>> -do {
>> -ret = io_submit(s->ctx, len, s->io_q.iocbs);
>> -} while (i++ < 3 && ret == -EAGAIN);
>> +if (!len) {
>> +return 0;
>> +}
>>
>> -/* empty io queue */
>> -s->io_q.idx = 0;
>> +ret = io_submit(s->ctx, len, s->io_q.iocbs);
>> +if (ret == -EAGAIN) { /* retry in following completion cb */
>> +return 0;
>> +} else if (ret < 0) {
>> +if (enqueue) {
>> +return ret;
>> +}
>>
>> -if (ret < 0) {
>> -i = 0;
>> -} else {
>> -i = ret;
>> +/* in non-queue path, all IOs have to be completed */
>> +abort_queue(s);
>> +ret = len;
>> +} else if (ret == 0) {
>> +goto out;
>
> No need for goto; just move the "for" loop inside this conditional.  Or
> better, just use memmove.  That is:
>
> if (ret < 0) {
> if (ret == -EAGAIN) {
> return 0;
> }
> if (enqueue) {
> return ret;
> }
>
> abort_queue(s);
> ret = len;
> }
>
> if (ret > 0) {
> memmove(...)
> s->io_q.idx -= ret;
> }
> return ret;

The above is better.

>> + * update io queue, for partial completion, retry will be
>> + * started automatically in following completion cb.
>> + */
>> +s->io_q.idx -= ret;
>> +
>>  return ret;
>>  }
>>
>> -static void ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
>> +static void ioq_submit_retry(void *opaque)
>> +{
>> +struct qemu_laio_state *s = opaque;
>> +ioq_submit(s, false);
>> +}
>> +
>> +static int ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
>>  {
>>  unsigned int idx = s->io_q.idx;
>>
>> +if (unlikely(idx == s->io_q.size)) {
>> +return -1;
>
> return -EAGAIN?

It means the io queue is full, so the code has to fail the current
request.

Thanks,
Ming Lei



[Qemu-devel] [Bug 1395217] Re: Networking in qemu 2.0.0 and beyond is not compatible with Open Solaris (Illumos) 5.11

2014-11-22 Thread Paolo Bonzini
Can you try bisecting between 1.7 and 2.0 with git?
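
For example, assuming v1.7.0 as the last good tag and v2.0.0 as the first
bad one, the loop would be roughly:

  git bisect start v2.0.0 v1.7.0   # bad revision first, then good
  # rebuild, boot the Solaris guest and test networking, then mark it:
  git bisect good                  # or: git bisect bad
  git bisect reset                 # when finished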

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1395217

Title:
  Networking in qemu 2.0.0 and beyond is not compatible with Open
  Solaris (Illumos) 5.11

Status in QEMU:
  New

Bug description:
  The networking code in qemu in versions 2.0.0 and beyond is non-
  functional with Solaris/Illumos 5.11 images.

  Building 1.7.1, 2.0.0, 2.0.2, 2.1.2, and 2.2.0-rc1 with the following
  standard Slackware config:

  # From Slackware build tree . . . 
  ./configure \
--prefix=/usr \
--libdir=/usr/lib64 \
--sysconfdir=/etc \
--localstatedir=/var \
--enable-gtk \
--enable-system \
--enable-kvm \
--disable-debug-info \
--enable-virtfs \
--enable-sdl \
--audio-drv-list=alsa,oss,sdl,esd \
--enable-libusb \
--disable-vnc \
--target-list=x86_64-linux-user,i386-linux-user,x86_64-softmmu,i386-softmmu \
--enable-spice \
--enable-usb-redir 

  
  And attempting to run the same VM image with the following command (or via 
virt-manager):

  macaddress="DE:AD:BE:EF:3F:A4"

  qemu-system-x86_64 nex4x -cdrom /dev/cdrom -name "Nex41" -cpu Westmere
  -machine accel=kvm -smp 2 -m 4000 -net nic,macaddr=$macaddress
  -net bridge,br=br0 -net dump,file=/usr1/tmp/ -drive file=nex4x_d1
  -drive file=nex4x_d2 -enable-kvm

  Gives success on 1.7.1, and a deaf VM on all subsequent versions.

  Notably, in validating my config, a Windows 7 image runs cleanly with
  networking on *all* builds, so my configuration appears to be good - qemu
  just hates Solaris at this point.

  Watching with wireshark (as well as pulling network traces from qemu as
  noted above), it appears that the notable difference between the two
  configs is that for some reason Solaris gets stuck arping for its own
  interface on startup, and never really comes online on the network.  If
  other hosts attempt to ping the Solaris instance, they can successfully
  arp the bad VM, but not the other way around.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1395217/+subscriptions