Re: [Qemu-devel] [PATCH v3 0/4] target-arm: Handle tagged addresses when loading PC

2016-10-13 Thread Tom Hanson
On 10/12/2016 01:50 PM, Thomas Hanson wrote:
...
> 
>   Still looking into handling of tagged addresses for exceptions and
>   exception returns.  Will handle that as a separate patch set.

Peter,

Looking at arm_cpu_do_interrupt_aarch64() and the ARM spec, the new PC value is 
always an offset from the appropriate VBAR. The only place I can find the 
VBAR being set is at boot time (i.e., by UEFI).

Can the boot code use a tagged pointer to specify the VBAR?

Is there some other place/time when the VBAR can be modified post-boot?

Thanks,
Tom
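
For reference, a minimal sketch of the computation in question (illustrative
only, not QEMU code): the vector-entry PC is just VBAR_ELx plus a fixed
offset, so a tag could only show up there if one had been written into
VBAR_ELx itself.

#include <stdint.h>

/* Illustrative sketch: new PC on exception entry.  vec_offset is one of
 * 0x000, 0x080, 0x100, ... 0x780 depending on exception type and source.
 */
static uint64_t vector_entry_pc(uint64_t vbar_elx, unsigned vec_offset)
{
    uint64_t base = vbar_elx & ~0x7ffULL;   /* VBAR low bits are RES0 */
    return base + vec_offset;
}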



Re: [Qemu-devel] [PATCH 2/3] target-arm: Code changes to implement overwrite of tag field on PC load

2016-10-12 Thread Tom Hanson
On 10/11/2016 10:12 AM, Peter Maydell wrote:
> On 11 October 2016 at 16:51, Thomas Hanson  wrote:
>> On 5 October 2016 at 16:01, Peter Maydell  wrote:
>>> It matches the style of the rest of the code which generally
>>> prefers to convert register numbers into TCGv earlier rather
>>> than later (at the level which is doing decode of instruction
>>> bits, rather than inside utility functions), and gives you a
>>> more flexible utility function, which can do a "write value to PC"
>>> for any value, not just something that happens to be in a CPU
>>> register. And as you say it avoids calling cpu_reg() multiple times
>>> as a side benefit.
> 
>> This approach seems counter to both structured and OO design principles
>> which would push common code (like type conversion) down into the lower
>> level function in order to increase re-use and minimize code duplication.
>> Those principles suggest that if we need a gen_a64_set_pc_value() function
>> that can load the PC from something other than a register or an immediate,
>> then it should be a lower level function than, and be called by,
>> gen_a64_set_pc_reg().  This also has the benefit of reducing clutter in the
>> caller, making it more readable and more maintainable.
> 
> The 'lower level' stuff here has a general pattern of taking either
> (1) a TCGv or (2) an integer immediate. We should follow that pattern.
> 
>> As a separate issue, we now have functions to load the PC from an immediate
>> value and from a register.  Where else could we legitimately load the PC
>> from?
> 
> Anything where we found ourselves wanting to do some preliminary
> manipulation of the value before writing it to the PC.
> 
> thanks
> -- PMM
> 

I split gen_a64_set_pc_reg() into two functions: an upper one that takes a 
register number and a lower one that takes a variable (a TCGv_i64).  Patch v3 
submitted.
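
Roughly the shape of that split (function names here are illustrative, not
necessarily what the v3 patch uses, and the tag-clearing logic is elided):

/* Context: target-arm/translate-a64.c style code; sketch only. */

/* Lower level: write any TCGv_i64 value to the PC, cleaning the tag byte
 * when the TBI bits say it should be.
 */
void gen_a64_set_pc_var(DisasContext *s, TCGv_i64 src)
{
    /* ... TBI / bit-55 handling on 'src' would go here ... */
    tcg_gen_mov_i64(cpu_pc, src);
}

/* Upper level: convenience wrapper that still takes a register number. */
void gen_a64_set_pc_reg(DisasContext *s, unsigned int rn)
{
    gen_a64_set_pc_var(s, cpu_reg(s, rn));
}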




Re: [Qemu-devel] [PATCH 2/3] target-arm: Code changes to implement overwrite of tag field on PC load

2016-10-05 Thread Tom Hanson
On 09/29/2016 07:24 PM, Peter Maydell wrote:
> On 16 September 2016 at 10:34, Thomas Hanson  wrote:
...
>> diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
>> index f5e29d2..4d6f951 100644
...
>> @@ -176,6 +177,58 @@ void gen_a64_set_pc_im(uint64_t val)
>>  tcg_gen_movi_i64(cpu_pc, val);
>>  }
>>
>> +void gen_a64_set_pc_reg(DisasContext *s, unsigned int rn)
> 
> I think it would be better to take a TCGv_i64 here rather than
> unsigned int rn (ie have the caller do the cpu_reg(s, rn)).
> (You probably don't need that prototype of cpu_reg() above if
> you do this, though that's not why it's better.)
> 
Why would this be better? 

To me, the caller has a register number and wants that register used to load 
the PC.  So, it passes in the register number.  

The fact that gen_a64_set_pc_reg() needs to convert that into a TCGv_i64 is
an implementation detail that should be encapsulated/hidden from the caller.

If the desire is to eliminate the multiple cpu_reg() calls inside of 
gen_a64_set_pc_reg() then that mapping could be done at the top of the 
function before the outer if().
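
As an illustration of that alternative (sketch only; the TBI decision tree
itself is elided), the register-to-TCGv mapping would simply move to the top
of the existing function:

/* Context: target-arm/translate-a64.c style code; sketch only. */
void gen_a64_set_pc_reg(DisasContext *s, unsigned int rn)
{
    /* Convert the register number exactly once, before the outer if(). */
    TCGv_i64 src = cpu_reg(s, rn);

    /* ... outer if() on the TBI bits and current EL, operating on 'src',
     * goes here ...
     */
    tcg_gen_mov_i64(cpu_pc, src);
}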




Re: [Qemu-devel] [PATCH 3/3] target-arm: Comments to mark location of pending work for 56 bit addresses

2016-10-03 Thread Tom Hanson
On 09/30/2016 05:24 PM, Peter Maydell wrote:
> On 30 September 2016 at 15:46, Tom Hanson <thomas.han...@linaro.org> wrote:
>> On 09/29/2016 07:27 PM, Peter Maydell wrote:
>> ...
>>>> This work was not done at this time since the changes could not be tested
>>>> with current CPU models.  Comments have been added to flag the locations
>>>> where this will need to be fixed once a model is available.
>>>
>>> This is *not* why we haven't done this work. We haven't done it
>>> because the maximum virtual address size permitted by the
>>> architecture is less than 56 bits, and so this is a "can't happen"
>>> situation.
>>
>> But, in an earlier discussion which we had about the desire to use QEMU
>> to test potential new ARM-based architectures with large address spaces
>> I suggested that these changes be made now.  You said that the changes
>> shouldn't be made because:
>>     "where there is no supported guest CPU that could use
>>     that code, the code shouldn't be there because it's untested
>>     and untestable"
>> Isn't that the same thing I said above?
> 
> That's a general statement of principle about what I think we
> should or shouldn't write code for in QEMU. In this particular case,
> it's true, but the reason it's true isn't just that we don't
> currently have any 56 bit-VA CPUs implemented, but because such
> a CPU is not permitted by the architecture. That's a stronger
> statement and I think it's worth making.
> 

Per the current spec (and v2) that's true.  But the intent was to enable 
testing of "new ARM-based architectures with large address spaces."  Vendors 
and OEMs may have difficulty in determining whether to ask for / push for / 
support a future, larger address space in the absence of a platform which is 
capable of emulating the future architecture.  



Re: [Qemu-devel] [PATCH 3/3] target-arm: Comments to mark location of pending work for 56 bit addresses

2016-10-03 Thread Tom Hanson
On 09/30/2016 05:24 PM, Peter Maydell wrote:
 3 comments added in same file to identify cases in a switch.
>>>
>>> This should be a separate patch, because it is unrelated to the
>>> tagged address stuff.
>>
>> As part of that same conversation you suggested adding these
>> comments rather than making the changes:
>>     "If we can assert, or failing that have a comment in the place
>>     that would be modified anyway for 56 bit addresses then that
>>     ought to catch the future case I think."
> 
> Yes, I still think this. What does it have to do with adding
> "SVC", "HVC", etc comments to the switch cases? Those have
> nothing to do with tagged addresses or 56 bit VAs, and should
> not be in this patch (though I don't object to them inherently).
> 
> thanks
> -- PMM

Sorry, moving too fast and didn't look at which comments you were referring to. 
 I'll drop them.

-Tom




Re: [Qemu-devel] [PATCH 3/3] target-arm: Comments to mark location of pending work for 56 bit addresses

2016-09-30 Thread Tom Hanson
On 09/29/2016 07:27 PM, Peter Maydell wrote:
...
>> This work was not done at this time since the changes could not be tested
>> with current CPU models.  Comments have been added to flag the locations
>> where this will need to be fixed once a model is available.
> 
> This is *not* why we haven't done this work. We haven't done it
> because the maximum virtual address size permitted by the
> architecture is less than 56 bits, and so this is a "can't happen"
> situation.

But, in an earlier discussion which we had about the desire to use QEMU to test 
potential new ARM-based architectures with large address spaces I suggested 
that these changes be made now.  You said that the changes shouldn't be made 
because:
    "where there is no supported guest CPU that could use
    that code, the code shouldn't be there because it's untested
    and untestable"
Isn't that the same thing I said above?

>> 3 comments added in same file to identify cases in a switch.
> 
> This should be a separate patch, because it is unrelated to the
> tagged address stuff.

As part of that same conversation you suggested adding these comments rather 
than making the changes:
    "If we can assert, or failing that have a comment in the place
    that would be modified anyway for 56 bit addresses then that
    ought to catch the future case I think."




Re: [Qemu-devel] [PATCH 0/3] target-arm: Handle tagged addresses when loading PC

2016-09-30 Thread Tom Hanson

On 09/29/2016 07:37 PM, Peter Maydell wrote:

> On 16 September 2016 at 10:34, Thomas Hanson  wrote:
>
>> If tagged addresses are enabled, then addresses being loaded into the
>> PC must be cleaned up by overwriting the tag bits with either all 0's
>> or all 1's as specified in the ARM ARM spec.  The decision process is
>> dependent on whether the code will be running in EL0/1 or in EL2/3 and
>> is controlled by a combination of Top Byte Ignored (TBI) bits in the
>> TCR and the value of bit 55 in the address being loaded.
>>
>> TBI values are extracted from the appropriate TCR and made available
>> to TCG code generation routines by inserting them into the TB flags
>> field and then transferring them to the DisasContext structure in
>> gen_intermediate_code_a64().
>>
>> New function gen_a64_set_pc_reg() encapsulates the logic required to
>> determine whether clean up of the tag byte is required and then
>> generates the code to correctly load the PC.
>>
>> In addition to those instructions which can directly load a tagged
>> address into the PC, there are others which increment or add a value to
>> the PC.  If 56 bit addressing is used, these instructions can cause an
>> arithmetic roll-over into the tag bits.  The ARM ARM specification for
>> handling tagged addresses requires that these cases also be addressed
>> by cleaning up the tag field.  This work has been deferred because
>> there is currently no CPU model available for testing with 56 bit
>> addresses.
>
> These changes are OK (other than the comments I've made on the
> patches), but do not cover all the cases where values can be
> loaded into the PC and may need to be cleansed of their tags.
>
> In particular:
>   * on exception entry to AArch64 we may need to clean a tag out of
> the vector table base address register VBAR_ELx
> (in QEMU this would be in arm_cpu_do_interrupt_aarch64())
>   * on exception return to AArch64 we may need to clean a tag out of
> the return address we got from ELR_ELx
> (in QEMU, in the exception_return helper)
>
> Note that D4.1.1 of the ARM ARM describes a potential relaxation
> of the requirement that tag bits not be propagated into the PC
> in the case of an illegal exception return; I recommend not
> taking advantage of that relaxation unless it really does fall
> out of the implementation much more trivially that way.
>
> Watch out that you use the TBI bits for the destination EL in
> each case, not the EL you start in...
>
> thanks
> -- PMM

Peter,

As I read arm_cpu_do_interrupt_aarch64(), it sets the return address in 
env->elr_el[new_el] to env->pc (for AArch64).

Since the PC is always clean, how can a tagged address get saved off? Am 
I missing something?


-Tom
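
For context, here is a sketch of the tag-clean decision described in the
cover letter above (simplified: only the cases where TBI0 and TBI1 agree are
shown, the mixed case needing a run-time test of bit 55 is omitted, and the
field and function names are illustrative rather than taken from the patch):

/* Context: target-arm/translate-a64.c style code; sketch only. */
static void gen_set_pc_clean_tag(DisasContext *s, TCGv_i64 src)
{
    if (s->current_el <= 1 && s->tbi0 && s->tbi1) {
        /* EL0/EL1, TBI on for both halves: bits [63:56] become copies of
         * bit 55, so a sign-extension from bit 55 covers both halves.
         */
        TCGv_i64 tmp = tcg_temp_new_i64();
        tcg_gen_shli_i64(tmp, src, 8);
        tcg_gen_sari_i64(cpu_pc, tmp, 8);
        tcg_temp_free_i64(tmp);
    } else if (s->current_el >= 2 && s->tbi0) {
        /* EL2/EL3 with TBI: bits [63:56] are forced to zero. */
        tcg_gen_andi_i64(cpu_pc, src, 0x00ffffffffffffffULL);
    } else {
        /* TBI off (or the mixed TBI0/TBI1 case, which needs generated
         * code testing bit 55 and is omitted from this sketch).
         */
        tcg_gen_mov_i64(cpu_pc, src);
    }
}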



Re: [Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-05 Thread Tom Hanson
So, given that there is one register block per virt-mmio "bus", I agree that we
need a "dev path" distinction between them.

On 5 July 2016 at 14:22, Thomas Hanson  wrote:

> OK, that makes sense.  I was thinking that the MMIO transport would/could
> support multiple register blocks and thus multiple devices.
>
> On 5 July 2016 at 13:26, Laszlo Ersek (Red Hat)  wrote:
>
>> A virtio-mmio "bus" is a single-device transport. It has a fixed base
>> address that is set at board creation time. The MMIO area is 0x200 bytes
>> in size, and hosts the virtio registers for one device that can sit on
>> this transport. Transports can be unused.
>>
>> The "virt" machtype creates 32 transports (= 32 virtio-mmio "buses"
>> suitable for one virtio device each). This allows for 32 virtio devices
>> exposed via virtio-mmio. The placement of the different virtio-mmio
>> "buses" at specific addresses in MMIO space is board specific.
>>
>> So yes, it definitely makes sense to create several of these "buses".
>> It's better to think of a single virtio-mmio "bus" as a virtio-mmio
>> "transport" or "register block". The "bus" terminology is just an
>> internal QEMU detail. (It is not enumerable in hardware, for example.)
>>
>> --
>> You received this bug notification because you are subscribed to the bug
>> report.
>> https://bugs.launchpad.net/bugs/1594239
>>
>> Title:
>>   After adding more scsi disks for Aarch64 virtual machine, start the VM
>>   and got Qemu Error
>>
>> Status in QEMU:
>>   Confirmed
>>
>> Bug description:
>>   Description
>>   ===
>>   Using virt-manager to create a VM on AArch64, Ubuntu 16.04.
>>   Add SCSI disks to the VM. After adding four or more SCSI disks, starting
>> the VM produces a QEMU error.
>>
>>   Steps to reproduce
>>   ==
>>   1. Use virt-manager to create a VM.
>>   2. After the VM is started, add SCSI disks to the VM. They will be
>> allocated as "sdb, sdc, sdd, ...".
>>   3. If a disk gets a name beyond sdg, virt-manager will also assign a
>> virtio-scsi controller for it, and the VM will be shut down.
>>   4. Start the VM and observe the error log.
>>
>>
>>   Expected result
>>   ===
>>   The VM starts smoothly and the added disks work.
>>
>>   Actual result
>>   =
>>   Got the error:
>>   starting domain: internal error: process exited while connecting to
>> monitor: qemu-system-aarch64:
>> /build/qemu-zxCwKP/qemu-2.5+dfsg/migration/savevm.c:620:
>> vmstate_register_with_alias_id: Assertion `!se->compat || se->instance_id
>> == 0' failed.
>>   details=Traceback (most recent call last):
>> File "/usr/share/virt-manager/virtManager/asyncjob.py", line 90, in
>> cb_wrapper
>>   callback(asyncjob, *args, **kwargs)
>> File "/usr/share/virt-manager/virtManager/asyncjob.py", line 126, in
>> tmpcb
>>   callback(*args, **kwargs)
>> File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 83,
>> in newfn
>>   ret = fn(self, *args, **kwargs)
>> File "/usr/share/virt-manager/virtManager/domain.py", line 1402, in
>> startup
>>   self._backend.create()
>> File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1035,
>> in create
>>   if ret == -1: raise libvirtError ('virDomainCreate() failed',
>> dom=self)
>>   libvirtError: internal error: process exited while connecting to
>> monitor: qemu-system-aarch64:
>> /build/qemu-zxCwKP/qemu-2.5+dfsg/migration/savevm.c:620:
>> vmstate_register_with_alias_id: Assertion `!se->compat || se->instance_id
>> == 0' failed.
>>
>>
>>   Environment
>>   ===
>>   1. virt-manager version is 1.3.2
>>
>>   2. Which hypervisor did you use?
>>   Libvirt+KVM
>>   $ kvm --version
>>   QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1),
>> Copyright (c) 2003-2008 Fabrice Bellard
>>   $ libvirtd --version
>>   libvirtd (libvirt) 1.3.1
>>
>>   3. Which storage type did you use?
>>  In the host file system, all on one physical machine.
>>   stack@u202154:/opt/stack/nova$ df -hl
>>   Filesystem Size Used Avail Use% Mounted on
>>   udev 7.8G 0 7.8G 0% /dev
>>   tmpfs 1.6G 61M 1.6G 4% /run
>>   /dev/sda2 917G 41G 830G 5% /
>>   tmpfs 7.9G 0 7.9G 0% /dev/shm
>>   tmpfs 5.0M 0 5.0M 0% /run/lock
>>   tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
>>   /dev/sda1 511M 888K 511M 1% /boot/efi
>>   cgmfs 100K 0 100K 0% /run/cgmanager/fs
>>   tmpfs 1.6G 0 1.6G 0% /run/user/1002
>>   tmpfs 1.6G 0 1.6G 0% /run/user/1000
>>   tmpfs 1.6G 0 1.6G 0% /run/user/0
>>
>>   4. Environment information:
>>  Architecture : AARCH64
>>  OS: Ubuntu 16.04
>>
>>   The QEMU command line from libvirt is:
>>   2016-06-20 02:39:46.561+: starting up libvirt version: 1.3.1,
>> package: 1ubuntu10 (William Grant  Fri, 15 Apr 2016
>> 12:08:21 +1000), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1),
>> hostname: u202154
>>   LC_ALL=C
>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
>> QEMU_AUDIO_DRV=none /usr/bin/kvm 

Re: [Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-05 Thread Tom Hanson
OK, that makes sense.  I was thinking that the MMIO transport would/could
support multiple register blocks and thus multiple devices.

On 5 July 2016 at 13:26, Laszlo Ersek (Red Hat) 
wrote:

> A virtio-mmio "bus" is a single-device transport. It has a fixed base
> address that is set at board creation time. The MMIO area is 0x200 bytes
> in size, and hosts the virtio registers for one device that can sit on
> this transport. Transports can be unused.
>
> The "virt" machtype creates 32 transports (= 32 virtio-mmio "buses"
> suitable for one virtio device each). This allows for 32 virtio devices
> exposed via virtio-mmio. The placement of the different virtio-mmio
> "buses" at specific addresses in MMIO space is board specific.
>
> So yes, it definitely makes sense to create several of these "buses".
> It's better to think of a single virtio-mmio "bus" as a virtio-mmio
> "transport" or "register block". The "bus" terminology is just an
> internal QEMU detail. (It is not enumerable in hardware, for example.)
>

[Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-05 Thread Tom Hanson
I tested Laszlo's patch against this scenario and it eliminated the
error.

However, I'm still not convinced that it's needed.

Let's start with a basic question: Does it make sense for there to be
more than one MMIO "bus" on a system?

After all, it's NOT a physical bus and there's only one set of physical
memory (at least on anything we're currently modelling).

Would it make as much sense to disallow a second virtio-mmio bus??


Re: [Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-05 Thread Tom Hanson
On 07/05/2016 10:29 AM, Peter Maydell wrote:
...
> The virt board creates a collection of virtio-mmio transports,
> so if you create just a backend on the command line (via
> "-device virtio-scsi-device") it will be plugged into a
> virtio-bus on a virtio-mmio transport.
>
> You almost certainly didn't want to do this -- virtio-mmio
> is only there for legacy reasons [it predates pci support
> in the 'virt' board and the device-tree-driven kernel and
> for a time it was the only way to do virtio].

Should this behavior be changed?  Use PCI as a default instead?


[Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-05 Thread Tom Hanson
I haven't dug into the code for this particular aspect (yet), but it
sounds like when a scsi-hd device is specified with a virtio backend and
no virtio bus, it defaults to an MMIO bus.  Is this correct?

A few questions:

1) Is it valid for a SCSI drive to default to an MMIO bus/backend? Or
should it have defaulted to PCI?

2) Given that the 2 scsi-hd devices were specified with no bus, no ID,
and no LUN, was there anything incorrect in how QEMU handled them? (Other
than a more verbose error message being desirable.)

3) In the general case of a MMIO device, do they need to have a unique
dev path?  In the real world, there's no bus, no bus address, nothing
that looks like a dev path.  Just a memory address.

4) Or is it the case that MMIO devices need to be unique based solely on
the device characteristics?


Re: [Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-05 Thread Tom Hanson
What would a device path look like for an MMIO backend?
On Jul 5, 2016 9:35 AM, "Laszlo Ersek (Red Hat)"  wrote:

> I don't think this difference is intentional. I think we're seeing an
> interplay between the following two commits:
>
> * http://git.qemu.org/?p=qemu.git;a=commitdiff;h=4d2ffa08b601b
> * http://git.qemu.org/?p=qemu.git;a=commitdiff;h=7685ee6abcb93
>
> Referring to the message of the second commit above, the problem is that
> sysbus doesn't implement get_dev_path, hence it doesn't support
> "creating unique savevm id strings".
>
> virtio_bus_get_dev_path() simply defers to the parent bus ("main-system-
> bus" in this case).
>
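
A paraphrased sketch of the deferral described above (reconstructed from
memory, not a verbatim copy of hw/virtio/virtio-bus.c):

/* The virtio "bus" has no addressing of its own, so the dev-path request is
 * forwarded to whatever bus the proxy (transport) device sits on.  A PCI
 * parent can produce a unique path; a sysbus-based virtio-mmio transport
 * cannot, hence the colliding savevm id strings.
 */
static char *virtio_bus_get_dev_path(DeviceState *dev)
{
    BusState *bus = qdev_get_parent_bus(dev);
    DeviceState *proxy = DEVICE(bus->parent);

    return qdev_get_dev_path(proxy);
}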

[Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-05 Thread Tom Hanson
So, in the original minimal command line above (#3) is the transport/bus
missing?  Or is mmio implied? Or?


[Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-05 Thread Tom Hanson
As noted above, virtio-scsi-pci uses a bus address as part of the
internal ID string while virtio-scsi-device does not.

  * Is this difference intentional?

  * Are they intended to support different use cases? If so, what?


[Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-01 Thread Tom Hanson
This looks like a command line / configuration issue which results in a
name collision as Dave predicted above.

I had to piece this together out of bits of information since the documentation 
is a bit sparse, but the following works.  Note the explicit ID and LUN values 
on the -device declarations:
sudo qemu-system-aarch64 -enable-kvm -machine virt -cpu host -machine type=virt 
-nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img  --append 
"console=ttyAMA0" \
  -device virtio-scsi-device,id=scsi0 \
  -device virtio-scsi-device,id=scsi1 \
  -drive file=scsi_1.img,format=raw,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=0,drive=d0 \
  -drive file=scsi_2.img,format=raw,if=none,id=d1 \
  -device scsi-hd,bus=scsi1.0,scsi-id=0,lun=1,drive=d1

Added debug shows the following (Note the LUN value of 1 for the second drive):
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_new_instance_id: For [0:0:0/scsi-disk], Init instance_id to [0]
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_compat_instance_id: Found match for [scsi-disk], incrementing 
instance_id is now [1]
calculate_new_instance_id: For [0:0:1/scsi-disk], Init instance_id to [0]

Note: even though it's on a different bus, specifying the same id & lun
will cause a collision.

If desired, the above can be simplified to use a single bus:
sudo qemu-system-aarch64 -enable-kvm -machine virt -cpu host -machine type=virt 
-nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img  --append 
"console=ttyAMA0" \
  -device virtio-scsi-device,id=scsi0 \
  -drive file=scsi_1.img,format=raw,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=0,drive=d0 \
  -drive file=scsi_2.img,format=raw,if=none,id=d1 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=1,drive=d1

Searching the web, I saw this more commonly done with virtio-scsi-pci instead 
of virtio-scsi-device (but I can't tell you why):
sudo qemu-system-aarch64 -enable-kvm -machine virt -cpu host -machine type=virt 
-nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img  --append 
"console=ttyAMA0" \
  -device virtio-scsi-pci,id=scsi0 \
  -device virtio-scsi-pci,id=scsi1 \
  -drive file=scsi_1.img,format=raw,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=0,drive=d0 \
  -drive file=scsi_2.img,format=raw,if=none,id=d1 \
  -device scsi-hd,bus=scsi1.0,scsi-id=0,lun=1,drive=d1

Note that the name used internally now includes the bus id:
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_new_instance_id: For [:00:02.0/0:0:0/scsi-disk], Init instance_id 
to [0]
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_compat_instance_id: Found match for [scsi-disk], incrementing 
instance_id is now [1]
calculate_new_instance_id: For [:00:03.0/0:0:1/scsi-disk], Init instance_id 
to [0]

This means that it is now possible to use the same LUN for the drives on the 2 
different buses:
sudo qemu-system-aarch64 -enable-kvm -machine virt -cpu host -machine type=virt 
-nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img  --append 
"console=ttyAMA0" \
  -device virtio-scsi-pci,id=scsi0 \
  -device virtio-scsi-pci,id=scsi1 \
  -drive file=scsi_1.img,format=raw,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=0,drive=d0 \
  -drive file=scsi_2.img,format=raw,if=none,id=d1 \
  -device scsi-hd,bus=scsi1.0,scsi-id=0,lun=0,drive=d1

Internally:
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_new_instance_id: For [:00:02.0/0:0:0/scsi-disk], Init instance_id 
to [0]
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_compat_instance_id: Found match for [scsi-disk], incrementing 
instance_id is now [1]
calculate_new_instance_id: For [:00:03.0/0:0:0/scsi-disk], Init instance_id 
to [0]

Here also, a single bus works fine as long as ID + LUN is unique:
sudo qemu-system-aarch64 -enable-kvm -machine virt -cpu host -machine type=virt 
-nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img  --append 
"console=ttyAMA0" \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=scsi_1.img,format=raw,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=1,drive=d0 \
  -drive file=scsi_2.img,format=raw,if=none,id=d1 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=5,drive=d1

Internally:
calculate_new_instance_id: For [:00:02.0/virtio-scsi], Init instance_id to 
[0]
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_new_instance_id: For [:00:02.0/0:0:1/scsi-disk], Init instance_id 
to [0]
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_compat_instance_id: Found match for [scsi-disk], incrementing 
instance_id is now [1]
calculate_new_instance_id: For [:00:02.0/0:0:5/scsi-disk], Init instance_id 
to [0]


Re: [Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-01 Thread Tom Hanson
We may be saying the same thing, but I'd word it differently.  If a
"device" has a "path" then it gets an se->compat (compatibility?) record.
   -  Within that record each device gets an instance_id value based on its
name.  Multiple IDs for the same name are allowed.
   -  At the "se" level each device also gets an instance id, but now based
on path + name.  There can only be one instance for that combination, which
requires that the path be unique for each device name.

In this case both SCSI devices have the path "0:0:0" (chan:id:lun), which
violates the above requirement.

Looking at the debug info I noticed that for "virtio-net" the (PCI) path is
not all zeroes (:00:01.0).  Makes me wonder if maybe something on the
SCSI side of things should be generating valid paths.

Still digging.
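
To illustrate the rule being described, here is a toy model of the counting
behaviour visible in the debug output (not the real savevm.c code):

#include <string.h>

struct save_entry {
    char idstr[128];       /* "<dev path>/<vmsd name>" or just "<vmsd name>" */
    int  instance_id;
};

/* Next instance id for a given idstr: 0 means the path+name combination is
 * unique so far; the assertion in the bug effectively demands 0 whenever a
 * compat (name-only) record also exists.
 */
static int next_instance_id(const struct save_entry *list, int n,
                            const char *idstr)
{
    int id = 0;

    for (int i = 0; i < n; i++) {
        if (strcmp(list[i].idstr, idstr) == 0 && list[i].instance_id >= id) {
            id = list[i].instance_id + 1;
        }
    }
    return id;
}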

On 1 July 2016 at 09:08, Dr. David Alan Gilbert 
wrote:

> Yeh I *think* the idea is that you either:
>  a) have an instance_id
> or
>  b) have a unique name
>  in which case you're also allowed to have an old compatibility
> name/instance_id to work with old code that didn't have a unique name
> (that's in se->compat)
>
> so the assert is:
>assert(!se->compat || se->instance_id == 0);
>
>  The !se->compat  corresponds to (a)
>  se->instance_id == 0 corresponds to (b)
>
> Having a unique name is a very good idea for hotplug - it lets you
> unplug the middle one and still receive a migration correctly.
>
> Dave
>
> --
> You received this bug notification because you are subscribed to the bug
> report.
> https://bugs.launchpad.net/bugs/1594239
>
> Title:
>   After adding more scsi disks for Aarch64 virtual machine, start the VM
>   and got Qemu Error
>
> Status in QEMU:
>   Confirmed
>
> Bug description:
>   Description
>   ===
>   Using virt-manager to create a VM in Aarch64, Ubuntu 16.04.
>   Add scsi disk to the VM. After add four or more scsi disks, start the VM
> and will got Qemu error.
>
>   Steps to reproduce
>   ==
>   1.Use virt-manager to create a VM.
>   2.After the VM is started, add scsi disk to the VM. They will be
> allocated to "sdb,sdc,sdd." .
>   3.If we got a disk name > sdg, virt-manager will also assign a
> virtio-scsi controller for this disk.And the VM will be shutdown.
>   4.Start the VM, will see the error log.
>
>
>   Expected result
>   ===
>   Start the vm smoothly.The added disks can work.
>
>   Actual result
>   =
>   Got the error:
>   starting domain: internal error: process exited while connecting to
> monitor: qemu-system-aarch64:
> /build/qemu-zxCwKP/qemu-2.5+dfsg/migration/savevm.c:620:
> vmstate_register_with_alias_id: Assertion `!se->compat || se->instance_id
> == 0' failed.
>   details=Traceback (most recent call last):
> File "/usr/share/virt-manager/virtManager/asyncjob.py", line 90, in
> cb_wrapper
>   callback(asyncjob, *args, **kwargs)
> File "/usr/share/virt-manager/virtManager/asyncjob.py", line 126, in
> tmpcb
>   callback(*args, **kwargs)
> File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 83,
> in newfn
>   ret = fn(self, *args, **kwargs)
> File "/usr/share/virt-manager/virtManager/domain.py", line 1402, in
> startup
>   self._backend.create()
> File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1035,
> in create
>   if ret == -1: raise libvirtError ('virDomainCreate() failed',
> dom=self)
>   libvirtError: internal error: process exited while connecting to
> monitor: qemu-system-aarch64:
> /build/qemu-zxCwKP/qemu-2.5+dfsg/migration/savevm.c:620:
> vmstate_register_with_alias_id: Assertion `!se->compat || se->instance_id
> == 0' failed.
>
>
>   Environment
>   ===
>   1. virt-manager version is 1.3.2
>
>   2. Which hypervisor did you use?
>   Libvirt+KVM
>   $ kvm --version
>   QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1),
> Copyright (c) 2003-2008 Fabrice Bellard
>   $ libvirtd --version
>   libvirtd (libvirt) 1.3.1
>
>   3. Which storage type did you use?
>  In the host file system,all in one physics machine.
>   stack@u202154:/opt/stack/nova$ df -hl
>   Filesystem Size Used Avail Use% Mounted on
>   udev 7.8G 0 7.8G 0% /dev
>   tmpfs 1.6G 61M 1.6G 4% /run
>   /dev/sda2 917G 41G 830G 5% /
>   tmpfs 7.9G 0 7.9G 0% /dev/shm
>   tmpfs 5.0M 0 5.0M 0% /run/lock
>   tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
>   /dev/sda1 511M 888K 511M 1% /boot/efi
>   cgmfs 100K 0 100K 0% /run/cgmanager/fs
>   tmpfs 1.6G 0 1.6G 0% /run/user/1002
>   tmpfs 1.6G 0 1.6G 0% /run/user/1000
>   tmpfs 1.6G 0 1.6G 0% /run/user/0
>
>   4. Environment information:
>  Architecture : AARCH64
>  OS: Ubuntu 16.04
>
>   The Qemu commmand of libvirt is :
>   2016-06-20 02:39:46.561+: starting up libvirt version: 1.3.1,
> package: 1ubuntu10 (William Grant  Fri, 15 Apr 2016
> 12:08:21 +1000), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1),
> 

Re: [Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-07-01 Thread Tom Hanson
Thanks!  That makes sense.  But, off the cuff,  it seems odd that
there's an instance_id if it can only be zero. But then again, it may be
overloaded or be applicable in other cases.  I'll dig into the code
today.


On 07/01/2016 02:27 AM, Dr. David Alan Gilbert wrote:
> Hi Tom,
>Yeh it's just vmstate_register_with_alias_id printing vmsd->name at entry,
> and then after the char *id =   printing that as well (that's what I 
> labelled as the dev/id case).
> Then just before the assert I was printing the se->compat and se->instance_id 
> values.
>
> I noticed this bug because one of our test team had hit the same assert
> a few weeks back on x86, but it was on a truly bizarre setup (~50 nested
> PCIe bridges) so I knew where to look for it.
>
> I think the idea is that if you have a se->compat string then it had
> better be unique (that is instance_id == 0); and the compat string is
> formed by concatenation of the qdev path and the name of this device.
> Then we have '0.0.0' as the name of this scsi device (i.e. local to this
> SCSI adapter) but no path that gives a unique string for the adapter
> like we do on the x86.
>
> Dave
>


[Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-06-30 Thread Tom Hanson
Dave,

Yeah, well, never mind.  :-)

If I'd looked at the code first, I'd have seen the function name and, by
thinking about which data I'd want to dump and where, I'd have figured
out where the debug data came from.

-Tom


[Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

2016-06-30 Thread Tom Hanson
Dave,

How did you get the debug info in #4 above?

I can now replicate the error, but can't get to the monitor.  I'm new to
qemu so it's probably one of those things I haven't learned yet.

Thanks,
Tom


Re: [Qemu-devel] [Qemu-arm] [PATCH] target-arm: Fix descriptor address masking in ARM address translation

2016-04-26 Thread Tom Hanson

On 03/21/2016 09:56 AM, Sergey Sorokin wrote:

17.03.2016, 18:24, "Peter Maydell" :

  On 17 March 2016 at 15:21, Sergey Sorokin  wrote:

   17.03.2016, 14:40, "Peter Maydell" :

   On 13 March 2016 at 18:28, Sergey Sorokin  wrote:

   If you want to implement the AddressSize checks that's fine,
   but otherwise please leave this bit of the code alone.


You told me that my code is not correct, but I have proved that it conforms
to the documentation.
It's a bit obfuscating when the doc explicitly says to take bits up to 39
from the descriptor, but in QEMU we take bits up to 47 relying on the check 
in
another part of the code, even if both ways are correct.


   The way the code in QEMU is structured is that we extract the
   descriptor field in one go and then will operate on it
   (checking for need to AddressSize fault, etc) as a second
   action. The field descriptors themselves are the sizes I said.


   Well, may be it's enough just to change this comment as you intend:


   - /* The address field in the descriptor goes up to bit 39 for ARMv7
   - * but up to bit 47 for ARMv8.
   + /* The address field in the descriptor goes up to bit 39 for AArch32
   + * but up to bit 47 for AArch64.
 */


  The comment is correct as it stands.

  thanks
  -- PMM


I mean in the patch.
We need to fix lower bits in descaddrmask anyway.
So:

I could describe in the comment, that the descriptor field is up to bit 47 for 
ARMv8 (as long as you want it),
but we use the descaddrmask up to bit 39 for AArch32,
because we don't need other bits in that case to construct next descriptor 
address.
It is clearly described in the ARM pseudo-code.
Why should we keep in the mask bits from 40 up to 47 if we don't need them? 
Even if they are all zeroes.
It is a bit obfuscating, as I said.


 I agree with Peter.  The original comment is correct.

Looking at the TLBRecord AArch32.TranslationTableWalkLD pseudocode, it is 
treating the AArch32 address as 48 bits long.  For example:
if !IsZero(baseregister<47:40>) then
    level = 0;
    result.addrdesc.fault = AArch32.AddressSizeFault(ipaddress, domain, level,
                                                     acctype, iswrite,
                                                     secondstage, s2fs1walk);
    return result;

This requires that an AArch32 address have specific values up through bit 47.
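
For concreteness, the two masks being argued about boil down to these
constants (illustrative only, assuming a 4KB granule so bits [11:0] are not
part of the output address -- not the QEMU code itself):

  /* Illustrative constants only, assuming a 4KB translation granule. */
  #define DESCADDR_MASK_UP_TO_47  0x0000fffffffff000ULL  /* bits [47:12], ARMv8 */
  #define DESCADDR_MASK_UP_TO_39  0x000000fffffff000ULL  /* bits [39:12], ARMv7 */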



Re: [Qemu-devel] best way to implement emulation of AArch64 tagged addresses

2016-04-13 Thread Tom Hanson

On 04/11/2016 06:58 AM, Thomas Hanson wrote:

Ah, true.

On 9 April 2016 at 09:57, Richard Henderson > wrote:

On 04/08/2016 05:29 PM, Thomas Hanson wrote:

Looking at tcg_out_tlb_load():
If I'm reading the pseudo-assembler of the function names
correctly, it looks
like in the i386 code we're already masking the address being
checked:
  tgen_arithi(s, ARITH_AND + trexw, r1,
              TARGET_PAGE_MASK | (aligned ? s_mask : 0), 0);
where  TARGET_PAGE_MASK is a simple all-1's mask in the
appropriate upper bits.

Can we just poke some 0's into that mask in the tag locations?


No, because we'd no longer have a sign-extended 32-bit value, as
fits in that immediate operand field.  To load the constant you're
asking for, we'd need a 64-bit move insn and another register.


r~



[Sorry for the previous top post(s).  I've switched email clients...]

So, is the consensus that it's not worth adding an instruction to the 
fast path to avoid kicking out TLB entries with non-matching tags?


Or is this still under consideration?
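
As a footnote, the immediate-size constraint Richard mentioned is easy to
check numerically.  This little sketch assumes 4KB pages and an 8-bit
top-byte tag, and is not actual TCG code:

  #include <stdint.h>
  #include <stdio.h>

  /* x86-64 AND-with-immediate takes a 32-bit value that is sign-extended
   * to 64 bits; check which masks fit that form. */
  static int fits_sext32(uint64_t v)
  {
      return (uint64_t)(int64_t)(int32_t)v == v;
  }

  int main(void)
  {
      uint64_t page_mask = ~(uint64_t)0xfff;                  /* 0xfffffffffffff000 */
      uint64_t tag_clear = page_mask & 0x00ffffffffffffffULL; /* also clear bits 63:56 */

      printf("page mask fits sext32: %d\n", fits_sext32(page_mask)); /* prints 1 */
      printf("tag-clear fits sext32: %d\n", fits_sext32(tag_clear)); /* prints 0, so it
                                             would need a 64-bit move and a register */
      return 0;
  }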



Re: [Qemu-devel] best way to implement emulation of AArch64 tagged addresses

2016-04-08 Thread Tom Hanson
On Mon, 2016-04-04 at 10:56 -0700, Richard Henderson wrote:
> On 04/04/2016 09:31 AM, Peter Maydell wrote:
> > On 4 April 2016 at 17:28, Richard Henderson  wrote:
> >> On 04/04/2016 08:51 AM, Peter Maydell wrote:
> >>> In particular I think if you just do the relevant handling of the tag
> >>> bits in target-arm's get_phys_addr() and its subroutines then this
> >>> should work ok, with the exceptions that:
> >>>* the QEMU TLB code will think that [tag A + address X] and
> >>>  [tag B + address X] are different virtual addresses and they will
> >>>  miss each other in the TLB
> >>
> >>
> >> Yep.  Not only miss, but actively contend with each other.
> >
> > Yes. Can we avoid that, or do we just have to live with it? I guess
> > if the TCG fast path is doing a compare on full insn+tag then we
> > pretty much have to live with it.
> 
> We have to live with it.  Implementing a more complex hashing algorithm in 
> the 
> fast path is probably a non-starter.
> 
> Hopefully if one is using multiple tags, they'll still be in the victim cache 
> and so you won't have to fall back to the full tlb lookup.
> 
> 
> r~

It seems like the "best" solution would be to mask the tag in the TLB
and it feels like it should be possible.  BUT I need to dig into the
code more.
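
One way I can picture "mask the tag" -- purely a sketch, assuming a 48-bit VA
with top-byte-ignore enabled, not a proposed patch -- is to replace bits
[63:56] with copies of bit 55 before the address is used for translation:

  #include <stdint.h>

  static inline uint64_t strip_top_byte(uint64_t addr)
  {
      /* Sign-extend from bit 55: bits [63:56] become copies of bit 55. */
      return (uint64_t)(((int64_t)(addr << 8)) >> 8);
  }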

Is it an option to mask off the tag bits in all cases? Is there any case
in which those bits are valid address bits?

-TWH