[Qemu-devel] [Bug 1531632] Re: Can't compile qemu because of errors in the Xen code

2016-01-14 Thread Leo
Hello pranith,

  Well, as I'm using the "ABS" system from Arch Linux, I had to study
how it compiles things, but I found it:

./configure --prefix=/usr --sysconfdir=/etc --audio-drv-list='pa alsa sdl' \
  --python=/usr/bin/python2 --smbd=/usr/bin/smbd \
  --enable-docs --libexecdir=/usr/lib/qemu \
  --disable-gtk --enable-linux-aio --enable-seccomp \
  --enable-spice --localstatedir=/var \
  --enable-tpm \
  --enable-modules --enable-{rbd,glusterfs,libiscsi,curl}

Then I downloaded a copy of qemu with git and ran its configure help 
(configure --help); there I saw that I can enable/disable xen, so I added the 
--disable-xen to the above line in the PKGBUILD file from ABS and it compiled.
So, on **my** box I just had to disable Xen, as I don't use it.
Thank you for your help.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1531632

Title:
  Can't compile qemu because of errors in the Xen code

Status in QEMU:
  New

Bug description:
  I'm using Arch Linux, with all needed lib packages installed via ABS 
(compiled from source).
  I tried to clone the master repository, the v2.5.0 and the stable-2.4.0; in 
all cases I had the same problems:

  First I have to disable -Werror, because it complains about some
  uninitialized variables.

  Trying to compile the code, it stops when compiling the xen code
  (hw/block/xen_disk.o), complaining that ioservid_t is declared twice,
  first as 16-bit and then as 32-bit.

  Output of make:

  CC    hw/block/xen_disk.o
  In file included from /home/leo/qemu/include/hw/xen/xen_backend.h:4:0,
   from /home/leo/qemu/hw/block/xen_disk.c:39:
  /home/leo/qemu/include/hw/xen/xen_common.h:198:18: error: conflicting types 
for ‘ioservid_t’
   typedef uint16_t ioservid_t;
^
  In file included from /usr/include/xenctrl.h:37:0,
   from /home/leo/qemu/include/hw/xen/xen_common.h:9,
   from /home/leo/qemu/include/hw/xen/xen_backend.h:4,
   from /home/leo/qemu/hw/block/xen_disk.c:39:
  /usr/include/xen/xen.h:353:18: note: previous declaration of ‘ioservid_t’ was 
here
   typedef uint32_t ioservid_t;
        ^
  In file included from /home/leo/qemu/include/hw/xen/xen_backend.h:4:0,
   from /home/leo/qemu/hw/block/xen_disk.c:39:
  /home/leo/qemu/include/hw/xen/xen_common.h: In function 
‘xen_get_ioreq_server_info’:
  /home/leo/qemu/include/hw/xen/xen_common.h:256:36: error: 
‘HVM_PARAM_IOREQ_PFN’ undeclared (first use in this function)
   rc = xc_get_hvm_param(xc, dom, HVM_PARAM_IOREQ_PFN, &param);
  ^
  /home/leo/qemu/include/hw/xen/xen_common.h:256:36: note: each undeclared 
identifier is reported only once for each function it appears in
  In file included from /home/leo/qemu/include/hw/xen/xen_backend.h:4:0,
   from /home/leo/qemu/hw/block/xen_disk.c:39:
  /home/leo/qemu/include/hw/xen/xen_common.h:264:36: error: 
‘HVM_PARAM_BUFIOREQ_PFN’ undeclared (first use in this function)
   rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_PFN, &param);
  ^
  /home/leo/qemu/rules.mak:57: recipe for target 'hw/block/xen_disk.o' failed
  make: *** [hw/block/xen_disk.o] Error 1

[Qemu-devel] [Bug 1531632] [NEW] Can't compile qemu because of errors in the Xen code

2016-01-07 Thread Leo
Public bug reported:

I'm using Arch Linux, with all needed lib packages installed via ABS (compiled 
from source).
I tried to clone the master repository, the v2.5.0 and the stable-2.4.0; in all 
cases I had the same problems:

First I have to disable -Werror, because it complains about some
uninitialized variables.

Trying to compile the code, it stops when compiling the xen code
(hw/block/xen_disk.o), complaining that ioservid_t is declared twice,
first as 16-bit and then as 32-bit.

Output of make:

  CC    hw/block/xen_disk.o
In file included from /home/leo/qemu/include/hw/xen/xen_backend.h:4:0,
 from /home/leo/qemu/hw/block/xen_disk.c:39:
/home/leo/qemu/include/hw/xen/xen_common.h:198:18: error: conflicting types for 
‘ioservid_t’
 typedef uint16_t ioservid_t;
  ^
In file included from /usr/include/xenctrl.h:37:0,
 from /home/leo/qemu/include/hw/xen/xen_common.h:9,
 from /home/leo/qemu/include/hw/xen/xen_backend.h:4,
 from /home/leo/qemu/hw/block/xen_disk.c:39:
/usr/include/xen/xen.h:353:18: note: previous declaration of ‘ioservid_t’ was 
here
 typedef uint32_t ioservid_t;
  ^
In file included from /home/leo/qemu/include/hw/xen/xen_backend.h:4:0,
 from /home/leo/qemu/hw/block/xen_disk.c:39:
/home/leo/qemu/include/hw/xen/xen_common.h: In function 
‘xen_get_ioreq_server_info’:
/home/leo/qemu/include/hw/xen/xen_common.h:256:36: error: ‘HVM_PARAM_IOREQ_PFN’ 
undeclared (first use in this function)
 rc = xc_get_hvm_param(xc, dom, HVM_PARAM_IOREQ_PFN, &param);
^
/home/leo/qemu/include/hw/xen/xen_common.h:256:36: note: each undeclared 
identifier is reported only once for each function it appears in
In file included from /home/leo/qemu/include/hw/xen/xen_backend.h:4:0,
 from /home/leo/qemu/hw/block/xen_disk.c:39:
/home/leo/qemu/include/hw/xen/xen_common.h:264:36: error: 
‘HVM_PARAM_BUFIOREQ_PFN’ undeclared (first use in this function)
 rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_PFN, &param);
^
/home/leo/qemu/rules.mak:57: recipe for target 'hw/block/xen_disk.o' failed
make: *** [hw/block/xen_disk.o] Error 1

[Qemu-devel] [Bug 1305402] Re: libvirt fails to start VirtualMachines

2016-09-22 Thread Leo Arias
This has just happened to me. For some reason, all my machines had 
machine='pc-i440fx-wily'.
After an update in yakkety, they stopped working.

$ qemu-system-x86_64 -enable-kvm -machine help | grep wily

So I updated the machine xml to a supported machine as Charles
suggested, and they work again.

Here's the note for my future self about how to do the update:

$ virsh dumpxml $machine-name > /tmp/machine.xml
Edit the xml.
$ virsh define /tmp/machine.xml
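(In my case that meant changing machine='pc-i440fx-wily' in the <os><type>
element to one of the machine types listed by -machine help.)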

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1305402

Title:
  libvirt fails to start VirtualMachines

Status in QEMU:
  Invalid
Status in libvirt package in Ubuntu:
  Confirmed
Status in qemu package in Ubuntu:
  Confirmed

Bug description:
  I've created several kvm based machines using virtual machine manager.
  They have operated well over the last 4 days without issue. I did an
  apt-get upgrade, and qemu was in the list of updates.

  After upgrading, I am unable to start any of the provisioned virtual
  machines with the following error output

  virsh # start node2
  error: Failed to start domain node2
  error: internal error: process exited while connecting to monitor: 
qemu-system-x86_64: -machine trusty,accel=kvm,usb=off: Unsupported machine type
  Use -machine help to list supported machines!

  virsh # start node3
  error: Failed to start domain node3
  error: internal error: process exited while connecting to monitor: 
qemu-system-x86_64: -machine trusty,accel=kvm,usb=off: Unsupported machine type
  Use -machine help to list supported machines!

  $ dpkg -l | grep kvm
  ii  qemu-kvm  2.0.0~rc1+dfsg-0ubuntu3  amd64  QEMU Full virtualization on 
x86 hardware (transitional package)

  Log snippet from vm 'media' that was verified working, and fails to
  start after the upgrade.

  LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin 
QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name media -S -machine 
trusty,accel=kvm,usb=off -m 1548 -realtime mlock=off -smp 
1,sockets=1,cores=1,threads=1 -uuid 60b20f7b-6d20-bcb3-f4fc-808a9b2fe0d0 
-no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/media.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-boot menu=off,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 
-drive 
file=/var/lib/libvirt/images/media.img,if=none,id=drive-virtio-disk0,format=raw 
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive 
file=/home/charles/iso/ubuntu-desktop-12.04.4-amd64.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw
 -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev 
tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a0:69:d9,bus=pci.0,addr=0x3 
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-vnc 127.0.0.1:1 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
  char device redirected to /dev/pts/13 (label charserial0)
  qemu: terminating on signal 15 from pid 31688
  2014-04-10 03:36:54.593+0000: shutting down
  2014-04-10 03:59:25.487+0000: starting up
  LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin 
QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name media -S -machine 
trusty,accel=kvm,usb=off -m 1548 -realtime mlock=off -smp 
1,sockets=1,cores=1,threads=1 -uuid 60b20f7b-6d20-bcb3-f4fc-808a9b2fe0d0 
-no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/media.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-boot menu=off,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 
-drive 
file=/var/lib/libvirt/images/media.img,if=none,id=drive-virtio-disk0,format=raw 
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive 
file=/home/charles/iso/ubuntu-desktop-12.04.4-amd64.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw
 -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev 
tap,fd=24,id=hostnet0,vhost=on,vhostfd=25 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a0:69:d9,bus=pci.0,addr=0x3 
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
  qemu-system-x86_64: -machine trusty,accel=kvm,usb=off: Unsupported machine 
type
  Use -machine help to list supported machines!
  2014-04-10 03:59:25.696+0000: shutting down

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1305402/+subscriptions



Re: [Qemu-devel] [PATCH 0/5] 9pfs: local: fix metadata of mapped-file security mode

2017-05-08 Thread Leo Gaspard
Greg,

I just tested on 2.9.0 with the 5 patches applied, and it appears to
work on my setup, thanks!

Just a side note: .virtfs_metadata_root is set as u=rwx on the host file
system (the "ret = fchmod(map_fd, 0700);" line in patch 4, I guess), while
u=rw would be more appropriate, I think.
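(Concretely, that would presumably mean fchmod(map_fd, 0600) rather than
0700, since the metadata root is a regular file and needs no execute bit.)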

Thank you,
Leo


On 05/05/2017 04:36 PM, Greg Kurz wrote:
> This series fixes two issues in the local backend when using the mapped-file
> security mode:
> - allow chmod and chown to succeed on the virtfs root (patch 4)
> - completely hide the metadata files from the client (patch 5)
> 
> Patch 2 resolves '.' and '..' in paths, and patch 3 reworks the way we open
> files accordingly. They could be squashed together in a single patch (this
> was the case in earlier versions actually), but I decided to separate them
> for easier review.
> 
> Léo,
> 
> I'd appreciate if you could test this series (especially patch 4) on your
> setup.
> 
> Cheers.
> 
> --
> Greg
> 
> ---
> 
> Greg Kurz (5):
>   9pfs: check return value of v9fs_co_name_to_path()
>   9pfs: local: resolve special directories in paths
>   9pfs: local: simplify file opening
>   9pfs: local: metadata file for the VirtFS root
>   9pfs: local: forbid client access to metadata
> 
> 
>  hw/9pfs/9p-local.c |  164 
>  hw/9pfs/9p-util.c  |   26 +++-
>  hw/9pfs/9p.c   |   36 ---
>  3 files changed, 160 insertions(+), 66 deletions(-)
> 





Re: [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D

2018-05-21 Thread Duran, Leo
Babu,

If num_sharing_l3_cache() uses MAX_NODES_EPYC, then that function is
EPYC-specific.

An alternative would be to use a data member (e.g., max_nodes_per_socket) that
gets initialized (via another helper function) to MAX_NODES_EPYC.

Basically, the functions that return CPUID information ideally should *not*
use EPYC-specific macros, like MAX_NODES_EPYC.
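
For illustration, a minimal sketch of that alternative (CacheTopoInfo,
topology_init_epyc and num_sharing_l3_cache_generic are hypothetical names,
not from the patch; the node-count logic is simplified):

```
/* Hypothetical sketch: CPUID helpers read a per-CPU-model topology
 * struct instead of an EPYC-specific macro. */
typedef struct CacheTopoInfo {
    int max_nodes_per_socket;   /* 4 on EPYC (was MAX_NODES_EPYC) */
    int max_cores_in_node;      /* 8 on EPYC (was MAX_CORES_IN_NODE) */
} CacheTopoInfo;

/* EPYC-specific initializer; other models would install their own values. */
static void topology_init_epyc(CacheTopoInfo *topo)
{
    topo->max_nodes_per_socket = 4;
    topo->max_cores_in_node = 8;
}

/* Family-agnostic CPUID helper: consults only the data members. */
static int num_sharing_l3_cache_generic(const CacheTopoInfo *topo,
                                        int nr_cores)
{
    int nodes;

    /* Supported node counts are 1, 2, 4: grow by doubling. */
    for (nodes = 1; nodes < topo->max_nodes_per_socket; nodes *= 2) {
        if (nr_cores <= nodes * topo->max_cores_in_node) {
            break;
        }
    }
    return nodes;   /* caller spreads cores across CCXs as before */
}
```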

Leo.

> -Original Message-
> From: Moger, Babu
> Sent: Monday, May 21, 2018 7:41 PM
> To: m...@redhat.com; marcel.apfelb...@gmail.com; pbonz...@redhat.com;
> r...@twiddle.net; ehabk...@redhat.com; mtosa...@redhat.com
> Cc: qemu-devel@nongnu.org; k...@vger.kernel.org; Moger, Babu
> ; k...@tripleback.net; ge...@hostfission.com
> Subject: [PATCH v10 2/5] i386: Populate AMD Processor Cache Information
> for cpuid 0x8000001D
> 
> Add information for cpuid 0x8000001D leaf. Populate cache topology
> information for different cache types (Data Cache, Instruction Cache, L2 and
> L3) supported by the 0x8000001D leaf. Please refer to the Processor Programming
> Reference (PPR) for AMD Family 17h Model for more details.
> 
> Signed-off-by: Babu Moger 
> ---
>  target/i386/cpu.c | 103 ++
>  target/i386/kvm.c |  29 +--
>  2 files changed, 129 insertions(+), 3 deletions(-)
> 
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c index d9773b6..1dd060a
> 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -336,6 +336,85 @@ static void encode_cache_cpuid80000006(CPUCacheInfo *l2,
>  }
>  }
> 
> +/* Definitions used for building CPUID Leaf 0x8000001D and 0x8000001E */
> +/* Please refer AMD64 Architecture Programmer’s Manual Volume 3 */
> +#define MAX_CCX 2
> +#define MAX_CORES_IN_CCX 4
> +#define MAX_NODES_EPYC 4
> +#define MAX_CORES_IN_NODE 8
> +
> +/* Number of logical processors sharing L3 cache */
> +#define NUM_SHARING_CACHE(threads, num_sharing)   ((threads > 1) ? \
> + (((num_sharing - 1) * threads) + 1)  : \
> + (num_sharing - 1))
> +/*
> + * L3 Cache is shared between all the cores in a core complex.
> + * Maximum cores that can share L3 is 4.
> + */
> +static int num_sharing_l3_cache(int nr_cores)
> +{
> +int i, nodes = 1;
> +
> +/* Check if we can fit all the cores in one CCX */
> +if (nr_cores <= MAX_CORES_IN_CCX) {
> +return nr_cores;
> +}
> +/*
> + * Figure out the number of nodes(or dies) required to build
> + * this config. Max cores in a node is 8
> + */
> +for (i = nodes; i <= MAX_NODES_EPYC; i++) {
> +if (nr_cores <= (i * MAX_CORES_IN_NODE)) {
> +nodes = i;
> +break;
> +}
> +/* We support nodes 1, 2, 4 */
> +if (i == 3) {
> +continue;
> +}
> +}
> +/* Spread the cores across all the CCXs and return max cores in a ccx */
> +return (nr_cores / (nodes * MAX_CCX)) +
> +((nr_cores % (nodes * MAX_CCX)) ? 1 : 0);
> +}
> +
> +/* Encode cache info for CPUID[0x8000001D] */
> +static void encode_cache_cpuid8000001d(CPUCacheInfo *cache, CPUState *cs,
> +uint32_t *eax, uint32_t *ebx,
> +uint32_t *ecx, uint32_t *edx)
> +{
> +uint32_t num_share_l3;
> +assert(cache->size == cache->line_size * cache->associativity *
> +  cache->partitions * cache->sets);
> +
> +*eax = CACHE_TYPE(cache->type) | CACHE_LEVEL(cache->level) |
> +   (cache->self_init ? CACHE_SELF_INIT_LEVEL : 0);
> +
> +/* L3 is shared among multiple cores */
> +if (cache->level == 3) {
> +num_share_l3 = num_sharing_l3_cache(cs->nr_cores);
> +*eax |= (NUM_SHARING_CACHE(cs->nr_threads, num_share_l3) << 14);
> +} else {
> +*eax |= ((cs->nr_threads - 1) << 14);
> +}
> +
> +assert(cache->line_size > 0);
> +assert(cache->partitions > 0);
> +assert(cache->associativity > 0);
> +/* We don't implement fully-associative caches */
> +assert(cache->associativity < cache->sets);
> +*ebx = (cache->line_size - 1) |
> +   ((cache->partitions - 1) << 12) |
> +   ((cache->associativity - 1) << 22);
> +
> +assert(cache->sets > 0);
> +*ecx = cache->sets - 1;
> +
> +*edx = (cache->no_invd_sharing ? CACHE_NO_INVD_SHARING : 0) |
> +   (cache->inclusive ? CACHE_INCLUSIVE : 0) |
> +   (cache->complex_indexing ? CACHE_COMPLEX_IDX : 0);
> +}
> +
>  /*
>   * Definitions of the hardcoded cache entries we expose:

Re: Domain backup file explodes on s3fs

2020-04-13 Thread Leo Luan
Hi Eric and all,


When invoking "virsh backup-begin" to do a full backup using qcow2
driver to a new backup

target file that does not have a backing chain, is it safe to not zero
the unallocated

parts of the virtual disk?  Do we still depend on SEEK_DATA support in
this case to avoid

forcing zeros?


It looks like backup_run() in block/backup.c unsets the unallocated
parts of a copy bitmap

before starting the backup loop if s->sync_mode ==
MIRROR_SYNC_MODE_TOP. In a virsh backup-begin

full backup scenario, we observe that the mode is
MIRROR_SYNC_MODE_FULL, and the backup_loop()

function subsequently copies zeros for the entire virtual size,
including the unallocated parts

in the source qcow2 file.  Would it be safe to also unset the
unallocated parts in the copy

map when the sync_mode is MIRROR_SYNC_MODE_FULL if we know there is no
need to force zeros

because the target file is a new empty qcow2 file without a backing
file?  If so, maybe a

knob can be added to effect this behavior?


I guess the related code is changing in 5.0 and this issue may already
be adddressed.

Any updates/insights would be appreciated!


Thanks,

Leo


*From*: Eric Blake
*Subject*: Re: Domain backup file explodes on s3fs
*Date*: Tue, 7 Apr 2020 14:37:26 -0500
*User-agent*: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
Thunderbird/68.6.0
--

[adding libvirt list]

On 4/7/20 2:13 PM, Tim Haley wrote:

> Hi all,


> Have been playing with `virsh backup-begin` of late and think it's an
> excellent feature. I've noticed one behavior I'm not sure I understand.

It looks like https://bugzilla.redhat.com/show_bug.cgi?id=1814664 is a similar
description of the same problem: namely, if qemu is not able to determine
that the destination already reads as zero, then it forcefully zeroes the
destination of a backup job. We may want to copy the fact that qemu 5.0 is
adding 'qemu-img convert --target-is-zero' to add a similar knob to the QMP
commands that trigger disk copying (blockdev-backup, blockdev-mirror,
possibly others) as well as logic to avoid writing zeroes when the
destination is already treated as zero (whether by a probe, or by the knob
being set).

...


> If my /backups directory is just XFS, I get a backup file that looks like
> it is just the size of data blocks in use
>
> -rw------- 1 root  root  2769551360 Mar 19 16:56
> vda.2aa450cc-6d2e-11ea-8de0-52542e0d008a

For a local file, qemu is easily able to probe whether the destination starts
as all zeroes (thanks to lseek(SEEK_DATA));

> but if I write to an s3fs (object storage backend) the file blows up to
> the whole size of the disk
>
> -rw------- 1 root  root  8591507456 Mar 18 19:03
> vda.2aa450cc-6d2e-11ea-8de0-52542e0d008a

whereas for s3fs, it looks like qemu does not have access to a quick test
to learn if the image starts all zero (POSIX does not provide a quick way
for doing this on a generic block device, but if you are aware of an ioctl
or otherwise that qemu could use, that might be helpful). Or maybe the s3fs
really is random contents rather than all zero, in which case forcefully
writing zeroes is the only correct behavior.
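
For reference, a minimal sketch of the lseek(SEEK_DATA) probe mentioned
above (not qemu's actual code; Linux semantics assumed, error handling
simplified):

```
#define _GNU_SOURCE          /* for SEEK_DATA */
#include <errno.h>
#include <stdbool.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns true if the whole file currently reads as zeroes, i.e. the
 * first data byte per SEEK_DATA lies at or past EOF.  Filesystems
 * without SEEK_DATA support fail with EINVAL, and we conservatively
 * report "not all zero" in that case. */
static bool file_reads_as_zero(int fd)
{
    struct stat st;
    off_t first_data;

    if (fstat(fd, &st) < 0) {
        return false;
    }
    first_data = lseek(fd, 0, SEEK_DATA);
    if (first_data < 0) {
        return errno == ENXIO;  /* ENXIO: no data at all, fully sparse */
    }
    return first_data >= st.st_size;
}
```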

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org


Avoid copying unallocated clusters during full backup

2020-04-17 Thread Leo Luan
When doing a full backup from a single-layer qcow2 disk file to a new qcow2
file, the backup_run function does not unset the unallocated parts in the copy
bitmap.  The subsequent backup_loop call goes through these unallocated
clusters unnecessarily.  In the case when the target and source reside in
different file systems, an EXDEV error would cause zeroes to be actually
copied into the target, and that causes a target file size explosion to the
full virtual disk size.

This patch aims to unset the unallocated parts in the copy bitmap when it is
safe to do so, thereby avoiding unallocated clusters in the backup loop and
preventing significant performance or storage-efficiency impacts when running
full backup jobs.

Any insights or corrections?

diff --git a/block/backup.c b/block/backup.c
index cf62b1a38c..609d551b1e 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -139,6 +139,29 @@ static void backup_clean(Job *job)
 bdrv_backup_top_drop(s->backup_top);
 }

+static bool backup_ok_to_skip_unallocated(BackupBlockJob *s)
+{
+    /* Checks whether this backup job can avoid copying or dealing with
+       unallocated clusters in the backup loop and their associated
+       performance and storage efficiency impacts.  Check for the condition
+       when it's safe to skip copying unallocated clusters that allows the
+       corresponding bits in the copy bitmap to be unset.  The assumption
+       here is that it is ok to do so when we are doing a full backup,
+       the target file is a qcow2, and the source is single layer.
+       Do we need to add additional checks (so that it does not break
+       something) or add additional conditions to optimize additional use
+       cases?
+     */
+
+    if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
+        s->bcs->target->bs->drv != NULL &&
+        strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0 &&
+        s->bcs->source->bs->backing_file[0] == '\0')
+        return true;
+    else
+        return false;
+}
+
 void backup_do_checkpoint(BlockJob *job, Error **errp)
 {
 BackupBlockJob *backup_job = container_of(job, BackupBlockJob, common);
@@ -248,7 +271,7 @@ static int coroutine_fn backup_run(Job *job, Error **errp)

 backup_init_copy_bitmap(s);

-    if (s->sync_mode == MIRROR_SYNC_MODE_TOP) {
+    if (s->sync_mode == MIRROR_SYNC_MODE_TOP ||
+        backup_ok_to_skip_unallocated(s)) {
 int64_t offset = 0;
 int64_t count;


Re: Avoid copying unallocated clusters during full backup

2020-04-17 Thread Leo Luan
On Fri, Apr 17, 2020 at 1:11 PM John Snow  wrote:

>
>
> On 4/17/20 2:33 PM, Leo Luan wrote:
> > When doing a full backup from a single layer qcow2 disk file to a new
> > qcow2 file, the backup_run function does not unset unallocated parts in
> > the copy bit map.  The subsequent backup_loop call goes through these
> > unallocated clusters unnecessarily.  In the case when the target and
> > source reside in different file systems, an EXDEV error would cause
> > zeroes to be actually copied into the target and that causes a target
> > file size explosion to the full virtual disk size.
> >
>
> I think the idea, generally, is to leave the detection of unallocated
> portions to the format (qcow2) and the protocol (posix file) respectively.
>
> As far as I know, it is incorrect to assume that unallocated data
> can/will/should always be read as zeroes; so it may not be the case that
> it is "safe" to skip this data, because the target may or may not need
> explicit zeroing.
>

Thanks for pointing this out.  Would it be safe to skip unallocated
clusters if bdrv_unallocated_blocks_are_zero() returns true for both the
source and the target?
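
For concreteness, the shape of the check I have in mind (a sketch against
stand-in declarations, not a tested qemu patch):

```
#include <stdbool.h>

/* Stand-ins for qemu's types/API, just to show the shape of the check. */
typedef struct BlockDriverState BlockDriverState;
extern bool bdrv_unallocated_blocks_are_zero(BlockDriverState *bs);

/* Skip unallocated clusters only when both ends guarantee that
 * unallocated blocks read back as zero. */
static bool ok_to_skip_unallocated(BlockDriverState *src,
                                   BlockDriverState *tgt,
                                   bool is_full_sync)
{
    return is_full_sync &&
           bdrv_unallocated_blocks_are_zero(src) &&
           bdrv_unallocated_blocks_are_zero(tgt);
}
```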

> This patch aims to unset the unallocated parts in the copy bitmap when
> > it is safe to do so, thereby avoid dealing with unallocated clusters in
> > the backup loop to prevent significant performance or storage efficiency
> > impacts when running full backup jobs.
> >
> > Any insights or corrections?
> >
> > diff --git a/block/backup.c b/block/backup.c
> > index cf62b1a38c..609d551b1e 100644
> > --- a/block/backup.c
> > +++ b/block/backup.c
> > @@ -139,6 +139,29 @@ static void backup_clean(Job *job)
> >  bdrv_backup_top_drop(s->backup_top);
> >  }
> >
> > +static bool backup_ok_to_skip_unallocated(BackupBlockJob *s)
> > +{
> > +/* Checks whether this backup job can avoid copying or dealing with
> > +   unallocated clusters in the backup loop and their associated
> > +   performance and storage efficiency impacts. Check for the condition
> > +   when it's safe to skip copying unallocated clusters that allows the
> > +   corresponding bits in the copy bitmap to be unset.  The assumption
> > +   here is that it is ok to do so when we are doing a full backup,
> > +   the target file is a qcow2, and the source is single layer.
> > +   Do we need to add additional checks (so that it does not break
> > +   something) or add additional conditions to optimize additional use
> > +   cases?
> > + */
> > +
> > +if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
> > +   s->bcs->target->bs->drv != NULL &&
> > +   strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0 &&
> > +   s->bcs->source->bs->backing_file[0] == '\0')
>
> This isn't going to suffice upstream; the backup job can't be performing
> format introspection to determine behavior on the fly.
>
> I think what you're really after is something like
> bdrv_unallocated_blocks_are_zero().
>

Thanks for this pointer.


>
> > +   return true;
> > +else
> > +return false;
> > +}
> > +
> >  void backup_do_checkpoint(BlockJob *job, Error **errp)
> >  {
> >  BackupBlockJob *backup_job = container_of(job, BackupBlockJob,
> common);
> > @@ -248,7 +271,7 @@ static int coroutine_fn backup_run(Job *job, Error
> > **errp)
> >
> >  backup_init_copy_bitmap(s);
> >
> > -if (s->sync_mode == MIRROR_SYNC_MODE_TOP) {
> > +if (s->sync_mode == MIRROR_SYNC_MODE_TOP ||
>
> So the basic premise is that if you are copying a qcow2 file and the
> unallocated portions as defined by the qcow2 metadata are zero, it's
> safe to skip those, so you can treat it like SYNC_MODE_TOP.
>

In the MIRROR_SYNC_MODE_TOP case, the check for unallocated clusters does
not go all the way down to the base level.  So it would be incorrect to treat
MIRROR_SYNC_MODE_FULL the same as MIRROR_SYNC_MODE_TOP unless the source has
no backing file and is itself the base.  That's why I added a check for the
backing_file field of the source.  I guess the code could be extended with a
new flag to do the block-status check all the way into the base level for the
FULL mode?

I think you *also* have to know if the *source* needs those regions
> explicitly zeroed, and it's not always safe to just skip them at the
> manifest level.
>

Do you mean some operation changing the target into non-sparse?

>
> I thought there was code that handled this to some extent already, but I
> don't know. I think Vladimir has worked on it recently and can probably
> let you know where I am mistaken :)
>

Thanks for the reply!


> --js
>
> > backup_ok_to_skip_unallocated(s)) {
> >  int64_t offset = 0;
> >  int64_t count;
> >
>
> John Snow


Re: Avoid copying unallocated clusters during full backup

2020-04-17 Thread Leo Luan
On Fri, Apr 17, 2020 at 1:24 PM Eric Blake  wrote:

> On 4/17/20 3:11 PM, John Snow wrote:
>
> >> +
> >> +if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
> >> +   s->bcs->target->bs->drv != NULL &&
> >> +   strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5) == 0
> &&
> >> +   s->bcs->source->bs->backing_file[0] == '\0')
> >
> > This isn't going to suffice upstream; the backup job can't be performing
> > format introspection to determine behavior on the fly.
>
> Agreed.  The idea is right (we NEED to make backup operations smarter
> based on knowledge about both source and destination block status), but
> the implementation is not (a check for strncmp("qcow2") is not ideal).
>

I see/agree that using strncmp("qcow2") is not general enough for the
upstream.  Would changing it to bdrv_unallocated_blocks_are_zero() suffice?


> >
> > I think what you're really after is something like
> > bdrv_unallocated_blocks_are_zero().
>
> The fact that qemu-img already has a lot of optimizations makes me
> wonder what we can salvage from there into reusable code that both
> qemu-img and block backup can share, so that we're not reimplementing
> block status handling in multiple places.
>

A general fix reusing some existing code would be great.  When will it
appear in the upstream?  We are hoping to avoid needing to use a private
branch if possible.

>
> > So the basic premise is that if you are copying a qcow2 file and the
> > unallocated portions as defined by the qcow2 metadata are zero, it's
> > safe to skip those, so you can treat it like SYNC_MODE_TOP.
> >
> > I think you *also* have to know if the *source* needs those regions
> > explicitly zeroed, and it's not always safe to just skip them at the
> > manifest level.
> >
> > I thought there was code that handled this to some extent already, but I
> > don't know. I think Vladimir has worked on it recently and can probably
> > let you know where I am mistaken :)
>
> Yes, I'm hoping Vladimir (or his other buddies at Virtuozzo) can chime
> in.  Meanwhile, I've working on v2 of some patches that will improve
> qemu's ability to tell if a destination qcow2 file already reads as all
> zeroes, and we already have bdrv_block_status() for telling which
> portions of a source image already read as all zeroes (whether or not it
> is due to not being allocated, the goal here is that we should NOT have
> to copy anything that reads as zero on the source over to the
> destination if the destination already starts life as reading all zero).
>

Can the eventual/optimal solution allow unallocated clusters to be skipped
entirely in the backup loop and make the detection of allocated zeroes an
option, not forcing the backup thread to loop through a potentially huge
empty virtual disk?

>
> And if nothing else, qemu 5.0 just added 'qemu-img convert
> --target-is-zero' as a last-ditch means of telling qemu to assume the
> destination reads as all zeroes, even if it cannot quickly prove it; we
> probably want to add a similar knob into the QMP commands for initiating
> block backup, for the same reasons.
>

This seems a good way of assuring the status of the target file.

Thanks!

>
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.   +1-919-301-3226
> Virtualization:  qemu.org | libvirt.org
>
>


Re: Avoid copying unallocated clusters during full backup

2020-04-17 Thread Leo Luan
On Fri, Apr 17, 2020 at 5:34 PM John Snow  wrote:

>
>
> On 4/17/20 6:57 PM, Leo Luan wrote:
> > On Fri, Apr 17, 2020 at 1:24 PM Eric Blake <ebl...@redhat.com> wrote:
> >
> > On 4/17/20 3:11 PM, John Snow wrote:
> >
> > >> +
> > >> +if (s->sync_mode == MIRROR_SYNC_MODE_FULL &&
> > >> +   s->bcs->target->bs->drv != NULL &&
> > >> +   strncmp(s->bcs->target->bs->drv->format_name, "qcow2", 5)
> > == 0 &&
> > >> +   s->bcs->source->bs->backing_file[0] == '\0')
> > >
> > > This isn't going to suffice upstream; the backup job can't be
> > performing
> > > format introspection to determine behavior on the fly.
> >
> > Agreed.  The idea is right (we NEED to make backup operations smarter
> > based on knowledge about both source and destination block status),
> but
> > the implementation is not (a check for strncmp("qcow2") is not
> > ideal).
> >
> >
> > I see/agree that using strncmp("qcow2") is not general enough for the
> > upstream.  Would changing it to bdrv_unallocated_blocks_are_zero()
> suffice?
> >
>
> I don't know, to be really honest with you. Vladimir reworked the backup
> code recently and Virtuozzo et al have shown a very aggressive interest
> in optimizing the backup loop. I haven't really worked on that code
> since their rewrite.
>
> Dropping unallocated regions from the backup manifest is one strategy,
> but I think there will be cases where we won't be able to treat it like
> "TOP", but may still have unallocated regions we don't want to copy (We
> have a backing file which is itself unallocated.)
>
> I'm interested in a more general purpose mechanism for efficient
> copying. I think that instead of the backup job itself doing this in
> backup.c by populating the copy manifest, that it's also appropriate to
> try to copy every last block and have the backup loop implementation
> decide it doesn't actually need to copy that block.
>
> That way, the copy optimizations can be shared by any implementation
> that needs to do efficient copying, and we can avoid special format and
> graph-inspection code in the backup job main interface code.
>
> To be clear, I see these as identical amounts of work:
>
> - backup job runs a loop to inspect every cluster to see if it is
> allocated or not, and modifies its cluster backup manifest accordingly
>

This inspection can detect more than 1GB of unallocated (64KB) clusters per
loop iteration, and it's a shallower call path.

>
> - backup job loops through the entire block and calls a smart_copy()
> function that might degrade into a no-op if the right conditions are met
> (source is unallocated, explicit zeroes are not needed on the destination)
>

If I am not mistaken, the copy loop does one cluster per iteration using a
twice-deeper call path (trying to copy and eventually finding unallocated
clusters).  So with a 64KB cluster size, it's 2 * 1GB/64KB ~= 32 thousand
times less efficient with the CPU cycles for large sparse virtual disks.
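(To spell out the arithmetic: 1GB / 64KB = 16384 clusters, and doubling the
call depth gives 2 * 16384 ~= 32768, i.e. roughly 32 thousand.)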

>
> Either way, you're looping and interrogating the disk, but in one case
> the efficiencies go deeper than *just* the backup code.
>

I think stopping the inefficiency early can help minimize the CPU impact of
the backup job on the VM instance.


> I think Vladimir has put a lot of work into making the backup code
> highly optimized, so I would consult with him to find out where the best
> place to put new optimizations are, if any -- he'll know!
>

Yes, hope that he will chime in.

Thanks!

>
> --js
>
>
> >
> > >
> > > I think what you're really after is something like
> > > bdrv_unallocated_blocks_are_zero().
> >
> > The fact that qemu-img already has a lot of optimizations makes me
> > wonder what we can salvage from there into reusable code that both
> > qemu-img and block backup can share, so that we're not reimplementing
> > block status handling in multiple places.
> >
> >
> > A general fix reusing some existing code would be great.  When will it
> > appear in the upstream?  We are hoping to avoid needing to use a private
> > branch if possible.
> >
> >
> > > So the basic premise is that if you are copying a qcow2 file and
> the
> > > unallocated portions as defined by the qcow2 metadata are zero,
> it's
> > > safe to skip those, so you can treat it like SYNC_

[Qemu-devel] standalone C program "Hello World" on qemu-system-mipsel

2011-07-01 Thread Leo Chen.
Hi, all

I am trying to run a standalone C program "Hello World" on qemu-system-mipsel,
but it fails to print its message via the serial port.

What I have done now (step by step):
1. Successfully run a standalone C program "Hello World" on qemu-system-arm, by
following this guide:
http://balau82.wordpress.com/2010/02/14/simplest-bare-metal-program-for-arm/

2. Tried to run the same standalone C program on qemu-system-mipsel, but
it failed.
I used GDB to trace the program, and I can confirm the program itself is
working fine; it just fails to print the message via the serial port, which
is mapped to address 0x3f8.

I know the qemu MIPS system emulator -M "mipssim" has a PC-style
serial port (8250 UART).
And my code about operating the serial port is following the 8250 spec
(http://wiki.osdev.org/Serial_ports)

But now, I am stuck making the 8250 serial port work; any suggestions
or sample code will be appreciated.


Here is my compile/link/run commands, and the codes:

mips-linux-gnu-gcc -mips32 -EL -c startup.s -o obj/startup.o
mips-linux-gnu-gcc -mips32 -EL -static -Wall -Werror -g -msoft-float
-nostdlib -fno-exceptions -fno-builtin -nostartfiles -nodefaultlibs
-fno-stack-protector  -c test.c -o obj/test.o
mips-linux-gnu-ld -mips32 -EL -T test.ld obj/startup.o obj/test.o -o
bin/test.elf
mips-linux-gnu-objcopy -O binary bin/test.elf bin/test.bin

qemu-system-mipsel -M mipssim -nographic -kernel bin/test.elf
or
qemu-system-mipsel -M malta -nographic -kernel bin/test.elf


file: test.ld
---
ENTRY(_Reset)
SECTIONS
{
. = 0x80100400;
.startup . : { obj/startup.o(.text) }
.text : { *(.text) }
.data : { *(.data) }
.bss : { *(.bss) }
. = . + 0x1000; /* 4kB of stack memory */
stack_top = .;
}

file: startup.s
---
.text
.globl _Reset
_Reset:
lui $sp, %hi(stack_top)
addiu   $sp, $sp, %lo(stack_top)
lui $t9, %hi(c_entry)
addiu   $t9, %lo(c_entry)
jalr$t9
nop
hang:
b   hang

file: test.c
---
void c_entry()
{
#if 1
init_serial();
print_uart0("Hello world!\n");
#endif
#if 0
*(char *)0x3f8 = 'H';  //this also failed too
#endif
}

#define UART0_BASE 0x3f8  /* 8250 COM1 */
void init_serial() {
volatile char * addr;
addr = (volatile char*)(UART0_BASE + 1);
*addr = 0x00;  /* IER: disable all interrupts */
addr = (volatile char*)(UART0_BASE + 3);
*addr = 0x80;  /* LCR: set DLAB to program the baud divisor */
addr = (volatile char*)(UART0_BASE + 0);
*addr = 0x03;  /* DLL: divisor low byte, 3 => 38400 baud */
addr = (volatile char*)(UART0_BASE + 1);
*addr = 0x00;  /* DLM: divisor high byte */
addr = (volatile char*)(UART0_BASE + 3);
*addr = 0x03;  /* LCR: 8 data bits, no parity, 1 stop bit, DLAB off */
addr = (volatile char*)(UART0_BASE + 2);
*addr = 0xc7;  /* FCR: enable and clear FIFOs, 14-byte threshold */
addr = (volatile char*)(UART0_BASE + 4);
*addr = 0x0b;  /* MCR: assert DTR, RTS, OUT2 */
}
int is_transmit_empty()
{
volatile char *addr = (volatile char *)(UART0_BASE + 5);
return *addr & 0x20;
}
void write_serial(char a)
{
while (is_transmit_empty() == 0);
volatile char *addr = (volatile char*)UART0_BASE;
*addr = a;
}
void print_uart0(const char *s)
{
while(*s != '\0') { /* Loop until end of string */
write_serial( *s );
s++; /* Next char */
}
}



Re: [Qemu-devel] standalone C program "Hello World" on qemu-system-mipsel

2011-07-03 Thread Leo Chen.
Hi,  Andreas

Thanks for your reply.

I tried the -bios *.elf:
qemu-system-mipsel -M mipssim -nographic -bios bin/test.elf
The result is the same as with -kernel *.elf: the C program works, but the
serial port is still not working.

And I also tried the normal way: -kernel *.bin
mips-linux-gnu-objcopy -O binary bin/test.elf bin/test.bin
qemu-system-mipsel -M mipssim -nographic -kernel bin/test.bin
and get this failure message:
 qemu: could not load kernel 'bin/test.bin'

I know the C program is working because of the following:
step1:
add some useless code in my C entry:
void c_entry()
{
   init_serial();
   int a, b, c;  //useless code, for remote GDB trace
   a = 1;
   b = 2;
   c = a+b;
   print_uart0("Hello world!\n");
}
step2:
using mips-linux-gnu-gdb to connect to qemu like this:
   mips-linux-gnu-gdb
   target remote localhost:1234
Then, I traced the code step by step, and got the correct
result "c=3".

So, I think the problem is that I am driving the serial port in
the wrong way. I know there are linux kernels working
fine on qemu-system-mipsel; maybe I should read their
source to see how to get the 8250 serial port to work.
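
For my future reference, a minimal sketch of what I suspect the fix is,
assuming the mipssim 8250 really sits at physical address 0x3f8 (not yet
verified in the qemu sources): a bare-metal MIPS program has to reach the
device through KSEG1, the unmapped and uncached segment, rather than through
the raw physical address:

```
/* Sketch (assumption: mipssim maps the 8250 at physical 0x3f8).
 * On MIPS, physical address P is reachable uncached at KSEG1 + P. */
#define KSEG1_BASE  0xa0000000
#define UART0_PHYS  0x3f8
#define UART0_KSEG1 (KSEG1_BASE + UART0_PHYS)

static void putc_uart0(char c)
{
    volatile char *lsr = (volatile char *)(UART0_KSEG1 + 5); /* line status */
    volatile char *thr = (volatile char *)(UART0_KSEG1 + 0); /* tx holding  */
    while ((*lsr & 0x20) == 0) {
        /* wait until the transmit holding register is empty */
    }
    *thr = c;
}
```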


2011/7/2 Andreas Färber :
> Hi,
>
> Am 02.07.2011 um 08:13 schrieb Leo Chen.:
>
>> qemu-system-mipsel -M mipssim -nographic -kernel bin/test.elf
>> or
>> qemu-system-mipsel -M malta -nographic -kernel bin/test.elf
>
> The use of -kernel for a random ELF executable looks strange, even if it
> happens to work on arm. Have you tried -bios instead?
>
> Andreas
>

Leo Chen



[Qemu-devel] [Bug 1476800] [NEW] Instant runtime error (Host: Windows 8.1 VM: WinXP ISO)

2015-07-22 Thread Leo Nesfield
Public bug reported:

I have Qemu Manager on my Windows 8.1 laptop, with a WXP iso and a
blank disk image (from here
http://www.mediafire.com/download/rtec86bwwmee00s/Blank_Disk.zip ). As
soon as I try to open it I get a Runtime Error (
http://i.gyazo.com/bfebf7e1e7a670f8e52cc95c5923a67e.png ).

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1476800

Title:
  Instant runtime error (Host: Windows 8.1 VM: WinXP ISO)

Status in QEMU:
  New

Bug description:
  I have Qemu Manager on my Windows 8.1 laptop, with a WXP iso and a
  blank disk image (from here
  http://www.mediafire.com/download/rtec86bwwmee00s/Blank_Disk.zip ). As
  soon as I try to open it I get a Runtime Error (
  http://i.gyazo.com/bfebf7e1e7a670f8e52cc95c5923a67e.png ).

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1476800/+subscriptions



[Qemu-devel] Kernel Panic: qemu 0.8.1, kqemu 1.3.0pre7, linux 2.6.16.16, host: linux 2.6.16.16

2006-08-20 Thread Leo Antunes

Hey

I've been experiencing consistent kernel panics with the configuration
above and the -kernel-kqemu option. I solved it by simply ignoring the
BIOS's PCI table (kernel option pci=nobios).

Here's what the kernel spat out (if it helps):

general protection fault: 0060 [#1]
Modules linked in:
CPU:0
EIP:0060:[]Not tainted VLI
EFLAGS: 00010212   (2.6.16-2-686 #2)
EIP is at 0xb00fa04a
eax: 12370086   ebx: b030   ecx: 12378086   edx: 0cfc
esi:    edi: b02b6b74   ebp: b1058400   esp: b105ffa0
ds: 007b   es: 007b   ss: 0068
Process swapper (pid: 1, threadinfo=b105e000 task=b105da70)
Stack: <0>b0201cb1 0063     b1924000 0001
  b105ffc0 b105ffc0 b03088d4    b02ff2e8 b01002de
  b0100267  b0101005     
Call Trace:
[] pcibios_sort+0xa5/0x164
[] pcibios_init+0x65/0x68
[] init+0x77/0x1a5
[] init+0x0/0x1a5
[] kernel_thread_helper+0x5/0xb
Code: c8 66 ef 5a eb 1d 3c 0d 75 10 e8 1d 00 00 00 52 66 ba fc 0c 89 c8 ef 5a eb
09 b4 81 5f 5e fb 66 9d f9 cb 30 e4 5f 5e fb 66 9d f8  52 b8 00 00 80 00 66
89 d8 c1 e0 08 66 81 e7 ff 00 66 09 f8
<0>Kernel panic - not syncing: Attempted to kill init!



Hope this helps someone!

Cheers
--
Leo Antunes
[EMAIL PROTECTED] | [EMAIL PROTECTED] | [EMAIL PROTECTED]




[Qemu-devel] Patch for SDL framebuffer console problem

2007-04-24 Thread Kitambi Leo

I've been using qemu using the Linux framebuffer console, on a machine
that's a bit too slow to support X.  There have been more than a few snags,
but perhaps the most irritating one was that if qemu crashed (and received a
SIGSEGV), it would leave the console in an unusable state.  This would
require me to log into the machine via ssh and reboot it remotely.  I'm
submitting a patch that installs a signal handler to call sdl_cleanup(),
then exit.  This has alleviated my problem with the system being left in an
unusable state, however I suspect that the segfault problem lies somewhere
in SDL itself, and not in qemu.
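
The attached diff is the actual change; the idea, in rough sketch form
(illustrative names, not the literal patch):

```
#include <signal.h>
#include <unistd.h>

/* Restore the framebuffer console; in qemu this is sdl.c's cleanup
 * path (stubbed here for illustration). */
static void sdl_cleanup(void)
{
    /* e.g. SDL_Quit() and terminal state restoration */
}

/* On a fatal signal, put the console back before dying, so the machine
 * stays usable without an ssh-and-reboot round trip. */
static void fatal_signal_handler(int sig)
{
    (void)sig;
    sdl_cleanup();
    _exit(1);
}

static void install_fatal_handlers(void)
{
    struct sigaction sa;

    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sa.sa_handler = fatal_signal_handler;
    sigaction(SIGSEGV, &sa, NULL);
}
```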


sdl.c.diff
Description: Binary data


[Qemu-devel] Status Of These Issues In Qemu 0.13

2010-09-22 Thread Leo B
Hi, I would like to know the status of these issues in Qemu 0.13:

1. I saw that as recently as this year there was work being done on
completing Big Real Mode emulation, which should help old OSes such as
Windows 1.01 through Windows 98 to run. Is this support complete now, and do
these OSes run on Qemu 0.13?

2. Is there any work being done on implementing DirectX and/or OpenGL
support for legacy Windows OSes such as Windows 98? Other software is only
working on getting DirectX and OpenGL on newer OSes such as Windows XP
through Windows 7; if QEMU can implement this it will set it apart from the
others.

3. Will CUDA or paravirtualization help in getting 3D-accelerated graphics
such as OpenGL or Direct3D running in Qemu?

4. Is any work being done on getting M68K or PPC emulation working so I can
run Mac OS through Mac OS X on Qemu? I know that there was work being done
by somebody to get MacOS running on Qemu; are we any closer now?

Thanks in advance to anyone who replies


[Qemu-devel] M68K Or PPC Status Update to Run Mac OS in Qemu

2009-12-20 Thread Leo B
A while back I know someone was working on PPC support in Qemu, to be able to
run Mac OS 9 through Mac OS X in Qemu on x86 PCs; I'm not sure about M68K
support, though. So, is anybody still working on M68K or PPC support, and how
far is it from being included in a stable Qemu release? I notice PowerPC
support still says "Testing" on the status page; how far is it from being
complete?


Thanks in advance for replying


Re: [Qemu-devel] [PATCH v2 0/4] 9pfs: local: fix metadata of mapped-file security mode

2017-05-23 Thread Leo Gaspard
On 05/23/2017 04:32 PM, Greg Kurz wrote:
> v2: - posted patch for CVE-2017-7493 separately
> - other changes available in each patch changelog
> 
> Leo,
> 
> If you find time to test this series, I'll gladly add your Tested-by: to
> it before merging.

Just tested with a base of 2.9.0 with patches [1] [2] (from my
distribution), [3] (required to apply cleanly) and this patchset.

Things appear to work as expected, and .virtfs_metadata{,_root} appear
to be neither readable nor writable by any user.

That said, one thing still bothering me with the fix in [3] is that it
still "leaks" the host's uid/gid to the guest when a corresponding file
in .virtfs_metadata is not present (while I'd have expected it to appear
as root:root in the guest), but that's a separate issue, and I guess
backward compatibility prevents fixing it.

Thanks for these patches!
Leo


[1]
https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/virtualization/qemu/force-uid0-on-9p.patch

[2]
https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/virtualization/qemu/no-etc-install.patch

[3] https://lists.gnu.org/archive/html/qemu-devel/2017-05/msg03663.html





Re: [Qemu-devel] [PATCH v2 4/4] 9pfs: local: metadata file for the VirtFS root

2017-05-23 Thread Leo Gaspard
On 05/23/2017 07:13 PM, Eric Blake wrote:
> We have to block VIRTFS_META_DIR at any depth in the hierarchy, but
> can/should we change the blocking of VIRTFS_META_ROOT_FILE to only
> happen at the root directory, rather than at all directories?  On the
> other hand, if you can simultaneously map /path/to/a for one mount
> point, and /path/to/a/b for another, then the root file for B is visible
> at a lower depth than the root file for A, and blocking the metafile
> name everywhere means that the mount A can't influence the behavior on
> the mount for B.

If you take this kind of vulnerability into account, then you also
have to consider a mix of mapped-file and mapped-attr mounts, or even
worse a proxy with a mapped-file mount (which I think is currently
vulnerable to this threat if the "proxy" path points above the
"local,security_model=mapped-file" path, as the check is done in
"local_*" functions, which I guess are not used for proxy-type virtfs).

I'm clearly not saying it's an invalid attack (there is no explicit
documentation stating it's insecure to "nest" virtfs mounts), just
saying it's a much larger attack surface than one internal to virtfs
mapped-file only. Then the question arises of what it is reasonably
possible to forbid to the user, and that's not one I could answer.

Cheers & HTH,
Leo





Re: [Qemu-devel] [PATCH v2 0/4] 9pfs: local: fix metadata of mapped-file security mode

2017-05-24 Thread Leo Gaspard
On 05/24/2017 10:54 AM, Greg Kurz wrote:
> On Wed, 24 May 2017 00:59:29 +0200
> Leo Gaspard  wrote:
> 
>> On 05/23/2017 04:32 PM, Greg Kurz wrote:
>>> v2: - posted patch for CVE-2017-7493 separately
>>> - other changes available in each patch changelog
>>>
>>> Leo,
>>>
>>> If you find time to test this series, I'll gladly add your Tested-by: to
>>> it before merging.  
>>
>> Just tested with a base of 2.9.0 with patches [1] [2] (from my
>> distribution), [3] (required to apply cleanly) and this patchset.
>>
>> Things appear to work as expected, and .virtfs_metadata{,_root} appear
>> to be neither readable nor writable by any user.
>>
> 
> Shall I add your Tested-by: to the patch then ?

Hmm, I can't find the definition of Tested-by: on [1], but if it means
"tested it by hand without maybe trying all possible edge cases" then I
guess you can add it :)

>> That said, one thing still bothering me with the fix in [3] is that it
>> still "leaks" the host's uid/gid to the guest when a corresponding file
>> in .virtfs_metadata is not present (while I'd have expected it to appear
>> as root:root in the guest), but that's a separate issue, and I guess
>> retro-compatibility prevents any fixing it.
>>
> 
> Heh, I had a tentative patch to create root:root credentials and 0700 mode
> bits by default... but this could indeed break some setups, so I decided
> not to post it.

Hmm, maybe adding an option to security_model=mapped-* that allows
defaulting to root:root 0700 would keep backward compatibility, and in a
few versions the default value of the parameter could be swapped? As the
'dscript=no' of -netdev type=tap appears to have disappeared -- and
broke my scripts -- in the switch to 2.9.0, I guess such a change would
be allowed by the backward-compatibility policy of qemu?

Anyway, it's fun to see you had thought of that too!

Cheers,
Leo


[1] http://wiki.qemu.org/Contribute/SubmitAPatch





[Qemu-devel] [Bug 1672365] [NEW] nested 9pfs read fail

2017-03-13 Thread Leo Gaspard
Public bug reported:

tl;dr: A virtfs read fails. The init being on this virtfs (mounted by
the initrd), the linux kernel guest is unable to boot, and kernel
panics. The fact that qemu still takes 100%cpu after the kernel panic
makes me think it's a qemu bug.

Here is the setup (some hashes replaced with "..."):
 * A (NixOS) host system, with /nix/store as a btrfs on lvm on cryptsetup
 * Running a qemu-kvm NixOS guest, with /nix/.ro-store as a virtfs mapping to 
host /nix/store:
```
exec /nix/store/...-qemu-x86-only-for-vm-tests-2.8.0/bin/qemu-kvm \
-name test \
-m 8192 \
-cpu kvm64 \
-net nic,vlan=0,model=virtio -net user,vlan=0 \
-virtfs local,path=/nix/store,security_model=none,mount_tag=store \
-virtfs local,path=/tmp/nix-vm/xchg,security_model=none,mount_tag=xchg \
-virtfs 
local,path=/tmp/nix-vm/xchg,security_model=none,mount_tag=shared \
-drive 
index=0,id=drive1,file=/home/ekleog/nixos/test.qcow2,if=virtio,cache=writeback,werror=report
 \
-kernel /nix/store/...-nixos-system-test-17.09.git.deee8da/kernel \
-initrd /nix/store/...-nixos-system-test-17.09.git.deee8da/initrd \
-append "$(cat 
/nix/store/...-nixos-system-test-17.09.git.deee8da/kernel-params) 
init=/nix/store/...-nixos-system-test-17.09.git.deee8da/init 
regInfo=/nix/store/...-reginfo" \
-vga std -usbdevice tablet
```
 * With /nix/store as an overlayfs between /nix/.ro-store and /nix/.rw-store
 * Running a qemu-kvm NixOS guest, with /nix/.ro-store as a virtfs mapping to 
host /nix/store/...-vm-nginx-store:
```
/nix/store/...-qemu-2.8.0/bin/qemu-kvm \
  -name nginx -m 128 -smp 2 -cpu kvm64 \
  -nographic -serial 
unix:"/var/lib/vm/consoles/nginx/socket.unix",server,nowait \
  -drive file="/var/lib/vm/images/nginx.img",if=virtio,media=disk \
  -virtfs 
local,path="/nix/store/...-vm-nginx-store",security_model=none,mount_tag=store \
  -netdev type=tap,id=net0,ifname=vm-nginx,script=no,dscript=no -device 
virtio-net-pci,netdev=net0,mac=56:00:00:00:00:01 \
  -kernel /nix/store/...-nixos-system-nginx-17.09.git.deee8da/kernel \
  -initrd /nix/store/...-nixos-system-nginx-17.09.git.deee8da/initrd \
  -append "$(cat 
/nix/store/...-nixos-system-nginx-17.09.git.deee8da/kernel-params) 
init=/nix/store/...-nixos-system-nginx-17.09.git.deee8da/init console=ttyS0 
boot.shell_on_fail" \
  -virtfs 
local,path="/var/lib/vm/persist/nginx/home",security_model=mapped-xattr,mount_tag="shared1"
 \
  -virtfs local,path="/var/lib",security_model=mapped-xattr,mount_tag="shared2" 
\
  -virtfs local,path="/tmp",security_model=mapped-xattr,mount_tag="shared3"
```
 * With /nix/store as an overlayfs between /nix/.ro-store and /nix/.rw-store
 * With init in /nix/store

What happens is that at boot time the inner VM doesn't manage to read
the init script after the initrd has mounted the 9pfs and overlayfs.

What makes me think it's a qemu bug is that htop in the outer VM shows
the inner VM's qemu using 100% cpu, despite the fact that the kernel it's
running is in a kernel panic, and thus doing nothing.

What do you think about this?

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1672365

Title:
  nested 9pfs read fail

Status in QEMU:
  New


[Qemu-devel] [Bug 1672365] Re: nested 9pfs read fail

2017-03-13 Thread Leo Gaspard
Oh, I forgot to mention: it first worked for some time. Then, in the
middle of a shell session running over a screen on
/var/lib/vm/consoles/nginx/screen in the outer VM (socat-linked to
/var/lib/vm/consoles/nginx/socket.unix to provide a predictable pty
link), the 9pfs stopped returning any data. The problem did not go away
after a reboot of the inner VM, which from then on no longer managed to
boot at all.

I was unfortunately unable to identify exactly which operation made it
stop working, but my best guess, without being certain of it, is zsh's
full-path autocompletion across paths that include directories with
~700 entries.
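
In case someone wants to try reproducing it, the access pattern I suspect
boils down to a burst of stat() calls over a large 9p-backed directory,
roughly like this (a hypothetical reconstruction, not a confirmed reproducer):
```
# inside the inner VM, against the 9p-mounted read-only store
cd /nix/.ro-store
stat -- * > /dev/null   # ~700 entries stat'ed in one burst, as zsh completion would do
```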

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1672365

Title:
  nested 9pfs read fail

Status in QEMU:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1672365/+subscriptions



[Qemu-devel] [Bug 1672365] Re: nested 9pfs read fail

2017-03-17 Thread Leo Gaspard
Hmm, actually it looks like a panicked kernel always takes 100% CPU (I
just got another, unrelated panic doing the same), so this is not
necessarily a qemu bug; it could come from anywhere in the stack. I'm
marking the bug as invalid as a consequence.
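
(For anyone hitting the same symptom later: a less ambiguous check than
watching htop on the host is the inner VM's qemu monitor, reachable with
Ctrl-a c under -nographic; something along these lines:)
```
(qemu) info status      # still reports "VM status: running" for a panicked guest
(qemu) info registers   # if RIP barely moves between invocations, the guest is just spinning
```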

** Changed in: qemu
   Status: New => Invalid

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1672365

Title:
  nested 9pfs read fail

Status in QEMU:
  Invalid

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1672365/+subscriptions



[Qemu-devel] [Bug 1673957] [NEW] virtfs: mapped-xattr on mount point

2017-03-18 Thread Leo Gaspard
Public bug reported:

With
  -virtfs local,path="/tmp",security_model=mapped-xattr,mount_tag="shared2"
in the qemu command line,
  shared2 on /mnt/testbis type 9p (rw,sync,dirsync,relatime,trans=virtio,version=9p2000.L,msize=262144)
in the guest mount points, and
  tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
in the host mount points (with CONFIG_TMPFS_XATTR=y according to zgrep
/proc/config.gz), and with qemu running as user "vm-test", trying to
"touch a" in /mnt/testbis in the VM fails with "Operation not
supported". In addition, none of the files or directories actually
present in the host's /tmp can be seen in the guest's /mnt/testbis.

When replacing "/tmp" with "/tmp/aaa" on the host, with /tmp/aaa owned
by root:root and qemu still running as vm-test, running "ls" in the
guest's /mnt/testbis fails with the weird "ls: reading directory '.':
Cannot allocate memory", even though the directory is empty.

After a "chown vm-test /tmp/aaa", the guest can list the files (despite
the permissions already allowing it to do so before), but still not
write new files: "cannot touch 'b': Operation not supported".

Do you have a pointer as to what is happening?
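
(One hypothesis, assuming mapped-xattr stores its uid/gid/mode metadata in
user.virtfs.* extended attributes on the host: the backing filesystem must
then accept user.* xattrs, and as far as I can tell the tmpfs of this kernel
generation only supports the trusted.* and security.* namespaces even with
CONFIG_TMPFS_XATTR=y. That is easy to probe on the host; file names here are
illustrative:)
```
# on the host, as the user running qemu
touch /tmp/xattr-probe
setfattr -n user.virtfs.uid -v 1000 /tmp/xattr-probe   # "Operation not supported" would confirm
getfattr -d -m - /tmp/xattr-probe                      # dump whatever xattrs did stick
```
If that is not it, an strace of the qemu process (strace -f -e
trace=setxattr,lsetxattr,lgetxattr -p PID) while touching a file in the guest
should at least show which host-side call returns EOPNOTSUPP.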

PS: the complete setup runs all of this inside a qemu VM that I use for
testing; I guess it shouldn't matter, but I'm mentioning it just in case.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1673957

Title:
  virtfs: mapped-xattr on mount point

Status in QEMU:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1673957/+subscriptions