[Bug 1657010] Re: RFE: Please implement -cpu best or a CPU fallback option

2020-01-10 Thread Richard Jones
** Changed in: qemu
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1657010

Title:
  RFE: Please implement -cpu best or a CPU fallback option

Status in QEMU:
  Fix Released

Bug description:
  QEMU should implement a -cpu best option or some other way to make
  this work:

  qemu -M pc,accel=kvm:tcg -cpu best

  qemu -M pc,accel=kvm:tcg -cpu host:qemu64

  See also:

  https://bugzilla.redhat.com/show_bug.cgi?id=1277744#c6
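
  As an illustration of the requested semantics only, a hypothetical
  launcher-side fallback might look like the sketch below: use -cpu host
  when KVM is usable, otherwise fall back to qemu64 under TCG.  Probing
  /dev/kvm is just one possible detection method and is an assumption of
  this sketch, not anything qemu itself does today.

  /* Hypothetical fallback sketch, not qemu code. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/kvm", O_RDWR);          /* is KVM usable here? */
      const char *cpu = (fd >= 0) ? "host" : "qemu64";

      if (fd >= 0)
          close(fd);
      /* Print the command line a wrapper would run. */
      printf("qemu -M pc,accel=kvm:tcg -cpu %s\n", cpu);
      return 0;
  }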

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1657010/+subscriptions



[Qemu-devel] [Bug 1814381] Re: qemu can't resolve ::1 when no network is available

2019-02-02 Thread Richard Jones
The logic in util/qemu-sockets.c is very complicated, containing workarounds
for all sorts of broken or obsolete getaddrinfo (GAI) implementations, so it's
hard to tell what's going on there.
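
To isolate the resolution step outside qemu, here is a minimal sketch that
calls getaddrinfo() directly.  The AI_ADDRCONFIG hint flag is an assumption
about what the qemu code path may be setting (not confirmed here), but with
that flag set and no non-loopback IPv6 address configured, glibc can refuse
to resolve ::1 with exactly this kind of "address family" error.

/* Minimal standalone test, assuming an AI_ADDRCONFIG-style hint. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_ADDRCONFIG;   /* comment out to compare behaviour */

    int rc = getaddrinfo("::1", "1234", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }
    printf("resolved ::1 ok\n");
    freeaddrinfo(res);
    return 0;
}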

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1814381

Title:
  qemu can't resolve ::1 when no network is available

Status in QEMU:
  New

Bug description:
  I'm not sure if this is a qemu thing or a getaddrinfo/glibc thing, or
  even just something about my laptop.  However we have a test failure
  in nbdkit which only occurs when my laptop is not connected to wifi or
  other networking.  It boils down to:

    $ qemu-img info --image-opts "file.driver=nbd,file.host=::1,file.port=1234"
    qemu-img: Could not open 'file.driver=nbd,file.host=::1,file.port=1234': 
address resolution failed for ::1:1234: Address family for hostname not 
supported

  In a successful case it should connect to a local NBD server on port
  1234, but if you don't have that you will see:

    qemu-img: Could not open
  'file.driver=nbd,file.host=::1,file.port=1234': Failed to connect
  socket: Connection refused

  I can ‘ping6 ::1’ fine.

  It also works if I replace ‘::1’ with ‘localhost6’.

  My /etc/hosts contains:

  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1814381/+subscriptions



[Qemu-devel] [Bug 1814381] Re: qemu can't resolve ::1 when no network is available

2019-02-02 Thread Richard Jones
ping6 output when the network is not connected:

$ ping6 ::1
PING ::1(::1) 56 data bytes
64 bytes from ::1: icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from ::1: icmp_seq=2 ttl=64 time=0.092 ms
64 bytes from ::1: icmp_seq=3 ttl=64 time=0.089 ms
^C
--- ::1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 55ms
rtt min/avg/max/mdev = 0.082/0.087/0.092/0.011 ms

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1814381

Title:
  qemu can't resolve ::1 when no network is available

Status in QEMU:
  New

Bug description:
  I'm not sure if this is a qemu thing or a getaddrinfo/glibc thing, or
  even just something about my laptop.  However we have a test failure
  in nbdkit which only occurs when my laptop is not connected to wifi or
  other networking.  It boils down to:

    $ qemu-img info --image-opts "file.driver=nbd,file.host=::1,file.port=1234"
    qemu-img: Could not open 'file.driver=nbd,file.host=::1,file.port=1234': 
address resolution failed for ::1:1234: Address family for hostname not 
supported

  In a successful case it should connect to a local NBD server on port
  1234, but if you don't have that you will see:

    qemu-img: Could not open
  'file.driver=nbd,file.host=::1,file.port=1234': Failed to connect
  socket: Connection refused

  I can ‘ping6 ::1’ fine.

  It also works if I replace ‘::1’ with ‘localhost6’.

  My /etc/hosts contains:

  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1814381/+subscriptions



[Qemu-devel] [Bug 1814381] Re: qemu can't resolve ::1 when no network is available

2019-02-02 Thread Richard Jones
ip output when the network is not connected:

$ ip a show scope host
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enp0s31f6:  mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether e8:6a:64:5d:2c:66 brd ff:ff:ff:ff:ff:ff
3: wlp61s0:  mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 1e:2b:b1:0c:99:ef brd ff:ff:ff:ff:ff:ff
5: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:72:04:db brd ff:ff:ff:ff:ff:ff


** Description changed:

  I'm not sure if this is a qemu thing or a getaddrinfo/glibc thing, or
  even just something about my laptop.  However we have a test failure
  in nbdkit which only occurs when my laptop is not connected to wifi or
  other networking.  It boils down to:
  
-   $ qemu-img info --image-opts "file.driver=nbd,file.host=::1,file.port=1234"
-   qemu-img: Could not open 'file.driver=nbd,file.host=::1,file.port=1234': 
addre
- ss resolution failed for ::1:1234: Address family for hostname not supported
+   $ qemu-img info --image-opts "file.driver=nbd,file.host=::1,file.port=1234"
+   qemu-img: Could not open 'file.driver=nbd,file.host=::1,file.port=1234': 
address resolution failed for ::1:1234: Address family for hostname not 
supported
  
  In a successful case it should connect to a local NBD server on port
  1234, but if you don't have that you will see:
  
-   qemu-img: Could not open 'file.driver=nbd,file.host=::1,file.port=1234': 
Faile
- d to connect socket: Connection refused
+   qemu-img: Could not open
+ 'file.driver=nbd,file.host=::1,file.port=1234': Failed to connect
+ socket: Connection refused
  
  I can ‘ping6 ::1’ fine.
  
  It also works if I replace ‘::1’ with ‘localhost6’.
  
  My /etc/hosts contains:
  
  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1814381

Title:
  qemu can't resolve ::1 when no network is available

Status in QEMU:
  New

Bug description:
  I'm not sure if this is a qemu thing or a getaddrinfo/glibc thing, or
  even just something about my laptop.  However we have a test failure
  in nbdkit which only occurs when my laptop is not connected to wifi or
  other networking.  It boils down to:

    $ qemu-img info --image-opts "file.driver=nbd,file.host=::1,file.port=1234"
    qemu-img: Could not open 'file.driver=nbd,file.host=::1,file.port=1234': 
address resolution failed for ::1:1234: Address family for hostname not 
supported

  In a successful case it should connect to a local NBD server on port
  1234, but if you don't have that you will see:

    qemu-img: Could not open
  'file.driver=nbd,file.host=::1,file.port=1234': Failed to connect
  socket: Connection refused

  I can ‘ping6 ::1’ fine.

  It also works if I replace ‘::1’ with ‘localhost6’.

  My /etc/hosts contains:

  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1814381/+subscriptions



[Qemu-devel] [Bug 1814381] [NEW] qemu can't resolve ::1 when no network is available

2019-02-02 Thread Richard Jones
Public bug reported:

I'm not sure if this is a qemu thing or a getaddrinfo/glibc thing, or
even just something about my laptop.  However we have a test failure
in nbdkit which only occurs when my laptop is not connected to wifi or
other networking.  It boils down to:

  $ qemu-img info --image-opts "file.driver=nbd,file.host=::1,file.port=1234"
  qemu-img: Could not open 'file.driver=nbd,file.host=::1,file.port=1234': 
address resolution failed for ::1:1234: Address family for hostname not 
supported

In a successful case it should connect to a local NBD server on port
1234, but if you don't have that you will see:

  qemu-img: Could not open
'file.driver=nbd,file.host=::1,file.port=1234': Failed to connect
socket: Connection refused

I can ‘ping6 ::1’ fine.

It also works if I replace ‘::1’ with ‘localhost6’.

My /etc/hosts contains:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1814381

Title:
  qemu can't resolve ::1 when no network is available

Status in QEMU:
  New

Bug description:
  I'm not sure if this is a qemu thing or a getaddrinfo/glibc thing, or
  even just something about my laptop.  However we have a test failure
  in nbdkit which only occurs when my laptop is not connected to wifi or
  other networking.  It boils down to:

    $ qemu-img info --image-opts "file.driver=nbd,file.host=::1,file.port=1234"
    qemu-img: Could not open 'file.driver=nbd,file.host=::1,file.port=1234': 
address resolution failed for ::1:1234: Address family for hostname not 
supported

  In a successful case it should connect to a local NBD server on port
  1234, but if you don't have that you will see:

    qemu-img: Could not open
  'file.driver=nbd,file.host=::1,file.port=1234': Failed to connect
  socket: Connection refused

  I can ‘ping6 ::1’ fine.

  It also works if I replace ‘::1’ with ‘localhost6’.

  My /etc/hosts contains:

  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1814381/+subscriptions



[Qemu-devel] [Bug 1804323] Re: qemu segfaults in virtio-scsi driver if underlying device returns -EIO

2018-11-21 Thread Richard Jones
Kevin suggested this change, which works for me:

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 6eb258d3f3..0e9027c8f3 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -482,7 +482,7 @@ static bool scsi_handle_rw_error(SCSIDiskReq *r, int error, bool acct_failed)
     if (action == BLOCK_ERROR_ACTION_STOP) {
         scsi_req_retry(&r->req);
     }
-    return false;
+    return true;
 }
 
 static void scsi_write_complete_noio(SCSIDiskReq *r, int ret)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1804323

Title:
  qemu segfaults in virtio-scsi driver if underlying device returns -EIO

Status in QEMU:
  New

Bug description:
  Reported downstream in Fedora:
  https://bugzilla.redhat.com/show_bug.cgi?id=1650975

  Using qemu from git and reasonably recent nbdkit, this command injects -EIO
  errors into the block device which virtio-scsi is reading from:

  $ nbdkit --filter=error memory size=64M error-rate=100% \
  --run 'x86_64-softmmu/qemu-system-x86_64 -device virtio-scsi,id=scsi 
-drive file=$nbd,format=raw,id=hd0,if=none -device scsi-hd,drive=hd0'
  nbdkit: memory[1]: error: injecting EIO error into pread
  nbdkit: memory[1]: error: injecting EIO error into pread
  qemu-system-x86_64: hw/scsi/scsi-bus.c:1374: scsi_req_complete: Assertion 
`req->status == -1' failed.

  The stack trace is:

  Thread 5 (Thread 0x7f33e1f8b700 (LWP 10474)):
  #0  0x7f33fe0bf371 in __GI___poll (fds=0x559b07199490, nfds=1, timeout=-1)
  at ../sysdeps/unix/sysv/linux/poll.c:29
  #1  0x7f34061df5e6 in  () at /lib64/libglib-2.0.so.0
  #2  0x7f34061df710 in g_main_context_iteration ()
  at /lib64/libglib-2.0.so.0
  #3  0x7f34061df761 in  () at /lib64/libglib-2.0.so.0
  #4  0x7f34062086ea in  () at /lib64/libglib-2.0.so.0
  #5  0x7f33fe19b58e in start_thread (arg=)
  at pthread_create.c:486
  #6  0x7f33fe0ca593 in clone ()
  at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

  Thread 4 (Thread 0x7f33e3fff700 (LWP 10473)):
  #0  0x7f33fe1a4a8d in __lll_lock_wait ()
  at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
  #1  0x7f33fe19ddf8 in __GI___pthread_mutex_lock 
(mutex=mutex@entry=0x559b054697a0 ) at 
../nptl/pthread_mutex_lock.c:78
  #2  0x559b04f6b103 in qemu_mutex_lock_impl (mutex=0x559b054697a0 
, file=0x559b04f87041 "/home/rjones/d/qemu/exec.c", 
line=3197)
  at util/qemu-thread-posix.c:66
  #3  0x559b04b722ee in qemu_mutex_lock_iothread_impl 
(file=file@entry=0x559b04f87041 "/home/rjones/d/qemu/exec.c", 
line=line@entry=3197)
  at /home/rjones/d/qemu/cpus.c:1845
  #4  0x559b04b31859 in prepare_mmio_access (mr=, 
mr=) at /home/rjones/d/qemu/exec.c:3197
  #5  0x559b04b381d4 in address_space_ldub (as=, 
addr=, attrs=..., result=result@entry=0x0)
  at /home/rjones/d/qemu/memory_ldst.inc.c:188
  #6  0x559b04c61cd0 in helper_inb (env=, port=) at /home/rjones/d/qemu/target/i386/cpu.h:1846
  #7  0x7f33e889dc3e in code_gen_buffer ()
  #8  0x559b04bb3b87 in cpu_tb_exec (itb=, 
cpu=0x7f33e8876100 ) at 
/home/rjones/d/qemu/accel/tcg/cpu-exec.c:171
  #9  0x559b04bb3b87 in cpu_loop_exec_tb (tb_exit=, 
last_tb=, tb=, cpu=0x7f33e8876100 
) at /home/rjones/d/qemu/accel/tcg/cpu-exec.c:615
  #10 0x559b04bb3b87 in cpu_exec (cpu=cpu@entry=0x559b05db57a0)
  at /home/rjones/d/qemu/accel/tcg/cpu-exec.c:725
  #11 0x559b04b7088f in tcg_cpu_exec (cpu=0x559b05db57a0)
  at /home/rjones/d/qemu/cpus.c:1425
  #12 0x559b04b72c03 in qemu_tcg_cpu_thread_fn (arg=0x559b05db57a0)
  at /home/rjones/d/qemu/cpus.c:1729
  #13 0x559b04b72c03 in qemu_tcg_cpu_thread_fn 
(arg=arg@entry=0x559b05db57a0)
  at /home/rjones/d/qemu/cpus.c:1703
  #14 0x559b04f6afba in qemu_thread_start (args=)
  at util/qemu-thread-posix.c:498
  #15 0x7f33fe19b58e in start_thread (arg=)
  at pthread_create.c:486
  #16 0x7f33fe0ca593 in clone ()
  at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

  Thread 3 (Thread 0x7f33e178a700 (LWP 10475)):
  #0  0x7f33fe0bf371 in __GI___poll (fds=0x559b071aa760, nfds=2, timeout=-1)
  at ../sysdeps/unix/sysv/linux/poll.c:29
  #1  0x7f34061df5e6 in  () at /lib64/libglib-2.0.so.0
  #2  0x7f34061df9a2 in g_main_loop_run () at /lib64/libglib-2.0.so.0
  #3  0x7f34032ca90a in  () at /lib64/libgio-2.0.so.0
  #4  0x7f34062086ea in  () at /lib64/libglib-2.0.so.0
  #5  0x7f33fe19b58e in start_thread (arg=)
  at pthread_create.c:486
  #6  0x7f33fe0ca593 in clone ()
  at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

  Thread 2 (Thread 0x7f33eb050700 (LWP 10471)):
  #0  0x7f33fe1a5400 in __GI___nanosleep (requested_time=0x7f33eb04d270, 
remaining=0x7f33eb04d280) at ../sysdeps/unix/sysv/linux/nanosleep.c:28
  #1  0x7f3406209e17 in g_usleep () at /lib64/libglib-2.0.so.0
  #2  0x559b04f7cb80 in call_rcu_thread (opaque=opaque@entry=0x0)
  

[Qemu-devel] [Bug 1804323] [NEW] qemu segfaults in virtio-scsi driver if underlying device returns -EIO

2018-11-20 Thread Richard Jones
Public bug reported:

Reported downstream in Fedora:
https://bugzilla.redhat.com/show_bug.cgi?id=1650975

Using qemu from git and reasonably recent nbdkit, this command injects -EIO
errors into the block device which virtio-scsi is reading from:

$ nbdkit --filter=error memory size=64M error-rate=100% \
--run 'x86_64-softmmu/qemu-system-x86_64 -device virtio-scsi,id=scsi -drive 
file=$nbd,format=raw,id=hd0,if=none -device scsi-hd,drive=hd0'
nbdkit: memory[1]: error: injecting EIO error into pread
nbdkit: memory[1]: error: injecting EIO error into pread
qemu-system-x86_64: hw/scsi/scsi-bus.c:1374: scsi_req_complete: Assertion 
`req->status == -1' failed.

The stack trace is:

Thread 5 (Thread 0x7f33e1f8b700 (LWP 10474)):
#0  0x7f33fe0bf371 in __GI___poll (fds=0x559b07199490, nfds=1, timeout=-1)
at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x7f34061df5e6 in  () at /lib64/libglib-2.0.so.0
#2  0x7f34061df710 in g_main_context_iteration ()
at /lib64/libglib-2.0.so.0
#3  0x7f34061df761 in  () at /lib64/libglib-2.0.so.0
#4  0x7f34062086ea in  () at /lib64/libglib-2.0.so.0
#5  0x7f33fe19b58e in start_thread (arg=)
at pthread_create.c:486
#6  0x7f33fe0ca593 in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 4 (Thread 0x7f33e3fff700 (LWP 10473)):
#0  0x7f33fe1a4a8d in __lll_lock_wait ()
at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x7f33fe19ddf8 in __GI___pthread_mutex_lock 
(mutex=mutex@entry=0x559b054697a0 ) at 
../nptl/pthread_mutex_lock.c:78
#2  0x559b04f6b103 in qemu_mutex_lock_impl (mutex=0x559b054697a0 
, file=0x559b04f87041 "/home/rjones/d/qemu/exec.c", 
line=3197)
at util/qemu-thread-posix.c:66
#3  0x559b04b722ee in qemu_mutex_lock_iothread_impl 
(file=file@entry=0x559b04f87041 "/home/rjones/d/qemu/exec.c", 
line=line@entry=3197)
at /home/rjones/d/qemu/cpus.c:1845
#4  0x559b04b31859 in prepare_mmio_access (mr=, 
mr=) at /home/rjones/d/qemu/exec.c:3197
#5  0x559b04b381d4 in address_space_ldub (as=, 
addr=, attrs=..., result=result@entry=0x0)
at /home/rjones/d/qemu/memory_ldst.inc.c:188
#6  0x559b04c61cd0 in helper_inb (env=, port=) at /home/rjones/d/qemu/target/i386/cpu.h:1846
#7  0x7f33e889dc3e in code_gen_buffer ()
#8  0x559b04bb3b87 in cpu_tb_exec (itb=, cpu=0x7f33e8876100 
) at /home/rjones/d/qemu/accel/tcg/cpu-exec.c:171
#9  0x559b04bb3b87 in cpu_loop_exec_tb (tb_exit=, 
last_tb=, tb=, cpu=0x7f33e8876100 
) at /home/rjones/d/qemu/accel/tcg/cpu-exec.c:615
#10 0x559b04bb3b87 in cpu_exec (cpu=cpu@entry=0x559b05db57a0)
at /home/rjones/d/qemu/accel/tcg/cpu-exec.c:725
#11 0x559b04b7088f in tcg_cpu_exec (cpu=0x559b05db57a0)
at /home/rjones/d/qemu/cpus.c:1425
#12 0x559b04b72c03 in qemu_tcg_cpu_thread_fn (arg=0x559b05db57a0)
at /home/rjones/d/qemu/cpus.c:1729
#13 0x559b04b72c03 in qemu_tcg_cpu_thread_fn (arg=arg@entry=0x559b05db57a0)
at /home/rjones/d/qemu/cpus.c:1703
#14 0x559b04f6afba in qemu_thread_start (args=)
at util/qemu-thread-posix.c:498
#15 0x7f33fe19b58e in start_thread (arg=)
at pthread_create.c:486
#16 0x7f33fe0ca593 in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 3 (Thread 0x7f33e178a700 (LWP 10475)):
#0  0x7f33fe0bf371 in __GI___poll (fds=0x559b071aa760, nfds=2, timeout=-1)
at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x7f34061df5e6 in  () at /lib64/libglib-2.0.so.0
#2  0x7f34061df9a2 in g_main_loop_run () at /lib64/libglib-2.0.so.0
#3  0x7f34032ca90a in  () at /lib64/libgio-2.0.so.0
#4  0x7f34062086ea in  () at /lib64/libglib-2.0.so.0
#5  0x7f33fe19b58e in start_thread (arg=)
at pthread_create.c:486
#6  0x7f33fe0ca593 in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7f33eb050700 (LWP 10471)):
#0  0x7f33fe1a5400 in __GI___nanosleep (requested_time=0x7f33eb04d270, 
remaining=0x7f33eb04d280) at ../sysdeps/unix/sysv/linux/nanosleep.c:28
#1  0x7f3406209e17 in g_usleep () at /lib64/libglib-2.0.so.0
#2  0x559b04f7cb80 in call_rcu_thread (opaque=opaque@entry=0x0)
at util/rcu.c:253
#3  0x559b04f6afba in qemu_thread_start (args=)
at util/qemu-thread-posix.c:498
#4  0x7f33fe19b58e in start_thread (arg=)
at pthread_create.c:486
#5  0x7f33fe0ca593 in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 1 (Thread 0x7f33eb9b42c0 (LWP 10470)):
#0  0x7f33fe00553f in __GI_raise (sig=sig@entry=6)
at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x7f33fdfef895 in __GI_abort () at abort.c:79
#2  0x7f33fdfef769 in __assert_fail_base (fmt=0x7f33fe156ea8 "%s%s%s:%u: 
%s%sAssertion `%s' failed.\n%n", assertion=0x559b050203cf "req->status == -1", 
file=0x559b050203aa "hw/scsi/scsi-bus.c", line=1374, function=0x559b05020dc0 
<__PRETTY_FUNCTION__.32414> "scsi_req_complete") at assert.c:92
#3  0x7f33fdffd9f6 in __GI___assert_fail 
(assertion=assertion@entry=0x559b050203cf 

[Qemu-devel] [Bug 1740364] Re: qemu-img: fails to get shared 'write' lock

2018-11-02 Thread Richard Jones
Fixed upstream in
https://github.com/libguestfs/libguestfs/commit/f00f920ad3b15ab8e9e8f201c16e7628b6b7b109

The fix should appear in libguestfs 1.40.

** Changed in: qemu
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1740364

Title:
  qemu-img: fails to get shared 'write' lock

Status in QEMU:
  Fix Committed

Bug description:
  Description of problem:
  Somewhere in F27 (did not see it happening before), I'm getting while running 
libguestfs (via libvirt or direct), a qemu-img failure. Note: multiple qcow2 
snapshots are on the same backing file, and a parallel libguestfs command is 
running on all. However, it seems to be failing to get a lock on the leaf, 
which is unique, non-shared.

  The VM is up and running. I'm not sure why qemu-img is even trying to get a 
write lock on it. Even 'info' fails:
  ykaul@ykaul ovirt-system-tests]$ qemu-img info 
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  qemu-img: Could not open 
'/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2':
 Failed to get shared "write" lock
  Is another process using the image?
  [ykaul@ykaul ovirt-system-tests]$ lsof |grep qcow2
  [ykaul@ykaul ovirt-system-tests]$ file 
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2:
 QEMU QCOW Image (v3), has backing file (path 
/var/lib/lago/store/phx_repo:el7.4-base:v1), 6442450944 bytes

  
  And it's OK if I kill the VM of course.


  
  Version-Release number of selected component (if applicable):
  [ykaul@ykaul ovirt-system-tests]$ rpm -qa |grep qemu
  qemu-block-nfs-2.10.1-2.fc27.x86_64
  qemu-block-dmg-2.10.1-2.fc27.x86_64
  qemu-guest-agent-2.10.1-2.fc27.x86_64
  qemu-system-x86-core-2.10.1-2.fc27.x86_64
  qemu-block-curl-2.10.1-2.fc27.x86_64
  qemu-img-2.10.1-2.fc27.x86_64
  qemu-common-2.10.1-2.fc27.x86_64
  qemu-kvm-2.10.1-2.fc27.x86_64
  qemu-block-ssh-2.10.1-2.fc27.x86_64
  qemu-block-iscsi-2.10.1-2.fc27.x86_64
  libvirt-daemon-driver-qemu-3.7.0-3.fc27.x86_64
  qemu-block-gluster-2.10.1-2.fc27.x86_64
  ipxe-roms-qemu-20161108-2.gitb991c67.fc26.noarch
  qemu-system-x86-2.10.1-2.fc27.x86_64
  qemu-block-rbd-2.10.1-2.fc27.x86_64

  
  How reproducible:
  Sometimes.

  Steps to Reproduce:
  1. Running Lago (ovirt-system-tests) on my laptop, it happens quite a lot.

  Additional info:
  libguestfs: trace: set_verbose true
  libguestfs: trace: set_verbose = 0
  libguestfs: trace: set_backend "direct"
  libguestfs: trace: set_backend = 0
  libguestfs: create: flags = 0, handle = 0x7f1314006430, program = python2
  libguestfs: trace: set_program "lago"
  libguestfs: trace: set_program = 0
  libguestfs: trace: add_drive_ro 
"/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2"
  libguestfs: trace: add_drive 
"/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2"
 "readonly:true"
  libguestfs: creating COW overlay to protect original drive content
  libguestfs: trace: get_tmpdir
  libguestfs: trace: get_tmpdir = "/tmp"
  libguestfs: trace: disk_create "/tmp/libguestfsWrA7Dh/overlay1.qcow2" "qcow2" 
-1 
"backingfile:/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2"
  libguestfs: command: run: qemu-img
  libguestfs: command: run: \ create
  libguestfs: command: run: \ -f qcow2
  libguestfs: command: run: \ -o 
backing_file=/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  libguestfs: command: run: \ /tmp/libguestfsWrA7Dh/overlay1.qcow2
  qemu-img: /tmp/libguestfsWrA7Dh/overlay1.qcow2: Failed to get shared "write" 
lock
  Is another process using the image?
  Could not open backing image to determine size.
  libguestfs: trace: disk_create = -1 (error)
  libguestfs: trace: add_drive = -1 (error)
  libguestfs: trace: add_drive_ro = -1 (error)

  
  And:
  [ykaul@ykaul ovirt-system-tests]$ strace qemu-img info 
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  execve("/usr/bin/qemu-img", ["qemu-img", "info", 
"/home/ykaul/ovirt-system-tests/d"...], 0x7fffb36ccfc0 /* 59 vars */) = 0
  brk(NULL)   = 0x562790488000
  mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f20cea08000
  access("/etc/ld.so.preload", R_OK)  = -1 ENOENT (No such file or 
directory)
  openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
  fstat(3, {st_mode=S_IFREG|0644, 

[Qemu-devel] [Bug 1740364] Re: qemu-img: fails to get shared 'write' lock

2018-11-02 Thread Richard Jones
Sorry, I noticed this bug is filed against qemu.  The fix was done in
libguestfs; it's not a bug in qemu as far as I know.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1740364

Title:
  qemu-img: fails to get shared 'write' lock

Status in QEMU:
  Fix Committed

Bug description:
  Description of problem:
  Somewhere in F27 (did not see it happening before), I'm getting while running 
libguestfs (via libvirt or direct), a qemu-img failure. Note: multiple qcow2 
snapshots are on the same backing file, and a parallel libguestfs command is 
running on all. However, it seems to be failing to get a lock on the leaf, 
which is unique, non-shared.

  The VM is up and running. I'm not sure why qemu-img is even trying to get a 
write lock on it. Even 'info' fails:
  ykaul@ykaul ovirt-system-tests]$ qemu-img info 
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  qemu-img: Could not open 
'/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2':
 Failed to get shared "write" lock
  Is another process using the image?
  [ykaul@ykaul ovirt-system-tests]$ lsof |grep qcow2
  [ykaul@ykaul ovirt-system-tests]$ file 
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2:
 QEMU QCOW Image (v3), has backing file (path 
/var/lib/lago/store/phx_repo:el7.4-base:v1), 6442450944 bytes

  
  And it's OK if I kill the VM of course.


  
  Version-Release number of selected component (if applicable):
  [ykaul@ykaul ovirt-system-tests]$ rpm -qa |grep qemu
  qemu-block-nfs-2.10.1-2.fc27.x86_64
  qemu-block-dmg-2.10.1-2.fc27.x86_64
  qemu-guest-agent-2.10.1-2.fc27.x86_64
  qemu-system-x86-core-2.10.1-2.fc27.x86_64
  qemu-block-curl-2.10.1-2.fc27.x86_64
  qemu-img-2.10.1-2.fc27.x86_64
  qemu-common-2.10.1-2.fc27.x86_64
  qemu-kvm-2.10.1-2.fc27.x86_64
  qemu-block-ssh-2.10.1-2.fc27.x86_64
  qemu-block-iscsi-2.10.1-2.fc27.x86_64
  libvirt-daemon-driver-qemu-3.7.0-3.fc27.x86_64
  qemu-block-gluster-2.10.1-2.fc27.x86_64
  ipxe-roms-qemu-20161108-2.gitb991c67.fc26.noarch
  qemu-system-x86-2.10.1-2.fc27.x86_64
  qemu-block-rbd-2.10.1-2.fc27.x86_64

  
  How reproducible:
  Sometimes.

  Steps to Reproduce:
  1. Running Lago (ovirt-system-tests) on my laptop, it happens quite a lot.

  Additional info:
  libguestfs: trace: set_verbose true
  libguestfs: trace: set_verbose = 0
  libguestfs: trace: set_backend "direct"
  libguestfs: trace: set_backend = 0
  libguestfs: create: flags = 0, handle = 0x7f1314006430, program = python2
  libguestfs: trace: set_program "lago"
  libguestfs: trace: set_program = 0
  libguestfs: trace: add_drive_ro 
"/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2"
  libguestfs: trace: add_drive 
"/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2"
 "readonly:true"
  libguestfs: creating COW overlay to protect original drive content
  libguestfs: trace: get_tmpdir
  libguestfs: trace: get_tmpdir = "/tmp"
  libguestfs: trace: disk_create "/tmp/libguestfsWrA7Dh/overlay1.qcow2" "qcow2" 
-1 
"backingfile:/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2"
  libguestfs: command: run: qemu-img
  libguestfs: command: run: \ create
  libguestfs: command: run: \ -f qcow2
  libguestfs: command: run: \ -o 
backing_file=/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  libguestfs: command: run: \ /tmp/libguestfsWrA7Dh/overlay1.qcow2
  qemu-img: /tmp/libguestfsWrA7Dh/overlay1.qcow2: Failed to get shared "write" 
lock
  Is another process using the image?
  Could not open backing image to determine size.
  libguestfs: trace: disk_create = -1 (error)
  libguestfs: trace: add_drive = -1 (error)
  libguestfs: trace: add_drive_ro = -1 (error)

  
  And:
  [ykaul@ykaul ovirt-system-tests]$ strace qemu-img info 
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  execve("/usr/bin/qemu-img", ["qemu-img", "info", 
"/home/ykaul/ovirt-system-tests/d"...], 0x7fffb36ccfc0 /* 59 vars */) = 0
  brk(NULL)   = 0x562790488000
  mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f20cea08000
  access("/etc/ld.so.preload", R_OK)  = -1 ENOENT (No such file or 
directory)
  openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
  fstat(3, {st_mode=S_IFREG|0644, st_size=93275, ...}) = 0
  mmap(NULL, 93275, PROT_READ, MAP_PRIVATE, 3, 0) = 

[Qemu-devel] [Bug 1462944] Re: vpc file causes qemu-img to consume lots of time and memory

2018-06-14 Thread Richard Jones
I suspect this bug is probably still around, and if not, this class of
bugs certainly is.  What we have done in management tools like OpenStack
is to confine qemu-img with simple ulimits when inspecting any untrusted
image.  That solves the problem, so it's probably fine to close this bug
now.
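
For reference, the confinement mentioned above can be as simple as the
sketch below, which sets CPU-time and address-space limits before exec'ing
qemu-img.  The specific limit values and the plain "qemu-img" lookup on
PATH are illustrative assumptions, not what any particular tool ships;
it is roughly "ulimit -t 30 -v 1048576; qemu-img info IMAGE" in a shell.

/* Run "qemu-img info IMAGE" under illustrative resource limits. */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s IMAGE\n", argv[0]);
        return 2;
    }

    struct rlimit cpu = { 30, 30 };                       /* CPU seconds */
    struct rlimit mem = { 1024UL * 1024 * 1024,           /* 1 GiB of    */
                          1024UL * 1024 * 1024 };         /* address space */
    setrlimit(RLIMIT_CPU, &cpu);
    setrlimit(RLIMIT_AS, &mem);

    execlp("qemu-img", "qemu-img", "info", argv[1], (char *) NULL);
    perror("execlp");          /* only reached if the exec itself fails */
    return 127;
}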

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1462944

Title:
  vpc file causes qemu-img to consume lots of time and memory

Status in QEMU:
  Incomplete

Bug description:
  The attached vpc file causes 'qemu-img info' to consume 3 or 4 seconds
  of CPU time and 1.3 GB of heap, causing a minor denial of service.

  $ /usr/bin/time ~/d/qemu/qemu-img info afl12.img
  block-vpc: The header checksum of 'afl12.img' is incorrect.
  qemu-img: Could not open 'afl12.img': block-vpc: free_data_block_offset 
points after the end of file. The image has been truncated.
  1.19user 3.15system 0:04.35elapsed 99%CPU (0avgtext+0avgdata 
1324504maxresident)k
  0inputs+0outputs (0major+327314minor)pagefaults 0swaps

  The file was found using american-fuzzy-lop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1462944/+subscriptions



[Qemu-devel] [Bug 1155677] Re: snapshot=on fails with non file-based storage

2018-02-06 Thread Richard Jones
Let's close this.  libguestfs doesn't use snapshot=on any longer.

** Changed in: qemu
   Status: In Progress => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1155677

Title:
  snapshot=on fails with non file-based storage

Status in QEMU:
  Incomplete

Bug description:
  The snapshot=on option doesn't work with an nbd block device:

  /usr/bin/qemu-system-x86_64 \
  [...]
  -device virtio-scsi-pci,id=scsi \
  -drive file=nbd:localhost:61930,snapshot=on,format=raw,id=hd0,if=none \
  -device scsi-hd,drive=hd0 \
  [...]

  gives the error:

  qemu-system-x86_64: -drive
  file=nbd:localhost:61930,snapshot=on,format=raw,id=hd0,if=none: could
  not open disk image nbd:localhost:61930: No such file or directory

  If you remove the snapshot=on flag, it works (although that of course
  means that the block device is writable which we don't want).

  Previously reported here:

http://permalink.gmane.org/gmane.comp.emulators.qemu/148390

  and I can confirm this still happens in qemu 1.4.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1155677/+subscriptions



[Qemu-devel] [Bug 1186984] Re: large -initrd can wrap around in memory causing memory corruption

2018-01-30 Thread Richard Jones
The answer is I don't know.  Closing this bug seems correct unless
someone can reproduce the original problem.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1186984

Title:
  large -initrd can wrap around in memory causing memory corruption

Status in QEMU:
  Incomplete

Bug description:
  We don't use large -initrd in libguestfs any more, but I noticed that
  a large -initrd file now crashes qemu spectacularly:

  $ ls -lh /tmp/kernel /tmp/initrd 
  -rw-r--r--. 1 rjones rjones 273M Jun  3 14:02 /tmp/initrd
  lrwxrwxrwx. 1 rjones rjones   35 Jun  3 14:02 /tmp/kernel -> 
/boot/vmlinuz-3.9.4-200.fc18.x86_64

  $ ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios \
  -kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img -serial stdio 
\
  -append console=ttyS0

  qemu crashes with one of several errors:

  PFLASH: Possible BUG - Write block confirm

  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x000b96cd

  If -enable-kvm is used:

  KVM: injection failed, MSI lost (Operation not permitted)

  In all cases the SDL display fills up with coloured blocks before the
  crash (see the attached screenshot).

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1186984/+subscriptions



[Qemu-devel] [Bug 1269606] Re: curl driver (https) always says "No such file or directory"

2018-01-24 Thread Richard Jones
I restored the original description.  Please report a new bug.

** Description changed:

+ 
  I have a remote server, on which an http disk image definitely exists.
- However the qemu curl block driver cannot open it.  It always gives the
+ However the qemu curl block driver cannot open it. It always gives the
  bogus error:
  
  CURL: Error opening file: Connection time-out
  qemu-system-x86_64: -drive 
file=http://onuma/scratch/cirros-0.3.1-x86_64-disk.img,snapshot=on,cache=writeback,id=hd0,if=none:
 could not open disk image http://onuma/scratch/cirros-0.3.1-x86_64-disk.img: 
Could not open backing file: Could not open 
'http://onuma/scratch/cirros-0.3.1-x86_64-disk.img': No such file or directory
  
  On the server, I can see from the logs that qemu/curl is opening it:
  
  192.168.0.175 - - [15/Jan/2014:21:25:37 +] "HEAD 
/scratch/cirros-0.3.1-x86_64-disk.img HTTP/1.1" 200 - "-" "-"
  192.168.0.175 - - [15/Jan/2014:21:25:37 +] "GET 
/scratch/cirros-0.3.1-x86_64-disk.img HTTP/1.1" 206 264192 "-" "-"
  
  I am using qemu & curl from git today.
  
  curl: curl-7_34_0-177-gc7a76bb
  qemu: for-anthony-839-g1cf892c
  
  Here is the full command I am using:
  
- 0622001a98105b80ecfad39dbf0b84c0ed71e12bb603a81a2b9b52e19ca7ae80E
+ http_proxy= \
+ LD_LIBRARY_PATH=/home/rjones/d/curl/lib/.libs \
+ LIBGUESTFS_BACKEND=direct \
+ LIBGUESTFS_HV=/home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64 \
+ guestfish -v --ro -a http://onuma/scratch/cirros-0.3.1-x86_64-disk.img run
+ 
+ The full output (includes qemu command itself) is:
+ 
+ libguestfs: launch: program=guestfish
+ libguestfs: launch: version=1.25.20fedora=21,release=1.fc21,libvirt
+ libguestfs: launch: backend registered: unix
+ libguestfs: launch: backend registered: uml
+ libguestfs: launch: backend registered: libvirt
+ libguestfs: launch: backend registered: direct
+ libguestfs: launch: backend=direct
+ libguestfs: launch: tmpdir=/tmp/libguestfsoQctgE
+ libguestfs: launch: umask=0002
+ libguestfs: launch: euid=1000
+ libguestfs: command: run: /usr/bin/supermin-helper
+ libguestfs: command: run: \ --verbose
+ libguestfs: command: run: \ -f checksum
+ libguestfs: command: run: \ --host-cpu x86_64
+ libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
+ supermin helper [0ms] whitelist = (not specified)
+ supermin helper [0ms] host_cpu = x86_64
+ supermin helper [0ms] dtb_wildcard = (not specified)
+ supermin helper [0ms] inputs:
+ supermin helper [0ms] inputs[0] = /usr/lib64/guestfs/supermin.d
+ supermin helper [0ms] outputs:
+ supermin helper [0ms] kernel = (none)
+ supermin helper [0ms] dtb = (none)
+ supermin helper [0ms] initrd = (none)
+ supermin helper [0ms] appliance = (none)
+ checking modpath /lib/modules/3.12.5-302.fc20.x86_64 is a directory
+ checking modpath /lib/modules/3.11.9-200.fc19.x86_64 is a directory
+ checking modpath /lib/modules/3.11.10-200.fc19.x86_64 is a directory
+ checking modpath /lib/modules/3.11.4-201.fc19.x86_64 is a directory
+ picked kernel vmlinuz-3.12.5-302.fc20.x86_64
+ supermin helper [0ms] finished creating kernel
+ supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d
+ supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d/base.img.gz
+ supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d/daemon.img.gz
+ supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d/hostfiles
+ supermin helper [00041ms] visiting /usr/lib64/guestfs/supermin.d/init.img
+ supermin helper [00041ms] visiting 
/usr/lib64/guestfs/supermin.d/udev-rules.img
+ supermin helper [00041ms] adding kernel modules
+ supermin helper [00064ms] finished creating appliance
+ libguestfs: checksum of existing appliance: 
2017df18eaeee7c45b87139c9bd80be2216d655a1513322c47f58a7a3668cd1f
+ libguestfs: [00066ms] begin testing qemu features
+ libguestfs: command: run: 
/home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64
+ libguestfs: command: run: \ -display none
+ libguestfs: command: run: \ -help
+ libguestfs: command: run: 
/home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64
+ libguestfs: command: run: \ -display none
+ libguestfs: command: run: \ -version
+ libguestfs: qemu version 1.7
+ libguestfs: command: run: 
/home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64
+ libguestfs: command: run: \ -display none
+ libguestfs: command: run: \ -machine accel=kvm:tcg
+ libguestfs: command: run: \ -device ?
+ libguestfs: [00127ms] finished testing qemu features
+ [00128ms] /home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64 \
+ -global virtio-blk-pci.scsi=off \
+ -nodefconfig \
+ -enable-fips \
+ -nodefaults \
+ -display none \
+ -machine accel=kvm:tcg \
+ -cpu host,+kvmclock \
+ -m 500 \
+ -no-reboot \
+ -no-hpet \
+ -kernel /var/tmp/.guestfs-1000/kernel.19645 \
+ -initrd /var/tmp/.guestfs-1000/initrd.19645 \
+ -device virtio-scsi-pci,id=scsi \
+ -drive 

Re: [Qemu-devel] [Bug 1740364] Re: qemu-img: fails to get shared 'write' lock

2018-01-05 Thread Richard Jones
On Fri, Jan 05, 2018 at 02:44:54AM -, Ping Li wrote:
> The behaviour should be expected. Since qemu 2.10, image locking is
> enabled which make multiple QEMU processes cannot write to the same
> image, even if boot snapshot and backing file at the same time are
> not allowed. 
> You could get image info with the option "-U" as below:
> $qemu-img info -U $ImageName
> The reason qcow2 is not allowed is because metadata has to be read 
> from the image file, and it is not safe if the image is being used 
> by the VM, because it may update metadata while we read it, 
> resulting in inconsistent or wrong output.

The higher layers deal with inconsistent output.  We want a way to
turn off locking when *we* know that it's safe; qemu doesn't have a
way to know that.

Interestingly, the -U option is undocumented, but it seems like what
we want here.

Yaniv, how about this (only lightly tested):

diff --git a/lib/info.c b/lib/info.c
index 4464df994..460596373 100644
--- a/lib/info.c
+++ b/lib/info.c
@@ -193,6 +193,7 @@ get_json_output (guestfs_h *g, const char *filename)
 
   guestfs_int_cmd_add_arg (cmd, QEMU_IMG);
   guestfs_int_cmd_add_arg (cmd, "info");
+  guestfs_int_cmd_add_arg (cmd, "-U");
   guestfs_int_cmd_add_arg (cmd, "--output");
   guestfs_int_cmd_add_arg (cmd, "json");
   guestfs_int_cmd_add_arg (cmd, fdpath);


Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1740364

Title:
  qemu-img: fails to get shared 'write' lock

Status in QEMU:
  New

Bug description:
  Description of problem:
  Somewhere in F27 (did not see it happening before), I'm getting while running 
libguestfs (via libvirt or direct), a qemu-img failure. Note: multiple qcow2 
snapshots are on the same backing file, and a parallel libguestfs command is 
running on all. However, it seems to be failing to get a lock on the leaf, 
which is unique, non-shared.

  The VM is up and running. I'm not sure why qemu-img is even trying to get a 
write lock on it. Even 'info' fails:
  ykaul@ykaul ovirt-system-tests]$ qemu-img info 
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  qemu-img: Could not open 
'/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2':
 Failed to get shared "write" lock
  Is another process using the image?
  [ykaul@ykaul ovirt-system-tests]$ lsof |grep qcow2
  [ykaul@ykaul ovirt-system-tests]$ file 
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2
  
/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2:
 QEMU QCOW Image (v3), has backing file (path 
/var/lib/lago/store/phx_repo:el7.4-base:v1), 6442450944 bytes

  
  And it's OK if I kill the VM of course.


  
  Version-Release number of selected component (if applicable):
  [ykaul@ykaul ovirt-system-tests]$ rpm -qa |grep qemu
  qemu-block-nfs-2.10.1-2.fc27.x86_64
  qemu-block-dmg-2.10.1-2.fc27.x86_64
  qemu-guest-agent-2.10.1-2.fc27.x86_64
  qemu-system-x86-core-2.10.1-2.fc27.x86_64
  qemu-block-curl-2.10.1-2.fc27.x86_64
  qemu-img-2.10.1-2.fc27.x86_64
  qemu-common-2.10.1-2.fc27.x86_64
  qemu-kvm-2.10.1-2.fc27.x86_64
  qemu-block-ssh-2.10.1-2.fc27.x86_64
  qemu-block-iscsi-2.10.1-2.fc27.x86_64
  libvirt-daemon-driver-qemu-3.7.0-3.fc27.x86_64
  qemu-block-gluster-2.10.1-2.fc27.x86_64
  ipxe-roms-qemu-20161108-2.gitb991c67.fc26.noarch
  qemu-system-x86-2.10.1-2.fc27.x86_64
  qemu-block-rbd-2.10.1-2.fc27.x86_64

  
  How reproducible:
  Sometimes.

  Steps to Reproduce:
  1. Running Lago (ovirt-system-tests) on my laptop, it happens quite a lot.

  Additional info:
  libguestfs: trace: set_verbose true
  libguestfs: trace: set_verbose = 0
  libguestfs: trace: set_backend "direct"
  libguestfs: trace: set_backend = 0
  libguestfs: create: flags = 0, handle = 0x7f1314006430, program = python2
  libguestfs: trace: set_program "lago"
  libguestfs: trace: set_program = 0
  libguestfs: trace: add_drive_ro 
"/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2"
  libguestfs: trace: add_drive 
"/home/ykaul/ovirt-system-tests/deployment-basic-suite-master/default/images/lago-basic-suite-master-host-1_root.qcow2"
 "readonly:true"
  libguestfs: creating COW overlay to protect original dr

[Qemu-devel] [Bug 1378554] Re: qemu segfault in virtio_scsi_handle_cmd_req_submit on ARM 32 bit

2017-11-23 Thread Richard Jones
Yes, qemu's working fine on aarch64 so this must have been fixed.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1378554

Title:
  qemu segfault in virtio_scsi_handle_cmd_req_submit on ARM 32 bit

Status in QEMU:
  Fix Released

Bug description:
  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -enable-fips \
  -nodefaults \
  -display none \
  -M virt \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -rtc driftfix=slew \
  -global kvm-pit.lost_tick_policy=discard \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/appliance.d/kernel \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/appliance.d/initrd \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfseV4fT5/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfseV4fT5/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=6000 no_timer_check 
lpj=4464640 acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb 
selinux=0 guestfs_verbose=1 TERM=xterm-256color'

  The appliance boots, but segfaults as soon as the virtio-scsi driver
  is loaded:

  supermin: internal insmod virtio_scsi.ko
  [3.992963] scsi0 : Virtio SCSI HBA
  libguestfs: error: appliance closed the connection unexpectedly, see earlier 
error messages

  I captured a core dump:

  Core was generated by `/home/rjones/d/qemu/arm-softmmu/qemu-system-arm 
-global virtio-blk-device.scsi='.
  Program terminated with signal SIGSEGV, Segmentation fault.
  #0  0x000856bc in virtio_scsi_handle_cmd_req_submit (s=, 
  req=0x6c03acf8) at /home/rjones/d/qemu/hw/scsi/virtio-scsi.c:551
  551   bdrv_io_unplug(req->sreq->dev->conf.bs);
  (gdb) bt
  #0  0x000856bc in virtio_scsi_handle_cmd_req_submit (s=, 
  req=0x6c03acf8) at /home/rjones/d/qemu/hw/scsi/virtio-scsi.c:551
  #1  0x0008573a in virtio_scsi_handle_cmd (vdev=0xac4d68, vq=0xafe4b8)
  at /home/rjones/d/qemu/hw/scsi/virtio-scsi.c:573
  #2  0x0004fdbe in access_with_adjusted_size (addr=80, 
  value=value@entry=0x4443e6c0, size=size@entry=4, access_size_min=1, 
  access_size_max=, access_size_max@entry=0, 
  access=access@entry=0x4fee9 , 
  mr=mr@entry=0xa53fa8) at /home/rjones/d/qemu/memory.c:480
  #3  0x00054234 in memory_region_dispatch_write (size=4, data=2, 
  addr=, mr=0xa53fa8) at /home/rjones/d/qemu/memory.c:1117
  #4  io_mem_write (mr=0xa53fa8, addr=, val=val@entry=2, 
  size=size@entry=4) at /home/rjones/d/qemu/memory.c:1958
  #5  0x00021c88 in address_space_rw (as=0x3b96b4 , 
  addr=167788112, buf=buf@entry=0x4443e790 "\002", len=len@entry=4, 
  is_write=is_write@entry=true) at /home/rjones/d/qemu/exec.c:2135
  #6  0x00021de6 in address_space_write (len=4, buf=0x4443e790 "\002", 
  addr=, as=)
  at /home/rjones/d/qemu/exec.c:2202
  #7  subpage_write (opaque=, addr=, value=2, 
  len=4) at /home/rjones/d/qemu/exec.c:1811
  #8  0x0004fdbe in access_with_adjusted_size (addr=592, 
  value=value@entry=0x4443e820, size=size@entry=4, access_size_min=1, 
  access_size_max=, access_size_max@entry=0, 
  access=access@entry=0x4fee9 , 
  mr=mr@entry=0xaed980) at /home/rjones/d/qemu/memory.c:480
  #9  0x00054234 in memory_region_dispatch_write (size=4, data=2, 
  addr=, mr=0xaed980) at /home/rjones/d/qemu/memory.c:1117
  #10 io_mem_write (mr=0xaed980, addr=, val=2, size=size@entry=4)
  at /home/rjones/d/qemu/memory.c:1958
  #11 0x00057f24 in io_writel (retaddr=1121296542, Cannot access memory at 
address 0x0
  addr=, val=2, 
  physaddr=592, env=0x9d6c50) at /home/rjones/d/qemu/softmmu_template.h:381
  #12 helper_le_stl_mmu (env=0x9d6c50, addr=, val=2, 
  mmu_idx=, retaddr=1121296542)
  at /home/rjones/d/qemu/softmmu_template.h:419
  #13 0x42d5a0a0 in ?? ()
  Cannot access memory at address 0x0
  Backtrace stopped: previous frame identical to this frame (corrupt stack?)
  (gdb) print req
  $1 = (VirtIOSCSIReq *) 0x6c03acf8
  (gdb) print req->sreq
  $2 = (SCSIRequest *) 0xc2c2c2c2
  (gdb) print req->sreq->dev
  Cannot access memory at address 0xc2c2c2c6
  (gdb) print *req
  $3 = {
dev = 0x6c40, 
vq = 0x6c40, 
qsgl = {
  sg = 0x0, 
  nsg = 0, 
  nalloc = -1027423550, 
  size = 3267543746, 
  dev = 0xc2c2c2c2, 
  as = 0xc2c2c2c2
}, 
resp_iov = {
  iov = 0xc2c2c2c2, 
  niov = -1027423550, 
  

[Qemu-devel] [Bug 1383857] Re: aarch64: virtio disks don't show up in guest (neither blk nor scsi)

2017-11-07 Thread Richard Jones
Oh yes, long fixed.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1383857

Title:
  aarch64: virtio disks don't show up in guest (neither blk nor scsi)

Status in QEMU:
  Incomplete

Bug description:
  kernel-3.18.0-0.rc1.git0.1.rwmj5.fc22.aarch64 (3.18 rc1 + some hardware 
enablement)
  qemu from git today

  When I create a guest with virtio-scsi disks, they don't show up inside the 
guest.
  Literally after the virtio_mmio.ko and virtio_scsi.ko modules are loaded, 
there are
  no messages about disks, and of course nothing else works.

  Really long command line (generated by libvirt):

  HOME=/home/rjones USER=rjones LOGNAME=rjones QEMU_AUDIO_DRV=none
  TMPDIR=/home/rjones/d/libguestfs/tmp
  /home/rjones/d/qemu/aarch64-softmmu/qemu-system-aarch64 -name guestfs-
  oqv29um3jp03kpjf -S -machine virt,accel=tcg,usb=off -cpu cortex-a57 -m
  500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
  a5f1a15d-2bc7-46df-9974-1d1f643b2449 -nographic -no-user-config
  -nodefaults -chardev
  socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib
  /guestfs-oqv29um3jp03kpjf.monitor,server,nowait -mon
  chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-reboot -boot strict=on -kernel
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel -initrd
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/initrd -append
  panic=1 console=ttyAMA0 earlyprintk=pl011,0x900 ignore_loglevel
  efi-rtc=noprobe udevtimeout=6000 udev.event-timeout=6000
  no_timer_check lpj=50 acpi=off printk.time=1 cgroup_disable=memory
  root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color -device
  virtio-scsi-device,id=scsi0 -device virtio-serial-device,id=virtio-
  serial0 -usb -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/scratch.1,if=none,id
  =drive-scsi0-0-0-0,format=raw,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-
  scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/overlay2,if=none,id
  =drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-
  scsi0-0-1-0,id=scsi0-0-1-0 -serial
  unix:/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/console.sock
  -chardev
  
socket,id=charchannel0,path=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/guestfsd.sock
  -device virtserialport,bus=virtio-
  serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0
  -msg timestamp=on

  There are no kernel messages about the disks, they just are not seen.

  Worked with kernel 3.16 so I suspect this could be a kernel bug rather than a
  qemu bug, but I've no idea where to report those.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1383857/+subscriptions



[Qemu-devel] [Bug 1013691] Re: ppc64 + virtio-scsi: only first scsi disk shows up in the guest

2017-11-03 Thread Richard Jones
Closed it; this is a very old bug and we successfully test many disks
with ppc64/le nowadays.

** Changed in: qemu
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1013691

Title:
  ppc64 + virtio-scsi: only first scsi disk shows up in the guest

Status in QEMU:
  Invalid

Bug description:
  When adding two virtio-scsi targets to a single guest, only the first
  disk is seen inside the guest.  For some unknown reason the guest
  doesn't enumerate the second disk.

  For full qemu-system-ppc64 command line and 'dmesg' output, see:

  http://lists.nongnu.org/archive/html/qemu-devel/2012-06/msg02430.html

  I have also tried this with Linus's git tree (3.5.0-rc2+ at time of writing),
  same thing.

  In both cases I'm using qemu from git.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1013691/+subscriptions



[Qemu-devel] [Bug 1726733] [NEW] ‘qemu-img info replication:’ causes segfault

2017-10-24 Thread Richard Jones
Public bug reported:

Typing the literal command ‘qemu-img info replication:’ causes a
segfault.  Note that ‘replication:’ is not a filename.

$ ./qemu-img info replication:
qemu-img: block.c:2609: bdrv_open_inherit: Assertion `!!(flags & 
BDRV_O_PROTOCOL) == !!drv->bdrv_file_open' failed.
Aborted (core dumped)

This was originally found by Han Han and reported in Fedora:
https://bugzilla.redhat.com/show_bug.cgi?id=1505652

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1726733

Title:
  ‘qemu-img info replication:’ causes segfault

Status in QEMU:
  New

Bug description:
  Typing the literal command ‘qemu-img info replication:’ causes a
  segfault.  Note that ‘replication:’ is not a filename.

  $ ./qemu-img info replication:
  qemu-img: block.c:2609: bdrv_open_inherit: Assertion `!!(flags & 
BDRV_O_PROTOCOL) == !!drv->bdrv_file_open' failed.
  Aborted (core dumped)

  This was originally found by Han Han and reported in Fedora:
  https://bugzilla.redhat.com/show_bug.cgi?id=1505652

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1726733/+subscriptions



Re: [Qemu-devel] [Bug 1706296] Re: Booting NT 4 disk causes /home/rjones/d/qemu/cpus.c:1580:qemu_mutex_lock_iothread: assertion failed: (!qemu_mutex_iothread_locked())

2017-08-18 Thread Richard Jones
On Fri, Aug 18, 2017 at 10:23:25AM -, Alex Bennée wrote:
> That said from John's update it sounds very much like a symptom of not
> emulating the right processor type rather than behaviour we are
> incorrectly modelling.

FWIW I checked back with the original specs, and NT 4.0 minimally
required a Pentium processor (and 16 MB of RAM :-)

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1706296

Title:
  Booting NT 4 disk causes
  /home/rjones/d/qemu/cpus.c:1580:qemu_mutex_lock_iothread: assertion
  failed: (!qemu_mutex_iothread_locked())

Status in QEMU:
  New

Bug description:
  Grab the NT 4 disk from
  https://archive.org/details/Microsoft_Windows_NT_Server_Version_4.0_227-075
  -385_CD-KEY_419-1343253_1996

  Try to boot it as follows:

  qemu-system-x86_64 -hda disk.img -cdrom 
Microsoft_Windows_NT_Server_Version_4.0_227-075-385_CD-KEY_419-1343253_1996.iso 
-m 2048 -boot d -machine pc,accel=tcg
  WARNING: Image format was not specified for 'disk.img' and probing guessed 
raw.
   Automatically detecting the format is dangerous for raw images, 
write operations on block 0 will be restricted.
   Specify the 'raw' format explicitly to remove the restrictions.
  **
  ERROR:/home/rjones/d/qemu/cpus.c:1580:qemu_mutex_lock_iothread: assertion 
failed: (!qemu_mutex_iothread_locked())
  Aborted (core dumped)

  The stack trace in the failing thread is:

  Thread 4 (Thread 0x7fffb0418700 (LWP 21979)):
  #0  0x7fffdd89b64b in raise () at /lib64/libc.so.6
  #1  0x7fffdd89d450 in abort () at /lib64/libc.so.6
  #2  0x7fffdff8c75d in g_assertion_message () at /lib64/libglib-2.0.so.0
  #3  0x7fffdff8c7ea in g_assertion_message_expr ()
  at /lib64/libglib-2.0.so.0
  #4  0x557a7d00 in qemu_mutex_lock_iothread ()
  at /home/rjones/d/qemu/cpus.c:1580
  #5  0x557cb429 in io_writex (env=env@entry=0x56751400, 
iotlbentry=0x5675b678, 
  iotlbentry@entry=0x5ae40c918, val=val@entry=8, 
addr=addr@entry=2148532220, retaddr=0, retaddr@entry=93825011136120, 
size=size@entry=4)
  at /home/rjones/d/qemu/accel/tcg/cputlb.c:795
  #6  0x557ce0f7 in io_writel (retaddr=93825011136120, addr=2148532220, 
val=8, index=255, mmu_idx=21845, env=0x56751400)
  at /home/rjones/d/qemu/softmmu_template.h:265
  #7  0x557ce0f7 in helper_le_stl_mmu (env=env@entry=0x56751400, 
addr=addr@entry=2148532220, val=val@entry=8, oi=, 
retaddr=93825011136120, retaddr@entry=0) at 
/home/rjones/d/qemu/softmmu_template.h:300
  #8  0x5587c0a4 in cpu_stl_kernel_ra (env=0x56751400, 
ptr=2148532220, v=8, retaddr=0) at 
/home/rjones/d/qemu/include/exec/cpu_ldst_template.h:182
  #9  0x55882610 in do_interrupt_protected (is_hw=, 
next_eip=, error_code=2, is_int=, 
intno=, env=0x56751400) at 
/home/rjones/d/qemu/target/i386/seg_helper.c:758
  #10 0x55882610 in do_interrupt_all (cpu=cpu@entry=0x56749170, 
intno=, is_int=, error_code=2, 
next_eip=, is_hw=is_hw@entry=0) at 
/home/rjones/d/qemu/target/i386/seg_helper.c:1252
  #11 0x558839d3 in x86_cpu_do_interrupt (cs=0x56749170)
  at /home/rjones/d/qemu/target/i386/seg_helper.c:1298
  #12 0x557d2ccb in cpu_handle_exception (ret=, 
cpu=0x566a4590) at /home/rjones/d/qemu/accel/tcg/cpu-exec.c:465
  #13 0x557d2ccb in cpu_exec (cpu=cpu@entry=0x56749170)
  at /home/rjones/d/qemu/accel/tcg/cpu-exec.c:670
  #14 0x557a855a in tcg_cpu_exec (cpu=0x56749170)
  at /home/rjones/d/qemu/cpus.c:1270
  #15 0x557a855a in qemu_tcg_rr_cpu_thread_fn (arg=)
  at /home/rjones/d/qemu/cpus.c:1365
  #16 0x7fffddc3d36d in start_thread () at /lib64/libpthread.so.0
  #17 0x7fffdd975b9f in clone () at /lib64/libc.so.6

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1706296/+subscriptions



[Qemu-devel] [Bug 1706296] [NEW] Booting NT 4 disk causes /home/rjones/d/qemu/cpus.c:1580:qemu_mutex_lock_iothread: assertion failed: (!qemu_mutex_iothread_locked())

2017-07-25 Thread Richard Jones
Public bug reported:

Grab the NT 4 disk from
https://archive.org/details/Microsoft_Windows_NT_Server_Version_4.0_227-075
-385_CD-KEY_419-1343253_1996

Try to boot it as follows:

qemu-system-x86_64 -hda disk.img -cdrom 
Microsoft_Windows_NT_Server_Version_4.0_227-075-385_CD-KEY_419-1343253_1996.iso 
-m 2048 -boot d -machine pc,accel=tcg
WARNING: Image format was not specified for 'disk.img' and probing guessed raw.
 Automatically detecting the format is dangerous for raw images, write 
operations on block 0 will be restricted.
 Specify the 'raw' format explicitly to remove the restrictions.
**
ERROR:/home/rjones/d/qemu/cpus.c:1580:qemu_mutex_lock_iothread: assertion 
failed: (!qemu_mutex_iothread_locked())
Aborted (core dumped)

The stack trace in the failing thread is:

Thread 4 (Thread 0x7fffb0418700 (LWP 21979)):
#0  0x7fffdd89b64b in raise () at /lib64/libc.so.6
#1  0x7fffdd89d450 in abort () at /lib64/libc.so.6
#2  0x7fffdff8c75d in g_assertion_message () at /lib64/libglib-2.0.so.0
#3  0x7fffdff8c7ea in g_assertion_message_expr ()
at /lib64/libglib-2.0.so.0
#4  0x557a7d00 in qemu_mutex_lock_iothread ()
at /home/rjones/d/qemu/cpus.c:1580
#5  0x557cb429 in io_writex (env=env@entry=0x56751400, 
iotlbentry=0x5675b678, 
iotlbentry@entry=0x5ae40c918, val=val@entry=8, 
addr=addr@entry=2148532220, retaddr=0, retaddr@entry=93825011136120, 
size=size@entry=4)
at /home/rjones/d/qemu/accel/tcg/cputlb.c:795
#6  0x557ce0f7 in io_writel (retaddr=93825011136120, addr=2148532220, 
val=8, index=255, mmu_idx=21845, env=0x56751400)
at /home/rjones/d/qemu/softmmu_template.h:265
#7  0x557ce0f7 in helper_le_stl_mmu (env=env@entry=0x56751400, 
addr=addr@entry=2148532220, val=val@entry=8, oi=, 
retaddr=93825011136120, retaddr@entry=0) at 
/home/rjones/d/qemu/softmmu_template.h:300
#8  0x5587c0a4 in cpu_stl_kernel_ra (env=0x56751400, 
ptr=2148532220, v=8, retaddr=0) at 
/home/rjones/d/qemu/include/exec/cpu_ldst_template.h:182
#9  0x55882610 in do_interrupt_protected (is_hw=, 
next_eip=, error_code=2, is_int=, 
intno=, env=0x56751400) at 
/home/rjones/d/qemu/target/i386/seg_helper.c:758
#10 0x55882610 in do_interrupt_all (cpu=cpu@entry=0x56749170, 
intno=, is_int=, error_code=2, 
next_eip=, is_hw=is_hw@entry=0) at 
/home/rjones/d/qemu/target/i386/seg_helper.c:1252
#11 0x558839d3 in x86_cpu_do_interrupt (cs=0x56749170)
at /home/rjones/d/qemu/target/i386/seg_helper.c:1298
#12 0x557d2ccb in cpu_handle_exception (ret=, 
cpu=0x566a4590) at /home/rjones/d/qemu/accel/tcg/cpu-exec.c:465
#13 0x557d2ccb in cpu_exec (cpu=cpu@entry=0x56749170)
at /home/rjones/d/qemu/accel/tcg/cpu-exec.c:670
#14 0x557a855a in tcg_cpu_exec (cpu=0x56749170)
at /home/rjones/d/qemu/cpus.c:1270
#15 0x557a855a in qemu_tcg_rr_cpu_thread_fn (arg=)
at /home/rjones/d/qemu/cpus.c:1365
#16 0x7fffddc3d36d in start_thread () at /lib64/libpthread.so.0
#17 0x7fffdd975b9f in clone () at /lib64/libc.so.6

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1706296

Title:
  Booting NT 4 disk causes
  /home/rjones/d/qemu/cpus.c:1580:qemu_mutex_lock_iothread: assertion
  failed: (!qemu_mutex_iothread_locked())

Status in QEMU:
  New

Bug description:
  Grab the NT 4 disk from
  https://archive.org/details/Microsoft_Windows_NT_Server_Version_4.0_227-075
  -385_CD-KEY_419-1343253_1996

  Try to boot it as follows:

  qemu-system-x86_64 -hda disk.img -cdrom 
Microsoft_Windows_NT_Server_Version_4.0_227-075-385_CD-KEY_419-1343253_1996.iso 
-m 2048 -boot d -machine pc,accel=tcg
  WARNING: Image format was not specified for 'disk.img' and probing guessed 
raw.
   Automatically detecting the format is dangerous for raw images, 
write operations on block 0 will be restricted.
   Specify the 'raw' format explicitly to remove the restrictions.
  **
  ERROR:/home/rjones/d/qemu/cpus.c:1580:qemu_mutex_lock_iothread: assertion 
failed: (!qemu_mutex_iothread_locked())
  Aborted (core dumped)

  The stack trace in the failing thread is:

  Thread 4 (Thread 0x7fffb0418700 (LWP 21979)):
  #0  0x7fffdd89b64b in raise () at /lib64/libc.so.6
  #1  0x7fffdd89d450 in abort () at /lib64/libc.so.6
  #2  0x7fffdff8c75d in g_assertion_message () at /lib64/libglib-2.0.so.0
  #3  0x7fffdff8c7ea in g_assertion_message_expr ()
  at /lib64/libglib-2.0.so.0
  #4  0x557a7d00 in qemu_mutex_lock_iothread ()
  at /home/rjones/d/qemu/cpus.c:1580
  #5  0x557cb429 in io_writex (env=env@entry=0x56751400, 
iotlbentry=0x5675b678, 
  iotlbentry@entry=0x5ae40c918, val=val@entry=8, 
addr=addr@entry=2148532220, retaddr=0, retaddr@entry=93825011136120, 
size=size@entry=4)
  at 

[Qemu-devel] [Bug 1686980] [NEW] qemu is very slow when adding 16, 384 virtio-scsi drives

2017-04-28 Thread Richard Jones
Public bug reported:

qemu runs very slowly when adding many virtio-scsi drives.  I have
attached a small reproducer shell script which demonstrates this.

Using perf shows the following stack trace taking all the time:

 72.42%  71.15%  qemu-system-x86  qemu-system-x86_64  [.] drive_get
         |
          --72.32%--drive_get
                    |
                     --1.24%--__irqentry_text_start
                               |
                                --1.22%--smp_apic_timer_interrupt
                                          |
                                           --1.00%--local_apic_timer_interrupt
                                                     |
                                                      --1.00%--hrtimer_interrupt
                                                                |
                                                                 --0.83%--__hrtimer_run_queues
                                                                           |
                                                                            --0.64%--tick_sched_timer

 21.70%  21.34%  qemu-system-x86  qemu-system-x86_64  [.] blk_legacy_dinfo
         |
         ---blk_legacy_dinfo

  3.65%   3.59%  qemu-system-x86  qemu-system-x86_64  [.] blk_next
         |
         ---blk_next
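
The profile is consistent with a quadratic pattern: each newly added drive
triggers a lookup that walks the whole list of drives created so far.  A
standalone sketch of that behaviour (assumed lookup behaviour, not QEMU
source) shows where the time goes with 16,384 drives:

  #include <stdio.h>
  #include <stdlib.h>

  struct drive { int bus, unit; struct drive *next; };
  static struct drive *drives;

  /* Assumed behaviour: linear scan over every existing drive. */
  static struct drive *drive_lookup(int bus, int unit)
  {
      for (struct drive *d = drives; d; d = d->next)
          if (d->bus == bus && d->unit == unit)
              return d;
      return NULL;
  }

  int main(void)
  {
      enum { N = 16384 };
      for (int i = 0; i < N; i++) {
          if (drive_lookup(0, i))       /* O(i) work per drive ...            */
              abort();
          struct drive *d = malloc(sizeof *d);
          d->bus = 0; d->unit = i; d->next = drives; drives = d;
      }                                 /* ... so ~N^2/2 = 134 million steps  */
      printf("added %d drives\n", N);
      return 0;
  }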

** Affects: qemu
 Importance: Undecided
 Status: New

** Attachment added: "drives.sh"
   https://bugs.launchpad.net/bugs/1686980/+attachment/4869097/+files/drives.sh

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1686980

Title:
  qemu is very slow when adding 16,384 virtio-scsi drives

Status in QEMU:
  New

Bug description:
  qemu runs very slowly when adding many virtio-scsi drives.  I have
  attached a small reproducer shell script which demonstrates this.

  Using perf shows the following stack trace taking all the time:

   72.42%  71.15%  qemu-system-x86  qemu-system-x86_64  [.] drive_get
           |
            --72.32%--drive_get
                      |
                       --1.24%--__irqentry_text_start
                                 |
                                  --1.22%--smp_apic_timer_interrupt
                                            |
                                             --1.00%--local_apic_timer_interrupt
                                                       |
                                                        --1.00%--hrtimer_interrupt
                                                                  |
                                                                   --0.83%--__hrtimer_run_queues
                                                                             |
                                                                              --0.64%--tick_sched_timer

   21.70%  21.34%  qemu-system-x86  qemu-system-x86_64  [.] blk_legacy_dinfo
           |
           ---blk_legacy_dinfo

    3.65%   3.59%  qemu-system-x86  qemu-system-x86_64  [.] blk_next
           |
           ---blk_next

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1686980/+subscriptions



[Qemu-devel] [Bug 1686364] [NEW] qemu -readconfig/-writeconfig cannot handle quotes in values

2017-04-26 Thread Richard Jones
Public bug reported:

$ qemu-system-x86_64 -drive file=/tmp/foo\" -writeconfig -
# qemu config file

[drive]
  file = "/tmp/foo""

For bonus points, try to construct a valid qemu config file that
contains a quoted value.  It's pretty clear (from looking at the code
also) that this is not possible.

Also:

- maximum value length is hard-coded in the parser at 1023 characters
(for no apparent reason)

- the format is undocumented

- don't use sscanf for parsing!
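
A small standalone demonstration of why a scanf-style parser cannot
round-trip such values (the pattern below is assumed for illustration, not
copied from the QEMU parser): any conversion that reads "up to the next
quote" silently drops an embedded quote.

  #include <stdio.h>

  int main(void)
  {
      /* The line -writeconfig produced above, with the embedded quote. */
      const char *line = "  file = \"/tmp/foo\"\"\n";
      char value[1024];

      if (sscanf(line, " file = \"%1023[^\"]\"", value) == 1)
          printf("parsed: %s\n", value);   /* prints /tmp/foo - the quote is lost */
      return 0;
  }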

** Affects: qemu
 Importance: Undecided
 Status: New

** Description changed:

  $ qemu-system-x86_64 -drive file=/tmp/foo\" -writeconfig -
  # qemu config file
  
  [drive]
-   file = "/tmp/foo""
+   file = "/tmp/foo""
  
- For bonus points, try to construct a value qemu config file that
+ For bonus points, try to construct a valid qemu config file that
  contains a quoted value.  It's pretty clear (from looking at the code
  also) that this is not possible.
  
  Also:
  
  - maximum value length is hard-coded in the parser at 1023 characters
  (for no apparent reason)
  
  - the format is undocumented
  
  - don't use sscanf for parsing!

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1686364

Title:
  qemu -readconfig/-writeconfig cannot handle quotes in values

Status in QEMU:
  New

Bug description:
  $ qemu-system-x86_64 -drive file=/tmp/foo\" -writeconfig -
  # qemu config file

  [drive]
    file = "/tmp/foo""

  For bonus points, try to construct a valid qemu config file that
  contains a quoted value.  It's pretty clear (from looking at the code
  also) that this is not possible.

  Also:

  - maximum value length is hard-coded in the parser at 1023 characters
  (for no apparent reason)

  - the format is undocumented

  - don't use sscanf for parsing!

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1686364/+subscriptions



[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2017-03-04 Thread Richard Jones
I don't know how to close bugs in launchpad, but this one can be closed
for a couple of reasons:

(1) I benchmarked virtio-mmio the other day using qemu-speed-test on aarch64
and I did not encounter the bug.

(2) aarch64 has supported virtio-pci for a while, so virtio-mmio is effectively
obsolete.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  Fix Released

Bug description:
  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):

  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails "often" (eg > 2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.

  This is what the output looks like when it doesn't fail:

  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
  '
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14710ms] appliance is up
  Guest launched OK.
  libguestfs: writing the data to the socket (size = 64)
  FS_LAUNCH_FLAG
  main_loop waiting for next request
  libguestfs: data written OK
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  [... more virtio-mmio lines omitted ...]
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: 

[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2017-03-04 Thread Richard Jones
Fixed upstream, see previous comment.

** Changed in: qemu
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  Fix Released

Bug description:
  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):

  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails "often" (eg > 2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.

  This is what the output looks like when it doesn't fail:

  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
  '
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14710ms] appliance is up
  Guest launched OK.
  libguestfs: writing the data to the socket (size = 64)
  FS_LAUNCH_FLAG
  main_loop waiting for next request
  libguestfs: data written OK
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  [... more virtio-mmio lines omitted ...]
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  guestfsd: main_loop: new request, len 0x3c
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x4
  : 20 00 f5 f5 00 00 00 04 

[Qemu-devel] [Bug 1657010] [NEW] RFE: Please implement -cpu best or a CPU fallback option

2017-01-16 Thread Richard Jones
Public bug reported:

QEMU should implement a -cpu best option or some other way to make this
work:

qemu -M pc,accel=kvm:tcg -cpu best

qemu -M pc,accel=kvm:tcg -cpu host:qemu64

See also:

https://bugzilla.redhat.com/show_bug.cgi?id=1277744#c6

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1657010

Title:
  RFE: Please implement -cpu best or a CPU fallback option

Status in QEMU:
  New

Bug description:
  QEMU should implement a -cpu best option or some other way to make
  this work:

  qemu -M pc,accel=kvm:tcg -cpu best

  qemu -M pc,accel=kvm:tcg -cpu host:qemu64

  See also:

  https://bugzilla.redhat.com/show_bug.cgi?id=1277744#c6

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1657010/+subscriptions



[Qemu-devel] [Bug 1021649] Re: qemu 1.1.0 waits for a keypress at boot

2016-11-10 Thread Richard Jones
No, this refers to a very old version of qemu.  This bug should be
closed.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1021649

Title:
  qemu 1.1.0 waits for a keypress at boot

Status in QEMU:
  Fix Released
Status in qemu-kvm package in Debian:
  Fix Released

Bug description:
  qemu 1.1.0 waits for a keypress at boot.  Please don't ever do this.

  Try the attached test script.  When run it will initially print
  nothing, until you hit a key on the keyboard.

  Removing -nographic fixes the problem.

  Using virtio-scsi instead of virtio-blk fixes the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1021649/+subscriptions



[Qemu-devel] [Bug 1470536] Re: qemu-img incorrectly prints qemu-img: Host floppy pass-through is deprecated

2015-07-01 Thread Richard Jones
I sent a patch to qemu-devel which should fix this.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1470536

Title:
  qemu-img incorrectly prints qemu-img: Host floppy pass-through is
  deprecated

Status in QEMU:
  New

Bug description:
  qemu-img incorrectly prints this warning when you use /dev/fd/NN to
  pass in file descriptors.  A simple way to demonstrate this uses bash
  process substitution, so the following will only work if you are using
  bash as your shell:

  $ qemu-img info <( cat /dev/null )
  qemu-img: Host floppy pass-through is deprecated
  Support for it will be removed in a future release.
  qemu-img: Could not open '/dev/fd/63': Could not refresh total sector count: 
Illegal seek

  The root cause is a bug in block/raw-posix.c:floppy_probe_device()
  where it thinks anything starting with /dev/fd is a floppy drive,
  which is not the case here:

  http://git.qemu.org/?p=qemu.git;a=blob;f=block/raw-
  posix.c;h=cbe6574bf4da90a124436a40422dce3667da71e6;hb=HEAD#l2425

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1470536/+subscriptions



[Qemu-devel] [Bug 1470536] [NEW] qemu-img incorrectly prints qemu-img: Host floppy pass-through is deprecated

2015-07-01 Thread Richard Jones
Public bug reported:

qemu-img incorrectly prints this warning when you use /dev/fd/NN to
pass in file descriptors.  A simple way to demonstrate this uses bash
process substitution, so the following will only work if you are using
bash as your shell:

$ qemu-img info <( cat /dev/null )
qemu-img: Host floppy pass-through is deprecated
Support for it will be removed in a future release.
qemu-img: Could not open '/dev/fd/63': Could not refresh total sector count: 
Illegal seek

The root cause is a bug in block/raw-posix.c:floppy_probe_device() where
it thinks anything starting with /dev/fd is a floppy drive, which is not
the case here:

http://git.qemu.org/?p=qemu.git;a=blob;f=block/raw-
posix.c;h=cbe6574bf4da90a124436a40422dce3667da71e6;hb=HEAD#l2425
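
A minimal sketch of the overly broad prefix match and one possible
tightening (simplified; not the actual QEMU code or its eventual fix):

  #include <stdio.h>
  #include <string.h>

  static int looks_like_floppy(const char *filename)
  {
      return strncmp(filename, "/dev/fd", 7) == 0;           /* also matches /dev/fd/63 */
  }

  static int looks_like_floppy_tightened(const char *filename)
  {
      /* accept /dev/fd0, /dev/fd1, ... but not the /dev/fd/NN fd directory */
      return strncmp(filename, "/dev/fd", 7) == 0 && filename[7] != '/';
  }

  int main(void)
  {
      printf("%d %d\n", looks_like_floppy("/dev/fd/63"),
                        looks_like_floppy_tightened("/dev/fd/63"));   /* 1 0 */
      return 0;
  }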

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1470536

Title:
  qemu-img incorrectly prints qemu-img: Host floppy pass-through is
  deprecated

Status in QEMU:
  New

Bug description:
  qemu-img incorrectly prints this warning when you use /dev/fd/NN to
  pass in file descriptors.  A simple way to demonstrate this uses bash
  process substitution, so the following will only work if you are using
  bash as your shell:

  $ qemu-img info <( cat /dev/null )
  qemu-img: Host floppy pass-through is deprecated
  Support for it will be removed in a future release.
  qemu-img: Could not open '/dev/fd/63': Could not refresh total sector count: 
Illegal seek

  The root cause is a bug in block/raw-posix.c:floppy_probe_device()
  where it thinks anything starting with /dev/fd is a floppy drive,
  which is not the case here:

  http://git.qemu.org/?p=qemu.git;a=blob;f=block/raw-
  posix.c;h=cbe6574bf4da90a124436a40422dce3667da71e6;hb=HEAD#l2425

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1470536/+subscriptions



[Qemu-devel] [Bug 1462944] Re: vpc file causes qemu-img to consume lots of time and memory

2015-06-08 Thread Richard Jones
This slightly modified example takes about 7 seconds and 2 GB of heap:

$ /usr/bin/time ~/d/qemu/qemu-img info /mnt/scratch/afl13.img 
block-vpc: The header checksum of '/mnt/scratch/afl13.img' is incorrect.
qemu-img: Could not open '/mnt/scratch/afl13.img': block-vpc: 
free_data_block_offset points after the end of file. The image has been 
truncated.
1.84user 5.72system 0:07.59elapsed 99%CPU (0avgtext+0avgdata 
2045496maxresident)k
8inputs+0outputs (0major+507536minor)pagefaults 0swaps


** Attachment added: afl13.img
   
https://bugs.launchpad.net/qemu/+bug/1462944/+attachment/4411471/+files/afl13.img

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1462944

Title:
  vpc file causes qemu-img to consume lots of time and memory

Status in QEMU:
  New

Bug description:
  The attached vpc file causes 'qemu-img info' to consume 3 or 4 seconds
  of CPU time and 1.3 GB of heap, causing a minor denial of service.

  $ /usr/bin/time ~/d/qemu/qemu-img info afl12.img
  block-vpc: The header checksum of 'afl12.img' is incorrect.
  qemu-img: Could not open 'afl12.img': block-vpc: free_data_block_offset 
points after the end of file. The image has been truncated.
  1.19user 3.15system 0:04.35elapsed 99%CPU (0avgtext+0avgdata 
1324504maxresident)k
  0inputs+0outputs (0major+327314minor)pagefaults 0swaps

  The file was found using american-fuzzy-lop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1462944/+subscriptions



[Qemu-devel] [Bug 1462949] Re: vmdk files cause qemu-img to consume lots of time and memory

2015-06-08 Thread Richard Jones
Both files were found by using american-fuzzy-lop.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1462949

Title:
  vmdk files cause qemu-img to consume lots of time and memory

Status in QEMU:
  New

Bug description:
  The two attached files cause 'qemu-img info' to consume lots of time
  and memory.  Around 10-12 seconds of CPU time, and around 3-4 GB of
  heap.

  $ /usr/bin/time ~/d/qemu/qemu-img info afl10.img 
  qemu-img: Can't get size of device 'image': File too large
  0.40user 11.57system 0:12.03elapsed 99%CPU (0avgtext+0avgdata 
4197804maxresident)k
  56inputs+0outputs (0major+1045672minor)pagefaults 0swaps

  $ /usr/bin/time ~/d/qemu/qemu-img info afl11.img 
  image: afl11.img
  file format: vmdk
  virtual size: 12802T (14075741666803712 bytes)
  disk size: 4.0K
  cluster_size: 65536
  Format specific information:
  cid: 4294967295
  parent cid: 4294967295
  create type: monolithicSparse
  extents:
  [0]:
  virtual size: 14075741666803712
  filename: afl11.img
  cluster size: 65536
  format: 
  0.29user 9.10system 0:09.43elapsed 99%CPU (0avgtext+0avgdata 
3297360maxresident)k
  8inputs+0outputs (0major+820507minor)pagefaults 0swaps

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1462949/+subscriptions



[Qemu-devel] [Bug 1462949] Re: vmdk files cause qemu-img to consume lots of time and memory

2015-06-08 Thread Richard Jones
** Attachment added: afl11.img
   
https://bugs.launchpad.net/qemu/+bug/1462949/+attachment/4411476/+files/afl11.img

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1462949

Title:
  vmdk files cause qemu-img to consume lots of time and memory

Status in QEMU:
  New

Bug description:
  The two attached files cause 'qemu-img info' to consume lots of time
  and memory.  Around 10-12 seconds of CPU time, and around 3-4 GB of
  heap.

  $ /usr/bin/time ~/d/qemu/qemu-img info afl10.img 
  qemu-img: Can't get size of device 'image': File too large
  0.40user 11.57system 0:12.03elapsed 99%CPU (0avgtext+0avgdata 
4197804maxresident)k
  56inputs+0outputs (0major+1045672minor)pagefaults 0swaps

  $ /usr/bin/time ~/d/qemu/qemu-img info afl11.img 
  image: afl11.img
  file format: vmdk
  virtual size: 12802T (14075741666803712 bytes)
  disk size: 4.0K
  cluster_size: 65536
  Format specific information:
  cid: 4294967295
  parent cid: 4294967295
  create type: monolithicSparse
  extents:
  [0]:
  virtual size: 14075741666803712
  filename: afl11.img
  cluster size: 65536
  format: 
  0.29user 9.10system 0:09.43elapsed 99%CPU (0avgtext+0avgdata 
3297360maxresident)k
  8inputs+0outputs (0major+820507minor)pagefaults 0swaps

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1462949/+subscriptions



[Qemu-devel] [Bug 1462949] [NEW] vmdk files cause qemu-img to consume lots of time and memory

2015-06-08 Thread Richard Jones
Public bug reported:

The two attached files cause 'qemu-img info' to consume lots of time and
memory.  Around 10-12 seconds of CPU time, and around 3-4 GB of heap.

$ /usr/bin/time ~/d/qemu/qemu-img info afl10.img 
qemu-img: Can't get size of device 'image': File too large
0.40user 11.57system 0:12.03elapsed 99%CPU (0avgtext+0avgdata 
4197804maxresident)k
56inputs+0outputs (0major+1045672minor)pagefaults 0swaps

$ /usr/bin/time ~/d/qemu/qemu-img info afl11.img 
image: afl11.img
file format: vmdk
virtual size: 12802T (14075741666803712 bytes)
disk size: 4.0K
cluster_size: 65536
Format specific information:
cid: 4294967295
parent cid: 4294967295
create type: monolithicSparse
extents:
[0]:
virtual size: 14075741666803712
filename: afl11.img
cluster size: 65536
format: 
0.29user 9.10system 0:09.43elapsed 99%CPU (0avgtext+0avgdata 
3297360maxresident)k
8inputs+0outputs (0major+820507minor)pagefaults 0swaps

** Affects: qemu
 Importance: Undecided
 Status: New

** Attachment added: afl10.img
   https://bugs.launchpad.net/bugs/1462949/+attachment/4411475/+files/afl10.img

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1462949

Title:
  vmdk files cause qemu-img to consume lots of time and memory

Status in QEMU:
  New

Bug description:
  The two attached files cause 'qemu-img info' to consume lots of time
  and memory.  Around 10-12 seconds of CPU time, and around 3-4 GB of
  heap.

  $ /usr/bin/time ~/d/qemu/qemu-img info afl10.img 
  qemu-img: Can't get size of device 'image': File too large
  0.40user 11.57system 0:12.03elapsed 99%CPU (0avgtext+0avgdata 
4197804maxresident)k
  56inputs+0outputs (0major+1045672minor)pagefaults 0swaps

  $ /usr/bin/time ~/d/qemu/qemu-img info afl11.img 
  image: afl11.img
  file format: vmdk
  virtual size: 12802T (14075741666803712 bytes)
  disk size: 4.0K
  cluster_size: 65536
  Format specific information:
  cid: 4294967295
  parent cid: 4294967295
  create type: monolithicSparse
  extents:
  [0]:
  virtual size: 14075741666803712
  filename: afl11.img
  cluster size: 65536
  format: 
  0.29user 9.10system 0:09.43elapsed 99%CPU (0avgtext+0avgdata 
3297360maxresident)k
  8inputs+0outputs (0major+820507minor)pagefaults 0swaps

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1462949/+subscriptions



[Qemu-devel] [Bug 1462944] [NEW] vpc file causes qemu-img to consume lots of time and memory

2015-06-08 Thread Richard Jones
Public bug reported:

The attached vpc file causes 'qemu-img info' to consume 3 or 4 seconds
of CPU time and 1.3 GB of heap, causing a minor denial of service.

$ /usr/bin/time ~/d/qemu/qemu-img info afl12.img
block-vpc: The header checksum of 'afl12.img' is incorrect.
qemu-img: Could not open 'afl12.img': block-vpc: free_data_block_offset points 
after the end of file. The image has been truncated.
1.19user 3.15system 0:04.35elapsed 99%CPU (0avgtext+0avgdata 
1324504maxresident)k
0inputs+0outputs (0major+327314minor)pagefaults 0swaps

The file was found using american-fuzzy-lop.
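
The time and memory figures are consistent with the opener sizing an
in-memory block allocation table from untrusted header fields before
validating them against the file size.  The sketch below only illustrates
the shape of the problem (hypothetical field names and numbers, not the
QEMU vpc code):

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  struct vpc_header { uint32_t max_table_entries; };    /* read from the image */

  static uint32_t *load_bat(const struct vpc_header *h)
  {
      /* ~330 million 4-byte entries is roughly the 1.3 GB seen above. */
      size_t n = h->max_table_entries;
      uint32_t *bat = malloc(n * sizeof *bat);
      if (bat)
          memset(bat, 0xff, n * sizeof *bat);            /* touching it burns the CPU time */
      return bat;
  }

  int main(void)
  {
      struct vpc_header h = { .max_table_entries = 330u * 1000 * 1000 };
      free(load_bat(&h));
      return 0;
  }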

** Affects: qemu
 Importance: Undecided
 Status: New

** Attachment added: afl12.img
   https://bugs.launchpad.net/bugs/1462944/+attachment/4411469/+files/afl12.img

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1462944

Title:
  vpc file causes qemu-img to consume lots of time and memory

Status in QEMU:
  New

Bug description:
  The attached vpc file causes 'qemu-img info' to consume 3 or 4 seconds
  of CPU time and 1.3 GB of heap, causing a minor denial of service.

  $ /usr/bin/time ~/d/qemu/qemu-img info afl12.img
  block-vpc: The header checksum of 'afl12.img' is incorrect.
  qemu-img: Could not open 'afl12.img': block-vpc: free_data_block_offset 
points after the end of file. The image has been truncated.
  1.19user 3.15system 0:04.35elapsed 99%CPU (0avgtext+0avgdata 
1324504maxresident)k
  0inputs+0outputs (0major+327314minor)pagefaults 0swaps

  The file was found using american-fuzzy-lop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1462944/+subscriptions



[Qemu-devel] [Bug 1186984] Re: large -initrd can wrap around in memory causing memory corruption

2015-03-23 Thread Richard Jones
Although the error message is the same, the bug in comment 5 seems
completely different.  Please open a new bug about this issue, giving
*all* details - including the full qemu command line.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1186984

Title:
  large -initrd can wrap around in memory causing memory corruption

Status in QEMU:
  New

Bug description:
  We don't use large -initrd in libguestfs any more, but I noticed that
  a large -initrd file now crashes qemu spectacularly:

  $ ls -lh /tmp/kernel /tmp/initrd 
  -rw-r--r--. 1 rjones rjones 273M Jun  3 14:02 /tmp/initrd
  lrwxrwxrwx. 1 rjones rjones   35 Jun  3 14:02 /tmp/kernel -> /boot/vmlinuz-3.9.4-200.fc18.x86_64

  $ ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios \
  -kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img -serial stdio 
\
  -append console=ttyS0

  qemu crashes with one of several errors:

  PFLASH: Possible BUG - Write block confirm

  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x000b96cd

  If -enable-kvm is used:

  KVM: injection failed, MSI lost (Operation not permitted)

  In all cases the SDL display fills up with coloured blocks before the
  crash (see the attached screenshot).
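
  The "wrap around" in the title can be pictured with a small arithmetic
  sketch (addresses and field names hypothetical, not the QEMU loader):
  computing the load address as "top of usable RAM minus initrd size"
  without first checking that the initrd fits makes the subtraction wrap
  once the initrd is larger than guest RAM.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t initrd_max  = 128u << 20;   /* default guest RAM, hypothetical limit */
        uint32_t initrd_size = 273u << 20;   /* the 273M initrd above */
        uint32_t initrd_addr = (initrd_max - initrd_size) & ~4095u;

        printf("initrd would land at 0x%08x\n", initrd_addr);   /* wrapped address */
        return 0;
    }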

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1186984/+subscriptions



[Qemu-devel] [Bug 1383857] Re: aarch64: virtio disks don't show up in guest (neither blk nor scsi)

2014-12-01 Thread Richard Jones
Still happening with latest upstream kernel.  It seems to involve using
the -initrd option at all, with any cpio file, even a tiny one.  More
results posted here:

https://lists.cs.columbia.edu/pipermail/kvmarm/2014-December/012557.html

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1383857

Title:
  aarch64: virtio disks don't show up in guest (neither blk nor scsi)

Status in QEMU:
  New

Bug description:
  kernel-3.18.0-0.rc1.git0.1.rwmj5.fc22.aarch64 (3.18 rc1 + some hardware 
enablement)
  qemu from git today

  When I create a guest with virtio-scsi disks, they don't show up inside the 
guest.
  Literally after the virtio_mmio.ko and virtio_scsi.ko modules are loaded, 
there are
  no messages about disks, and of course nothing else works.

  Really long command line (generated by libvirt):

  HOME=/home/rjones USER=rjones LOGNAME=rjones QEMU_AUDIO_DRV=none
  TMPDIR=/home/rjones/d/libguestfs/tmp
  /home/rjones/d/qemu/aarch64-softmmu/qemu-system-aarch64 -name guestfs-
  oqv29um3jp03kpjf -S -machine virt,accel=tcg,usb=off -cpu cortex-a57 -m
  500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
  a5f1a15d-2bc7-46df-9974-1d1f643b2449 -nographic -no-user-config
  -nodefaults -chardev
  socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib
  /guestfs-oqv29um3jp03kpjf.monitor,server,nowait -mon
  chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-reboot -boot strict=on -kernel
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel -initrd
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/initrd -append
  panic=1 console=ttyAMA0 earlyprintk=pl011,0x900 ignore_loglevel
  efi-rtc=noprobe udevtimeout=6000 udev.event-timeout=6000
  no_timer_check lpj=50 acpi=off printk.time=1 cgroup_disable=memory
  root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color -device
  virtio-scsi-device,id=scsi0 -device virtio-serial-device,id=virtio-
  serial0 -usb -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/scratch.1,if=none,id
  =drive-scsi0-0-0-0,format=raw,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-
  scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/overlay2,if=none,id
  =drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-
  scsi0-0-1-0,id=scsi0-0-1-0 -serial
  unix:/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/console.sock
  -chardev
  
socket,id=charchannel0,path=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/guestfsd.sock
  -device virtserialport,bus=virtio-
  serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0
  -msg timestamp=on

  There are no kernel messages about the disks, they just are not seen.

  Worked with kernel 3.16 so I suspect this could be a kernel bug rather than a
  qemu bug, but I've no idea where to report those.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1383857/+subscriptions



[Qemu-devel] [Bug 1383857] Re: aarch64: virtio disks don't show up in guest (neither blk nor scsi)

2014-12-01 Thread Richard Jones
Finally found the problem, patch posted:
https://lists.gnu.org/archive/html/qemu-devel/2014-12/msg00034.html

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1383857

Title:
  aarch64: virtio disks don't show up in guest (neither blk nor scsi)

Status in QEMU:
  New

Bug description:
  kernel-3.18.0-0.rc1.git0.1.rwmj5.fc22.aarch64 (3.18 rc1 + some hardware 
enablement)
  qemu from git today

  When I create a guest with virtio-scsi disks, they don't show up inside the 
guest.
  Literally after the virtio_mmio.ko and virtio_scsi.ko modules are loaded, 
there are
  no messages about disks, and of course nothing else works.

  Really long command line (generated by libvirt):

  HOME=/home/rjones USER=rjones LOGNAME=rjones QEMU_AUDIO_DRV=none
  TMPDIR=/home/rjones/d/libguestfs/tmp
  /home/rjones/d/qemu/aarch64-softmmu/qemu-system-aarch64 -name guestfs-
  oqv29um3jp03kpjf -S -machine virt,accel=tcg,usb=off -cpu cortex-a57 -m
  500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
  a5f1a15d-2bc7-46df-9974-1d1f643b2449 -nographic -no-user-config
  -nodefaults -chardev
  socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib
  /guestfs-oqv29um3jp03kpjf.monitor,server,nowait -mon
  chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-reboot -boot strict=on -kernel
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel -initrd
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/initrd -append
  panic=1 console=ttyAMA0 earlyprintk=pl011,0x900 ignore_loglevel
  efi-rtc=noprobe udevtimeout=6000 udev.event-timeout=6000
  no_timer_check lpj=50 acpi=off printk.time=1 cgroup_disable=memory
  root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color -device
  virtio-scsi-device,id=scsi0 -device virtio-serial-device,id=virtio-
  serial0 -usb -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/scratch.1,if=none,id
  =drive-scsi0-0-0-0,format=raw,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-
  scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/overlay2,if=none,id
  =drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-
  scsi0-0-1-0,id=scsi0-0-1-0 -serial
  unix:/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/console.sock
  -chardev
  
socket,id=charchannel0,path=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/guestfsd.sock
  -device virtserialport,bus=virtio-
  serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0
  -msg timestamp=on

  There are no kernel messages about the disks, they just are not seen.

  Worked with kernel 3.16 so I suspect this could be a kernel bug rather than a
  qemu bug, but I've no idea where to report those.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1383857/+subscriptions



[Qemu-devel] [Bug 1383857] Re: aarch64: virtio disks don't show up in guest (neither blk nor scsi)

2014-10-22 Thread Richard Jones
Finally finished the git bisect (of the guest kernel, not qemu):

421520ba98290a73b35b7644e877a48f18e06004 is the first bad commit
commit 421520ba98290a73b35b7644e877a48f18e06004
Author: Yalin Wang yalin.w...@sonymobile.com
Date:   Fri Sep 26 03:07:09 2014 +0100

ARM: 8167/1: extend the reserved memory for initrd to be page aligned

This patch extends the start and end address of initrd to be page aligned,
so that we can free all memory including the un-page aligned head or tail
page of initrd, if the start or end address of initrd are not page
aligned, the page can't be freed by free_initrd_mem() function.

Signed-off-by: Yalin Wang yalin.w...@sonymobile.com
Acked-by: Catalin Marinas catalin.mari...@arm.com
Signed-off-by: Russell King rmk+ker...@arm.linux.org.uk

:04 04 23bd54d302533c173a4ae592969dd2868794e9ed
f1833b44ee7a389902f6f9d2fb55f4b89ba0de16 M  arch

Now might be a good time to mention that Fedora has very recently
switched to using 64k pages.

I'll continue this on the mailing list you suggested.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1383857

Title:
  aarch64: virtio disks don't show up in guest (neither blk nor scsi)

Status in QEMU:
  New

Bug description:
  kernel-3.18.0-0.rc1.git0.1.rwmj5.fc22.aarch64 (3.18 rc1 + some hardware 
enablement)
  qemu from git today

  When I create a guest with virtio-scsi disks, they don't show up inside the 
guest.
  Literally after the virtio_mmio.ko and virtio_scsi.ko modules are loaded, 
there are
  no messages about disks, and of course nothing else works.

  Really long command line (generated by libvirt):

  HOME=/home/rjones USER=rjones LOGNAME=rjones QEMU_AUDIO_DRV=none
  TMPDIR=/home/rjones/d/libguestfs/tmp
  /home/rjones/d/qemu/aarch64-softmmu/qemu-system-aarch64 -name guestfs-
  oqv29um3jp03kpjf -S -machine virt,accel=tcg,usb=off -cpu cortex-a57 -m
  500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
  a5f1a15d-2bc7-46df-9974-1d1f643b2449 -nographic -no-user-config
  -nodefaults -chardev
  socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib
  /guestfs-oqv29um3jp03kpjf.monitor,server,nowait -mon
  chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-reboot -boot strict=on -kernel
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel -initrd
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/initrd -append
  panic=1 console=ttyAMA0 earlyprintk=pl011,0x900 ignore_loglevel
  efi-rtc=noprobe udevtimeout=6000 udev.event-timeout=6000
  no_timer_check lpj=50 acpi=off printk.time=1 cgroup_disable=memory
  root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color -device
  virtio-scsi-device,id=scsi0 -device virtio-serial-device,id=virtio-
  serial0 -usb -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/scratch.1,if=none,id
  =drive-scsi0-0-0-0,format=raw,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-
  scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/overlay2,if=none,id
  =drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-
  scsi0-0-1-0,id=scsi0-0-1-0 -serial
  unix:/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/console.sock
  -chardev
  
socket,id=charchannel0,path=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/guestfsd.sock
  -device virtserialport,bus=virtio-
  serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0
  -msg timestamp=on

  There are no kernel messages about the disks, they just are not seen.

  Worked with kernel 3.16 so I suspect this could be a kernel bug rather than a
  qemu bug, but I've no idea where to report those.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1383857/+subscriptions



[Qemu-devel] [Bug 1383857] [NEW] aarch64: virtio-scsi disks don't show up in guest

2014-10-21 Thread Richard Jones
Public bug reported:

kernel-3.18.0-0.rc1.git0.1.rwmj5.fc22.aarch64 (3.18 rc1 + some hardware 
enablement)
qemu from git today

When I create a guest with virtio-scsi disks, they don't show up inside the 
guest.
Literally after the virtio_mmio.ko and virtio_scsi.ko modules are loaded, there 
are
no messages about disks, and of course nothing else works.

Really long command line (generated by libvirt):

HOME=/home/rjones USER=rjones LOGNAME=rjones QEMU_AUDIO_DRV=none
TMPDIR=/home/rjones/d/libguestfs/tmp /home/rjones/d/qemu/aarch64-softmmu
/qemu-system-aarch64 -name guestfs-oqv29um3jp03kpjf -S -machine
virt,accel=tcg,usb=off -cpu cortex-a57 -m 500 -realtime mlock=off -smp
1,sockets=1,cores=1,threads=1 -uuid a5f1a15d-2bc7-46df-9974-1d1f643b2449
-nographic -no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib
/guestfs-oqv29um3jp03kpjf.monitor,server,nowait -mon
chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew
-no-reboot -boot strict=on -kernel
/home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel -initrd
/home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/initrd -append
panic=1 console=ttyAMA0 earlyprintk=pl011,0x900 ignore_loglevel efi-
rtc=noprobe udevtimeout=6000 udev.event-timeout=6000 no_timer_check
lpj=50 acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb
selinux=0 guestfs_verbose=1 TERM=xterm-256color -device virtio-scsi-
device,id=scsi0 -device virtio-serial-device,id=virtio-serial0 -usb
-drive
file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/scratch.1,if=none,id
=drive-scsi0-0-0-0,format=raw,cache=unsafe -device scsi-
hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-
scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive
file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/overlay2,if=none,id
=drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-
hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-
scsi0-0-1-0,id=scsi0-0-1-0 -serial
unix:/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/console.sock
-chardev
socket,id=charchannel0,path=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/guestfsd.sock
-device virtserialport,bus=virtio-
serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0
-msg timestamp=on

There are no kernel messages about the disks, they just are not seen.

Worked with kernel 3.16 so I suspect this could be a kernel bug rather than a
qemu bug, but I've no idea where to report those.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1383857

Title:
  aarch64: virtio-scsi disks don't show up in guest

Status in QEMU:
  New

Bug description:
  kernel-3.18.0-0.rc1.git0.1.rwmj5.fc22.aarch64 (3.18 rc1 + some hardware 
enablement)
  qemu from git today

  When I create a guest with virtio-scsi disks, they don't show up inside the 
guest.
  Literally after the virtio_mmio.ko and virtio_scsi.ko modules are loaded, 
there are
  no messages about disks, and of course nothing else works.

  Really long command line (generated by libvirt):

  HOME=/home/rjones USER=rjones LOGNAME=rjones QEMU_AUDIO_DRV=none
  TMPDIR=/home/rjones/d/libguestfs/tmp
  /home/rjones/d/qemu/aarch64-softmmu/qemu-system-aarch64 -name guestfs-
  oqv29um3jp03kpjf -S -machine virt,accel=tcg,usb=off -cpu cortex-a57 -m
  500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
  a5f1a15d-2bc7-46df-9974-1d1f643b2449 -nographic -no-user-config
  -nodefaults -chardev
  socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib
  /guestfs-oqv29um3jp03kpjf.monitor,server,nowait -mon
  chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-reboot -boot strict=on -kernel
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel -initrd
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/initrd -append
  panic=1 console=ttyAMA0 earlyprintk=pl011,0x900 ignore_loglevel
  efi-rtc=noprobe udevtimeout=6000 udev.event-timeout=6000
  no_timer_check lpj=50 acpi=off printk.time=1 cgroup_disable=memory
  root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color -device
  virtio-scsi-device,id=scsi0 -device virtio-serial-device,id=virtio-
  serial0 -usb -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/scratch.1,if=none,id
  =drive-scsi0-0-0-0,format=raw,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-
  scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/overlay2,if=none,id
  =drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-
  scsi0-0-1-0,id=scsi0-0-1-0 -serial
  unix:/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/console.sock
  -chardev
  
socket,id=charchannel0,path=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/guestfsd.sock
  -device 

[Qemu-devel] [Bug 1383857] Re: aarch64: virtio-scsi disks don't show up in guest

2014-10-21 Thread Richard Jones
I have now also tried virtio-blk and that doesn't work either.  Same
symptoms: no log messages at all (even with ignore_loglevel), and no
disks appear.

> Yeah, regressed with this newer kernel sounds more like a kernel
> bug than a QEMU bug to me, especially if all the other virt devices
> still work.

It seems like no virtio drivers work, but it's very hard to tell when
your guest has no disks and therefore no operating system.  How can I
debug this further?

** Summary changed:

- aarch64: virtio-scsi disks don't show up in guest
+ aarch64: virtio disks don't show up in guest (neither blk nor scsi)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1383857

Title:
  aarch64: virtio disks don't show up in guest (neither blk nor scsi)

Status in QEMU:
  New

Bug description:
  kernel-3.18.0-0.rc1.git0.1.rwmj5.fc22.aarch64 (3.18 rc1 + some hardware 
enablement)
  qemu from git today

  When I create a guest with virtio-scsi disks, they don't show up inside the 
guest.
  Literally after the virtio_mmio.ko and virtio_scsi.ko modules are loaded, 
there are
  no messages about disks, and of course nothing else works.

  Really long command line (generated by libvirt):

  HOME=/home/rjones USER=rjones LOGNAME=rjones QEMU_AUDIO_DRV=none
  TMPDIR=/home/rjones/d/libguestfs/tmp
  /home/rjones/d/qemu/aarch64-softmmu/qemu-system-aarch64 -name guestfs-
  oqv29um3jp03kpjf -S -machine virt,accel=tcg,usb=off -cpu cortex-a57 -m
  500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
  a5f1a15d-2bc7-46df-9974-1d1f643b2449 -nographic -no-user-config
  -nodefaults -chardev
  socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib
  /guestfs-oqv29um3jp03kpjf.monitor,server,nowait -mon
  chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-reboot -boot strict=on -kernel
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel -initrd
  /home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/initrd -append
  panic=1 console=ttyAMA0 earlyprintk=pl011,0x900 ignore_loglevel
  efi-rtc=noprobe udevtimeout=6000 udev.event-timeout=6000
  no_timer_check lpj=50 acpi=off printk.time=1 cgroup_disable=memory
  root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color -device
  virtio-scsi-device,id=scsi0 -device virtio-serial-device,id=virtio-
  serial0 -usb -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/scratch.1,if=none,id
  =drive-scsi0-0-0-0,format=raw,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-
  scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive
  file=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/overlay2,if=none,id
  =drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-
  hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-
  scsi0-0-1-0,id=scsi0-0-1-0 -serial
  unix:/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/console.sock
  -chardev
  
socket,id=charchannel0,path=/home/rjones/d/libguestfs/tmp/libguestfs4GxfQ9/guestfsd.sock
  -device virtserialport,bus=virtio-
  serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0
  -msg timestamp=on

  There are no kernel messages about the disks, they just are not seen.

  Worked with kernel 3.16 so I suspect this could be a kernel bug rather than a
  qemu bug, but I've no idea where to report those.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1383857/+subscriptions



[Qemu-devel] [Bug 1378554] Re: qemu segfault in virtio_scsi_handle_cmd_req_submit on ARM 32 bit

2014-10-07 Thread Richard Jones
This is qemu from git today (2014-10-07).

The hardware is 32 bit ARM (ODROID-XU Samsung Exynos 5410).  It is
running Ubuntu 14.04 LTS as the main operating system, but I am NOT
using qemu from Ubuntu (which is ancient).

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1378554

Title:
  qemu segfault in virtio_scsi_handle_cmd_req_submit on ARM 32 bit

Status in QEMU:
  New

Bug description:
  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -enable-fips \
  -nodefaults \
  -display none \
  -M virt \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -rtc driftfix=slew \
  -global kvm-pit.lost_tick_policy=discard \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/appliance.d/kernel \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/appliance.d/initrd \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfseV4fT5/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfseV4fT5/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=6000 no_timer_check 
lpj=4464640 acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb 
selinux=0 guestfs_verbose=1 TERM=xterm-256color'

  The appliance boots, but segfaults as soon as the virtio-scsi driver
  is loaded:

  supermin: internal insmod virtio_scsi.ko
  [3.992963] scsi0 : Virtio SCSI HBA
  libguestfs: error: appliance closed the connection unexpectedly, see earlier 
error messages

  I captured a core dump:

  Core was generated by `/home/rjones/d/qemu/arm-softmmu/qemu-system-arm 
-global virtio-blk-device.scsi='.
  Program terminated with signal SIGSEGV, Segmentation fault.
  #0  0x000856bc in virtio_scsi_handle_cmd_req_submit (s=<optimized out>, 
  req=0x6c03acf8) at /home/rjones/d/qemu/hw/scsi/virtio-scsi.c:551
  551   bdrv_io_unplug(req->sreq->dev->conf.bs);
  (gdb) bt
  #0  0x000856bc in virtio_scsi_handle_cmd_req_submit (s=<optimized out>, 
  req=0x6c03acf8) at /home/rjones/d/qemu/hw/scsi/virtio-scsi.c:551
  #1  0x0008573a in virtio_scsi_handle_cmd (vdev=0xac4d68, vq=0xafe4b8)
  at /home/rjones/d/qemu/hw/scsi/virtio-scsi.c:573
  #2  0x0004fdbe in access_with_adjusted_size (addr=80, 
  value=value@entry=0x4443e6c0, size=size@entry=4, access_size_min=1, 
  access_size_max=<optimized out>, access_size_max@entry=0, 
  access=access@entry=0x4fee9 <memory_region_write_accessor>, 
  mr=mr@entry=0xa53fa8) at /home/rjones/d/qemu/memory.c:480
  #3  0x00054234 in memory_region_dispatch_write (size=4, data=2, 
  addr=<optimized out>, mr=0xa53fa8) at /home/rjones/d/qemu/memory.c:1117
  #4  io_mem_write (mr=0xa53fa8, addr=<optimized out>, val=val@entry=2, 
  size=size@entry=4) at /home/rjones/d/qemu/memory.c:1958
  #5  0x00021c88 in address_space_rw (as=0x3b96b4 <address_space_memory>, 
  addr=167788112, buf=buf@entry=0x4443e790 "\002", len=len@entry=4, 
  is_write=is_write@entry=true) at /home/rjones/d/qemu/exec.c:2135
  #6  0x00021de6 in address_space_write (len=4, buf=0x4443e790 "\002", 
  addr=<optimized out>, as=<optimized out>)
  at /home/rjones/d/qemu/exec.c:2202
  #7  subpage_write (opaque=<optimized out>, addr=<optimized out>, value=2, 
  len=4) at /home/rjones/d/qemu/exec.c:1811
  #8  0x0004fdbe in access_with_adjusted_size (addr=592, 
  value=value@entry=0x4443e820, size=size@entry=4, access_size_min=1, 
  access_size_max=<optimized out>, access_size_max@entry=0, 
  access=access@entry=0x4fee9 <memory_region_write_accessor>, 
  mr=mr@entry=0xaed980) at /home/rjones/d/qemu/memory.c:480
  #9  0x00054234 in memory_region_dispatch_write (size=4, data=2, 
  addr=<optimized out>, mr=0xaed980) at /home/rjones/d/qemu/memory.c:1117
  #10 io_mem_write (mr=0xaed980, addr=<optimized out>, val=2, size=size@entry=4)
  at /home/rjones/d/qemu/memory.c:1958
  #11 0x00057f24 in io_writel (retaddr=1121296542, Cannot access memory at 
address 0x0
  addr=<optimized out>, val=2, 
  physaddr=592, env=0x9d6c50) at /home/rjones/d/qemu/softmmu_template.h:381
  #12 helper_le_stl_mmu (env=0x9d6c50, addr=<optimized out>, val=2, 
  mmu_idx=<optimized out>, retaddr=1121296542)
  at /home/rjones/d/qemu/softmmu_template.h:419
  #13 0x42d5a0a0 in ?? ()
  Cannot access memory at address 0x0
  Backtrace stopped: previous frame identical to this frame (corrupt stack?)
  (gdb) print req
  $1 = (VirtIOSCSIReq *) 0x6c03acf8
  (gdb) print req->sreq
  

[Qemu-devel] [Bug 1378554] [NEW] qemu segfault in virtio_scsi_handle_cmd_req_submit on ARM 32 bit

2014-10-07 Thread Richard Jones
Public bug reported:

/home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
-global virtio-blk-device.scsi=off \
-nodefconfig \
-enable-fips \
-nodefaults \
-display none \
-M virt \
-machine accel=kvm:tcg \
-m 500 \
-no-reboot \
-rtc driftfix=slew \
-global kvm-pit.lost_tick_policy=discard \
-kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/appliance.d/kernel \
-initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/appliance.d/initrd \
-device virtio-scsi-device,id=scsi \
-drive 
file=/home/rjones/d/libguestfs/tmp/libguestfseV4fT5/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
-device scsi-hd,drive=hd0 \
-drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none
 \
-device scsi-hd,drive=appliance \
-device virtio-serial-device \
-serial stdio \
-chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfseV4fT5/guestfsd.sock,id=channel0
 \
-device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
-append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=6000 no_timer_check 
lpj=4464640 acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb 
selinux=0 guestfs_verbose=1 TERM=xterm-256color'

The appliance boots, but segfaults as soon as the virtio-scsi driver is
loaded:

supermin: internal insmod virtio_scsi.ko
[3.992963] scsi0 : Virtio SCSI HBA
libguestfs: error: appliance closed the connection unexpectedly, see earlier 
error messages

I captured a core dump:

Core was generated by `/home/rjones/d/qemu/arm-softmmu/qemu-system-arm -global 
virtio-blk-device.scsi='.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000856bc in virtio_scsi_handle_cmd_req_submit (s=<optimized out>, 
req=0x6c03acf8) at /home/rjones/d/qemu/hw/scsi/virtio-scsi.c:551
551 bdrv_io_unplug(req->sreq->dev->conf.bs);
(gdb) bt
#0  0x000856bc in virtio_scsi_handle_cmd_req_submit (s=<optimized out>, 
req=0x6c03acf8) at /home/rjones/d/qemu/hw/scsi/virtio-scsi.c:551
#1  0x0008573a in virtio_scsi_handle_cmd (vdev=0xac4d68, vq=0xafe4b8)
at /home/rjones/d/qemu/hw/scsi/virtio-scsi.c:573
#2  0x0004fdbe in access_with_adjusted_size (addr=80, 
value=value@entry=0x4443e6c0, size=size@entry=4, access_size_min=1, 
access_size_max=<optimized out>, access_size_max@entry=0, 
access=access@entry=0x4fee9 <memory_region_write_accessor>, 
mr=mr@entry=0xa53fa8) at /home/rjones/d/qemu/memory.c:480
#3  0x00054234 in memory_region_dispatch_write (size=4, data=2, 
addr=<optimized out>, mr=0xa53fa8) at /home/rjones/d/qemu/memory.c:1117
#4  io_mem_write (mr=0xa53fa8, addr=<optimized out>, val=val@entry=2, 
size=size@entry=4) at /home/rjones/d/qemu/memory.c:1958
#5  0x00021c88 in address_space_rw (as=0x3b96b4 <address_space_memory>, 
addr=167788112, buf=buf@entry=0x4443e790 "\002", len=len@entry=4, 
is_write=is_write@entry=true) at /home/rjones/d/qemu/exec.c:2135
#6  0x00021de6 in address_space_write (len=4, buf=0x4443e790 "\002", 
addr=<optimized out>, as=<optimized out>)
at /home/rjones/d/qemu/exec.c:2202
#7  subpage_write (opaque=<optimized out>, addr=<optimized out>, value=2, 
len=4) at /home/rjones/d/qemu/exec.c:1811
#8  0x0004fdbe in access_with_adjusted_size (addr=592, 
value=value@entry=0x4443e820, size=size@entry=4, access_size_min=1, 
access_size_max=<optimized out>, access_size_max@entry=0, 
access=access@entry=0x4fee9 <memory_region_write_accessor>, 
mr=mr@entry=0xaed980) at /home/rjones/d/qemu/memory.c:480
#9  0x00054234 in memory_region_dispatch_write (size=4, data=2, 
addr=<optimized out>, mr=0xaed980) at /home/rjones/d/qemu/memory.c:1117
#10 io_mem_write (mr=0xaed980, addr=<optimized out>, val=2, size=size@entry=4)
at /home/rjones/d/qemu/memory.c:1958
#11 0x00057f24 in io_writel (retaddr=1121296542, Cannot access memory at 
address 0x0
addr=<optimized out>, val=2, 
physaddr=592, env=0x9d6c50) at /home/rjones/d/qemu/softmmu_template.h:381
#12 helper_le_stl_mmu (env=0x9d6c50, addr=<optimized out>, val=2, 
mmu_idx=<optimized out>, retaddr=1121296542)
at /home/rjones/d/qemu/softmmu_template.h:419
#13 0x42d5a0a0 in ?? ()
Cannot access memory at address 0x0
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) print req
$1 = (VirtIOSCSIReq *) 0x6c03acf8
(gdb) print req->sreq
$2 = (SCSIRequest *) 0xc2c2c2c2
(gdb) print req->sreq->dev
Cannot access memory at address 0xc2c2c2c6
(gdb) print *req
$3 = {
  dev = 0x6c40, 
  vq = 0x6c40, 
  qsgl = {
    sg = 0x0, 
    nsg = 0, 
    nalloc = -1027423550, 
    size = 3267543746, 
    dev = 0xc2c2c2c2, 
    as = 0xc2c2c2c2
  }, 
  resp_iov = {
    iov = 0xc2c2c2c2, 
    niov = -1027423550, 
    nalloc = -1027423550, 
    size = 3267543746
  }, 
  elem = {
    index = 3267543746, 
    out_num = 3267543746, 
    in_num = 3267543746, 
    in_addr = {14033993530586874562 <repeats 1024 times>}, 
    out_addr = {14033993530586874562 <repeats 1024 

[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2014-07-29 Thread Richard Jones
FWIW I am able to reproduce this quite easily on aarch64 too.

My test program is:
https://github.com/libguestfs/libguestfs/blob/master/tests/qemu/qemu-speed-test.c

and you use it like this:
qemu-speed-test --virtio-serial-upload

(You can also test virtio-serial downloads and a few other things, but
those don't appear to deadlock)

Slowing down the upload, even just by enabling debugging, is sufficient
to make the problem go away most of the time.

I am testing with qemu from git
(f45c56e0166e86d3b309ae72f4cb8e3d0949c7ef).

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  New

Bug description:
  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):

  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails often (eg > 2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.

  This is what the output looks like when it doesn't fail:

  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
  '
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14710ms] appliance is up
  Guest launched OK.
  libguestfs: writing the data to the socket (size = 64)
  FS_LAUNCH_FLAG
  main_loop waiting for next request
  libguestfs: data written OK
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 

[Qemu-devel] [Bug 1269606] Re: curl driver (http) always says No such file or directory

2014-01-16 Thread Richard Jones
Turns out this is because of using snapshot=on.

Simple reproducer:

./x86_64-softmmu/qemu-system-x86_64 -drive
'file=http://127.0.0.1/~rjones/cirros-0.3.1-x86_64-disk.img,if=virtio,snapshot=on'
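
For comparison, a sketch of the same image opened without snapshot=on (untested here;
readonly=on is only there so qemu does not try to write to the http backend):

./x86_64-softmmu/qemu-system-x86_64 -drive
'file=http://127.0.0.1/~rjones/cirros-0.3.1-x86_64-disk.img,if=virtio,readonly=on'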

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1269606

Title:
  curl driver (http) always says No such file or directory

Status in QEMU:
  New

Bug description:
  I have a remote server, on which an http disk image definitely exists.
  However the qemu curl block driver cannot open it.  It always gives
  the bogus error:

  CURL: Error opening file: Connection time-out
  qemu-system-x86_64: -drive 
file=http://onuma/scratch/cirros-0.3.1-x86_64-disk.img,snapshot=on,cache=writeback,id=hd0,if=none:
 could not open disk image http://onuma/scratch/cirros-0.3.1-x86_64-disk.img: 
Could not open backing file: Could not open 
'http://onuma/scratch/cirros-0.3.1-x86_64-disk.img': No such file or directory

  On the server, I can see from the logs that qemu/curl is opening it:

  192.168.0.175 - - [15/Jan/2014:21:25:37 +0000] "HEAD 
/scratch/cirros-0.3.1-x86_64-disk.img HTTP/1.1" 200 - "-" "-"
  192.168.0.175 - - [15/Jan/2014:21:25:37 +0000] "GET 
/scratch/cirros-0.3.1-x86_64-disk.img HTTP/1.1" 206 264192 "-" "-"

  I am using qemu & curl from git today.

  curl: curl-7_34_0-177-gc7a76bb
  qemu: for-anthony-839-g1cf892c

  Here is the full command I am using:

  http_proxy= \
  LD_LIBRARY_PATH=/home/rjones/d/curl/lib/.libs \
  LIBGUESTFS_BACKEND=direct \
  LIBGUESTFS_HV=/home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64 \
  guestfish -v --ro -a http://onuma/scratch/cirros-0.3.1-x86_64-disk.img run

  The full output (includes qemu command itself) is:

  libguestfs: launch: program=guestfish
  libguestfs: launch: version=1.25.20fedora=21,release=1.fc21,libvirt
  libguestfs: launch: backend registered: unix
  libguestfs: launch: backend registered: uml
  libguestfs: launch: backend registered: libvirt
  libguestfs: launch: backend registered: direct
  libguestfs: launch: backend=direct
  libguestfs: launch: tmpdir=/tmp/libguestfsoQctgE
  libguestfs: launch: umask=0002
  libguestfs: launch: euid=1000
  libguestfs: command: run: /usr/bin/supermin-helper
  libguestfs: command: run: \ --verbose
  libguestfs: command: run: \ -f checksum
  libguestfs: command: run: \ --host-cpu x86_64
  libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
  supermin helper [0ms] whitelist = (not specified)
  supermin helper [0ms] host_cpu = x86_64
  supermin helper [0ms] dtb_wildcard = (not specified)
  supermin helper [0ms] inputs:
  supermin helper [0ms] inputs[0] = /usr/lib64/guestfs/supermin.d
  supermin helper [0ms] outputs:
  supermin helper [0ms] kernel = (none)
  supermin helper [0ms] dtb = (none)
  supermin helper [0ms] initrd = (none)
  supermin helper [0ms] appliance = (none)
  checking modpath /lib/modules/3.12.5-302.fc20.x86_64 is a directory
  checking modpath /lib/modules/3.11.9-200.fc19.x86_64 is a directory
  checking modpath /lib/modules/3.11.10-200.fc19.x86_64 is a directory
  checking modpath /lib/modules/3.11.4-201.fc19.x86_64 is a directory
  picked kernel vmlinuz-3.12.5-302.fc20.x86_64
  supermin helper [0ms] finished creating kernel
  supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d
  supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d/base.img.gz
  supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d/daemon.img.gz
  supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d/hostfiles
  supermin helper [00041ms] visiting /usr/lib64/guestfs/supermin.d/init.img
  supermin helper [00041ms] visiting 
/usr/lib64/guestfs/supermin.d/udev-rules.img
  supermin helper [00041ms] adding kernel modules
  supermin helper [00064ms] finished creating appliance
  libguestfs: checksum of existing appliance: 
2017df18eaeee7c45b87139c9bd80be2216d655a1513322c47f58a7a3668cd1f
  libguestfs: [00066ms] begin testing qemu features
  libguestfs: command: run: 
/home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64
  libguestfs: command: run: \ -display none
  libguestfs: command: run: \ -help
  libguestfs: command: run: 
/home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64
  libguestfs: command: run: \ -display none
  libguestfs: command: run: \ -version
  libguestfs: qemu version 1.7
  libguestfs: command: run: 
/home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64
  libguestfs: command: run: \ -display none
  libguestfs: command: run: \ -machine accel=kvm:tcg
  libguestfs: command: run: \ -device ?
  libguestfs: [00127ms] finished testing qemu features
  [00128ms] /home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64 \
  -global virtio-blk-pci.scsi=off \
  -nodefconfig \
  -enable-fips \
  -nodefaults \
  -display none \
  -machine accel=kvm:tcg \
  -cpu host,+kvmclock \
  -m 500 \
  -no-reboot \
  -no-hpet \
  -kernel 

[Qemu-devel] [Bug 1269606] [NEW] curl driver (http) always says No such file or directory

2014-01-15 Thread Richard Jones
Public bug reported:

I have a remote server, on which an http disk image definitely exists.
However the qemu curl block driver cannot open it.  It always gives the
bogus error:

CURL: Error opening file: Connection time-out
qemu-system-x86_64: -drive 
file=http://onuma/scratch/cirros-0.3.1-x86_64-disk.img,snapshot=on,cache=writeback,id=hd0,if=none:
 could not open disk image http://onuma/scratch/cirros-0.3.1-x86_64-disk.img: 
Could not open backing file: Could not open 
'http://onuma/scratch/cirros-0.3.1-x86_64-disk.img': No such file or directory

On the server, I can see from the logs that qemu/curl is opening it:

192.168.0.175 - - [15/Jan/2014:21:25:37 +0000] "HEAD 
/scratch/cirros-0.3.1-x86_64-disk.img HTTP/1.1" 200 - "-" "-"
192.168.0.175 - - [15/Jan/2014:21:25:37 +0000] "GET 
/scratch/cirros-0.3.1-x86_64-disk.img HTTP/1.1" 206 264192 "-" "-"

I am using qemu & curl from git today.

curl: curl-7_34_0-177-gc7a76bb
qemu: for-anthony-839-g1cf892c

Here is the full command I am using:

http_proxy= \
LD_LIBRARY_PATH=/home/rjones/d/curl/lib/.libs \
LIBGUESTFS_BACKEND=direct \
LIBGUESTFS_HV=/home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64 \
guestfish -v --ro -a http://onuma/scratch/cirros-0.3.1-x86_64-disk.img run

The full output (includes qemu command itself) is:

libguestfs: launch: program=guestfish
libguestfs: launch: version=1.25.20fedora=21,release=1.fc21,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsoQctgE
libguestfs: launch: umask=0002
libguestfs: launch: euid=1000
libguestfs: command: run: /usr/bin/supermin-helper
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ -f checksum
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
supermin helper [0ms] whitelist = (not specified)
supermin helper [0ms] host_cpu = x86_64
supermin helper [0ms] dtb_wildcard = (not specified)
supermin helper [0ms] inputs:
supermin helper [0ms] inputs[0] = /usr/lib64/guestfs/supermin.d
supermin helper [0ms] outputs:
supermin helper [0ms] kernel = (none)
supermin helper [0ms] dtb = (none)
supermin helper [0ms] initrd = (none)
supermin helper [0ms] appliance = (none)
checking modpath /lib/modules/3.12.5-302.fc20.x86_64 is a directory
checking modpath /lib/modules/3.11.9-200.fc19.x86_64 is a directory
checking modpath /lib/modules/3.11.10-200.fc19.x86_64 is a directory
checking modpath /lib/modules/3.11.4-201.fc19.x86_64 is a directory
picked kernel vmlinuz-3.12.5-302.fc20.x86_64
supermin helper [0ms] finished creating kernel
supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d
supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d/base.img.gz
supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d/daemon.img.gz
supermin helper [0ms] visiting /usr/lib64/guestfs/supermin.d/hostfiles
supermin helper [00041ms] visiting /usr/lib64/guestfs/supermin.d/init.img
supermin helper [00041ms] visiting /usr/lib64/guestfs/supermin.d/udev-rules.img
supermin helper [00041ms] adding kernel modules
supermin helper [00064ms] finished creating appliance
libguestfs: checksum of existing appliance: 
2017df18eaeee7c45b87139c9bd80be2216d655a1513322c47f58a7a3668cd1f
libguestfs: [00066ms] begin testing qemu features
libguestfs: command: run: /home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64
libguestfs: command: run: \ -display none
libguestfs: command: run: \ -help
libguestfs: command: run: /home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64
libguestfs: command: run: \ -display none
libguestfs: command: run: \ -version
libguestfs: qemu version 1.7
libguestfs: command: run: /home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64
libguestfs: command: run: \ -display none
libguestfs: command: run: \ -machine accel=kvm:tcg
libguestfs: command: run: \ -device ?
libguestfs: [00127ms] finished testing qemu features
[00128ms] /home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64 \
-global virtio-blk-pci.scsi=off \
-nodefconfig \
-enable-fips \
-nodefaults \
-display none \
-machine accel=kvm:tcg \
-cpu host,+kvmclock \
-m 500 \
-no-reboot \
-no-hpet \
-kernel /var/tmp/.guestfs-1000/kernel.19645 \
-initrd /var/tmp/.guestfs-1000/initrd.19645 \
-device virtio-scsi-pci,id=scsi \
-drive 
file=http://onuma/scratch/cirros-0.3.1-x86_64-disk.img,snapshot=on,cache=writeback,id=hd0,if=none
 \
-device scsi-hd,drive=hd0 \
-drive 
file=/var/tmp/.guestfs-1000/root.19645,snapshot=on,id=appliance,cache=unsafe,if=none
 \
-device scsi-hd,drive=appliance \
-device virtio-serial-pci \
-serial stdio \
-device sga \
-chardev socket,path=/tmp/libguestfsoQctgE/guestfsd.sock,id=channel0 \
-device 

[Qemu-devel] [Bug 1263747] [NEW] Arm64 fails to run a binary which runs OK on real hardware

2013-12-23 Thread Richard Jones
Public bug reported:

Note this is using the not-yet-upstream aarch64 patches from:

https://github.com/susematz/qemu/tree/aarch64-1.6

 

This binary:

http://oirase.annexia.org/tmp/test.gz

runs OK on real aarch64 hardware.  It is a statically linked Linux
binary which (if successful) will print "hello, world" and exit cleanly.

On qemu-arm64 userspace emulator it doesn't print anything and loops
forever using 100% CPU.

 

The following section is only if you wish to compile this binary from
source, otherwise you can ignore it.

First compile OCaml from:

https://github.com/ocaml/ocaml

(note you have to compile it on aarch64 or in qemu, it's not possible to
cross-compile).  You will have to apply the one-line patch from:

https://sympa.inria.fr/sympa/arc/caml-list/2013-12/msg00179.html

./configure
make -j1 world.opt

Then do:

echo 'print_endline "hello, world"' > test.ml
./boot/ocamlrun ./ocamlopt -I stdlib stdlib.cmxa test.ml -o test
./test

** Affects: qemu
 Importance: Undecided
 Status: New

** Description changed:

+ Note this is using the not-yet-upstream aarch64 patches from:
+ 
+ https://github.com/susematz/qemu/tree/aarch64-1.6
+ 
+  
+ 
  This binary:
  
  http://oirase.annexia.org/tmp/test.gz
  
  runs OK on real aarch64 hardware.  It is a statically linked Linux
  binary which (if successful) will print "hello, world" and exit cleanly.
  
  On qemu-arm64 userspace emulator it doesn't print anything and loops
  forever using 100% CPU.
  
- 
- The following section is only if you wish to compile this binary from source, 
otherwise you can ignore it.
+  
+ 
+ The following section is only if you wish to compile this binary from
+ source, otherwise you can ignore it.
  
  First compile OCaml from:
  
  https://github.com/ocaml/ocaml
  
  (note you have to compile it on aarch64 or in qemu, it's not possible to
  cross-compile).  You will have to apply the one-line patch from:
  
  https://sympa.inria.fr/sympa/arc/caml-list/2013-12/msg00179.html
  
- ./configure
- make -j1 world.opt
+ ./configure
+ make -j1 world.opt
  
  Then do:
  
- echo 'print_endline "hello, world"' > test.ml
- ./boot/ocamlrun ./ocamlopt -I stdlib stdlib.cmxa test.ml -o test
- ./test
+ echo 'print_endline "hello, world"' > test.ml
+ ./boot/ocamlrun ./ocamlopt -I stdlib stdlib.cmxa test.ml -o test
+ ./test

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1263747

Title:
  Arm64 fails to run a binary which runs OK on real hardware

Status in QEMU:
  New

Bug description:
  Note this is using the not-yet-upstream aarch64 patches from:

  https://github.com/susematz/qemu/tree/aarch64-1.6

   

  This binary:

  http://oirase.annexia.org/tmp/test.gz

  runs OK on real aarch64 hardware.  It is a statically linked Linux
  binary which (if successful) will print "hello, world" and exit
  cleanly.

  On qemu-arm64 userspace emulator it doesn't print anything and loops
  forever using 100% CPU.

   

  The following section is only if you wish to compile this binary from
  source, otherwise you can ignore it.

  First compile OCaml from:

  https://github.com/ocaml/ocaml

  (note you have to compile it on aarch64 or in qemu, it's not possible
  to cross-compile).  You will have to apply the one-line patch from:

  https://sympa.inria.fr/sympa/arc/caml-list/2013-12/msg00179.html

  ./configure
  make -j1 world.opt

  Then do:

  echo 'print_endline "hello, world"' > test.ml
  ./boot/ocamlrun ./ocamlopt -I stdlib stdlib.cmxa test.ml -o test
  ./test

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1263747/+subscriptions



[Qemu-devel] [Bug 1263747] Re: Arm64 fails to run a binary which runs OK on real hardware

2013-12-23 Thread Richard Jones
It's an Aarch64 binary so it won't run on 32 bit ARM at all.  However I
guess you meant does the equivalent program run on 32 bit ARM, and the
answer is yes, but that doesn't tell us much because OCaml uses separate
code generators for 32 and 64 bit ARM.

The binary is single threaded.

I enabled tracing on qemu and got this:

http://oirase.annexia.org/tmp/arm64-call-trace.txt
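
One way to produce a similar trace and disassembly is qemu's generic logging options
plus an aarch64 cross binutils; the exact flags used for the trace above may have
differed, and the objdump name is whatever your cross toolchain provides:

  qemu-arm64 -d in_asm,exec -D arm64-call-trace.txt ./test
  aarch64-linux-gnu-objdump -d ./test > arm64-disassembly.txt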

The associated disassembly of the binary is here:

http://oirase.annexia.org/tmp/arm64-disassembly.txt

I'm not exactly sure which instruction fails to be emulated properly,
but it looks like one of the ones in the caml_c_call function.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1263747

Title:
  Arm64 fails to run a binary which runs OK on real hardware

Status in QEMU:
  New

Bug description:
  Note this is using the not-yet-upstream aarch64 patches from:

  https://github.com/susematz/qemu/tree/aarch64-1.6

   

  This binary:

  http://oirase.annexia.org/tmp/test.gz

  runs OK on real aarch64 hardware.  It is a statically linked Linux
  binary which (if successful) will print "hello, world" and exit
  cleanly.

  On qemu-arm64 userspace emulator it doesn't print anything and loops
  forever using 100% CPU.

   

  The following section is only if you wish to compile this binary from
  source, otherwise you can ignore it.

  First compile OCaml from:

  https://github.com/ocaml/ocaml

  (note you have to compile it on aarch64 or in qemu, it's not possible
  to cross-compile).  You will have to apply the one-line patch from:

  https://sympa.inria.fr/sympa/arc/caml-list/2013-12/msg00179.html

  ./configure
  make -j1 world.opt

  Then do:

  echo 'print_endline "hello, world"' > test.ml
  ./boot/ocamlrun ./ocamlopt -I stdlib stdlib.cmxa test.ml -o test
  ./test

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1263747/+subscriptions



[Qemu-devel] [Bug 1263747] Re: Arm64 fails to run a binary which runs OK on real hardware

2013-12-23 Thread Richard Jones
One thing I notice is that caml_c_call is the only function that uses
the instruction ret xM (in all other places the code uses the default
ret with implicit x30).  Hmmm .. do we emulate ret xM?

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1263747

Title:
  Arm64 fails to run a binary which runs OK on real hardware

Status in QEMU:
  New

Bug description:
  Note this is using the not-yet-upstream aarch64 patches from:

  https://github.com/susematz/qemu/tree/aarch64-1.6

   

  This binary:

  http://oirase.annexia.org/tmp/test.gz

  runs OK on real aarch64 hardware.  It is a statically linked Linux
  binary which (if successful) will print "hello, world" and exit
  cleanly.

  On qemu-arm64 userspace emulator it doesn't print anything and loops
  forever using 100% CPU.

   

  The following section is only if you wish to compile this binary from
  source, otherwise you can ignore it.

  First compile OCaml from:

  https://github.com/ocaml/ocaml

  (note you have to compile it on aarch64 or in qemu, it's not possible
  to cross-compile).  You will have to apply the one-line patch from:

  https://sympa.inria.fr/sympa/arc/caml-list/2013-12/msg00179.html

  ./configure
  make -j1 world.opt

  Then do:

  echo 'print_endline "hello, world"' > test.ml
  ./boot/ocamlrun ./ocamlopt -I stdlib stdlib.cmxa test.ml -o test
  ./test

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1263747/+subscriptions



[Qemu-devel] [Bug 1263747] Re: Arm64 fails to run a binary which runs OK on real hardware

2013-12-23 Thread Richard Jones
The attached patch fixes the ret xM variant of ret.  I verified that it
fixes the bug.

** Patch added: "0001-arm64-Set-source-for-ret-instruction-correctly.patch"
   
https://bugs.launchpad.net/qemu/+bug/1263747/+attachment/3934836/+files/0001-arm64-Set-source-for-ret-instruction-correctly.patch

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1263747

Title:
  Arm64 fails to run a binary which runs OK on real hardware

Status in QEMU:
  New

Bug description:
  Note this is using the not-yet-upstream aarch64 patches from:

  https://github.com/susematz/qemu/tree/aarch64-1.6

   

  This binary:

  http://oirase.annexia.org/tmp/test.gz

  runs OK on real aarch64 hardware.  It is a statically linked Linux
  binary which (if successful) will print "hello, world" and exit
  cleanly.

  On qemu-arm64 userspace emulator it doesn't print anything and loops
  forever using 100% CPU.

   

  The following section is only if you wish to compile this binary from
  source, otherwise you can ignore it.

  First compile OCaml from:

  https://github.com/ocaml/ocaml

  (note you have to compile it on aarch64 or in qemu, it's not possible
  to cross-compile).  You will have to apply the one-line patch from:

  https://sympa.inria.fr/sympa/arc/caml-list/2013-12/msg00179.html

  ./configure
  make -j1 world.opt

  Then do:

  echo 'print_endline "hello, world"' > test.ml
  ./boot/ocamlrun ./ocamlopt -I stdlib stdlib.cmxa test.ml -o test
  ./test

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1263747/+subscriptions



[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2013-09-17 Thread Richard Jones
> What happens if you add a five second delay to libguestfs,
> before writing the response?

No change.  Still hangs in the same place.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  New

Bug description:
  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):

  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails often (eg > 2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.

  This is what the output looks like when it doesn't fail:

  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
  '
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14710ms] appliance is up
  Guest launched OK.
  libguestfs: writing the data to the socket (size = 64)
  FS_LAUNCH_FLAG
  main_loop waiting for next request
  libguestfs: data written OK
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  [... more virtio-mmio lines omitted ...]
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  guestfsd: main_loop: new request, len 0x3c
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x4
  

[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2013-09-17 Thread Richard Jones
> There's at least three cases here I guess (KVM + eventfd, KVM without
> eventfd (enforceable eg. with the ioeventfd property for virtio
> devices), and TCG). We're probably talking about the third case.

To clarify on this point: I have reproduced this bug on two different ARM
machines, one using KVM and one using TCG.

In both cases they are ./configure'd without any special ioeventfd-related
options, which appears to mean CONFIG_EVENTFD=y (in both cases).

In both cases I'm using a single vCPU.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  New

Bug description:
  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):

  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails often (eg > 2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.

  This is what the output looks like when it doesn't fail:

  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
  '
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14710ms] appliance is up
  Guest launched OK.
  libguestfs: writing the data to the socket (size = 64)
  FS_LAUNCH_FLAG
  main_loop waiting for next request
  libguestfs: data written OK
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  [... more 

[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2013-09-16 Thread Richard Jones
> Is this a socket that libguestfs pre-creates on the host-side?

Yes it is:
https://github.com/libguestfs/libguestfs/blob/master/src/launch-direct.c#L208

You mention a scenario that might cause this.  But that appears to be
when the socket is opened.  Note that the guest did send 4 bytes
successfully (received OK at the host).  The lost write occurs when the
host next tries to send a message back to the guest.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  New

Bug description:
  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):

  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails often (eg > 2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.

  This is what the output looks like when it doesn't fail:

  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
  '
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14710ms] appliance is up
  Guest launched OK.
  libguestfs: writing the data to the socket (size = 64)
  FS_LAUNCH_FLAG
  main_loop waiting for next request
  libguestfs: data written OK
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  [... more virtio-mmio lines omitted ...]
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  

[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2013-09-14 Thread Richard Jones
I can reproduce this bug on a second ARM machine which doesn't have KVM
(ie. using TCG).  Note it's still linked to virtio-mmio.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  New

Bug description:
  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):

  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails often (eg > 2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.

  This is what the output looks like when it doesn't fail:

  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
  '
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14710ms] appliance is up
  Guest launched OK.
  libguestfs: writing the data to the socket (size = 64)
  FS_LAUNCH_FLAG
  main_loop waiting for next request
  libguestfs: data written OK
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  [... more virtio-mmio lines omitted ...]
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  guestfsd: main_loop: new request, len 0x3c
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x4
  : 

[Qemu-devel] [Bug 1224444] [NEW] virtio-serial loses writes when used over virtio-mmio

2013-09-12 Thread Richard Jones
Public bug reported:

virtio-serial appears to lose writes, but only when used on top of
virtio-mmio.  The scenario is this:

/home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
-global virtio-blk-device.scsi=off \
-nodefconfig \
-nodefaults \
-nographic \
-M vexpress-a15 \
-machine accel=kvm:tcg \
-m 500 \
-no-reboot \
-kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
-dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
-initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
-device virtio-scsi-device,id=scsi \
-drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
-device scsi-hd,drive=hd0 \
-drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
-device scsi-hd,drive=appliance \
-device virtio-serial-device \
-serial stdio \
-chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
-device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
-append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

After the guest starts up, a daemon writes 4 bytes to a virtio-serial
socket.  The host side reads these 4 bytes correctly and writes a 64
byte message.  The guest never sees this message.

I enabled virtio-mmio debugging, and this is what is printed (## = my
comment):

## guest opens the socket:
trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
opened the socket, sock = 3
udevadm settle
## guest writes 4 bytes to the socket:
virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
virtio_mmio: virtio_mmio setting IRQ 1
virtio_mmio: virtio_mmio_read offset 0x60
virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
virtio_mmio: virtio_mmio setting IRQ 0
sent magic GUESTFS_LAUNCH_FLAG
## host reads 4 bytes successfully:
main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
libguestfs: [14605ms] appliance is up
Guest launched OK.
## host writes 64 bytes to socket:
libguestfs: writing the data to the socket (size = 64)
waiting for next request
libguestfs: data written OK
## hangs here forever with guest in read() call never receiving any data

I am using qemu from git today (2d1fe1873a984).

Some notes:

- It's not 100% reproducible.  Sometimes everything works fine, although it 
fails often (eg > 2/3rds of the time).
- KVM is being used.
- We've long used virtio-serial on x86 and have never seen anything like this.

** Affects: qemu
 Importance: Undecided
 Status: New

** Description changed:

  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:
  
  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
- -global virtio-blk-device.scsi=off \
- -nodefconfig \
- -nodefaults \
- -nographic \
- -M vexpress-a15 \
- -machine accel=kvm:tcg \
- -m 500 \
- -no-reboot \
- -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
- -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
- -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
- -device virtio-scsi-device,id=scsi \
- -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
- -device scsi-hd,drive=hd0 \
- -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
- -device scsi-hd,drive=appliance \
- -device virtio-serial-device \
- -serial stdio \
- -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
- -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
- -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'
+ -global virtio-blk-device.scsi=off \
+ -nodefconfig \
+ -nodefaults \
+ -nographic \
+ -M vexpress-a15 \
+ -machine accel=kvm:tcg \
+ -m 500 \
+ -no-reboot \
+ -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
+ -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
+ -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
+ -device virtio-scsi-device,id=scsi \
+ -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
+ -device scsi-hd,drive=hd0 \
+ -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
+ -device scsi-hd,drive=appliance \
+ -device 

[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2013-09-12 Thread Richard Jones
** Description changed:

  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:
  
  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'
  
  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.
  
  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):
  
  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data
  
  I am using qemu from git today (2d1fe1873a984).
  
  Some notes:
  
  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails often (eg  2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.
+ 
+ This is what the output looks like when it doesn't fail:
+ 
+ trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
+ '
+ virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
+ opened the socket, sock = 3
+ udevadm settle
+ virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
+ virtio_mmio: virtio_mmio setting IRQ 1
+ virtio_mmio: virtio_mmio_read offset 0x60
+ virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
+ virtio_mmio: virtio_mmio setting IRQ 0
+ sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
+ libguestfs: [14710ms] appliance is up
+ Guest launched OK.
+ libguestfs: writing the data to the socket (size = 64)
+ FS_LAUNCH_FLAG
+ main_loop waiting for next request
+ libguestfs: data written OK
+ virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
+ virtio_mmio: virtio_mmio setting IRQ 1
+ virtio_mmio: virtio_mmio setting IRQ 1
+ virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
+ virtio_mmio: virtio_mmio_read offset 0x60
+ virtio_mmio: virtio_mmio setting IRQ 1
+ virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
+ virtio_mmio: virtio_mmio setting IRQ 0
+ virtio_mmio: virtio_mmio_read offset 0x60
+ virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
+ virtio_mmio: virtio_mmio setting IRQ 0
+ virtio_mmio: virtio_mmio_read offset 0x60
+ virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
+ [... more virtio-mmio lines omitted ...]
+ virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
+ virtio_mmio: virtio_mmio setting IRQ 1
+ virtio_mmio: virtio_mmio_read offset 0x60
+ virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
+ virtio_mmio: virtio_mmio setting IRQ 0
+ guestfsd: main_loop: new request, len 0x3c
+ virtio_mmio: virtio_mmio_write offset 0x50 value 0x4
+ : 20 00 f5 f5 00 00 00 04 00 00 00 d2 00 00 00 00 | 
...|virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
+ virtio_mmio: virtio_mmio setting IRQ 1
+ 
+ virtio_mmio: virtio_mmio_read offset 0x60
+ virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
+ virtio_mmio: virtio_mmio setting IRQ 0
+ 0010: 00 12 34 00 00 00 00 00 00 00 00 

[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2013-09-12 Thread Richard Jones
Recall this bug only happens intermittently.  This is an strace -f of
qemu when it happens to work.

Notes:

 - fd = 6 is the Unix domain socket
 - there are an expected number of recvmsg & writes, all with the correct sizes
 - this time qemu adds the socket to ppoll

** Attachment added: working strace -f
   
https://bugs.launchpad.net/qemu/+bug/1224444/+attachment/3817814/+files/strace-working.txt

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  New

Bug description:
  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):

  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails often (eg  2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.

  This is what the output looks like when it doesn't fail:

  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
  '
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14710ms] appliance is up
  Guest launched OK.
  libguestfs: writing the data to the socket (size = 64)
  FS_LAUNCH_FLAG
  main_loop waiting for next request
  libguestfs: data written OK
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  [... more virtio-mmio lines omitted ...]
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: 

[Qemu-devel] [Bug 1224444] Re: virtio-serial loses writes when used over virtio-mmio

2013-09-12 Thread Richard Jones
strace -f of qemu when it fails.

Notes:

 - fd = 6 is the Unix domain socket connected to virtio-serial
 - only one 4 byte write occurs to this socket (expected guest -> host 
communication)
 - the socket isn't read at all (even though the library on the other side has 
written)
 - the socket is never added to any poll/ppoll syscall, so it's no wonder that 
qemu never sees any data on the socket
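
A quick way to compare this trace with the earlier strace-working.txt is to
count the ppoll entries that mention the socket fd.  This is only a sketch and
assumes strace's usual "{fd=N, events=...}" formatting for pollfd arrays:

  grep -c '{fd=6,' strace.txt            # failing run: fd 6 never appears in a poll set
  grep -c '{fd=6,' strace-working.txt    # working run: fd 6 is polled regularly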


** Attachment added: strace -f -o /tmp/strace.txt qemu-system-arm [...]
   
https://bugs.launchpad.net/qemu/+bug/1224444/+attachment/3817795/+files/strace.txt

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1224444

Title:
  virtio-serial loses writes when used over virtio-mmio

Status in QEMU:
  New

Bug description:
  virtio-serial appears to lose writes, but only when used on top of
  virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
  -global virtio-blk-device.scsi=off \
  -nodefconfig \
  -nodefaults \
  -nographic \
  -M vexpress-a15 \
  -machine accel=kvm:tcg \
  -m 500 \
  -no-reboot \
  -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
  -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
  -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
  -device virtio-scsi-device,id=scsi \
  -drive 
file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none
 \
  -device scsi-hd,drive=hd0 \
  -drive 
file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none
 \
  -device scsi-hd,drive=appliance \
  -device virtio-serial-device \
  -serial stdio \
  -chardev 
socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0
 \
  -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
  -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check 
acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 
guestfs_verbose=1 TERM=xterm-256color'

  After the guest starts up, a daemon writes 4 bytes to a virtio-serial
  socket.  The host side reads these 4 bytes correctly and writes a 64
  byte message.  The guest never sees this message.

  I enabled virtio-mmio debugging, and this is what is printed (## = my
  comment):

  ## guest opens the socket:
  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG
  ## host reads 4 bytes successfully:
  main_loop libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.
  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK
  ## hangs here forever with guest in read() call never receiving any data

  I am using qemu from git today (2d1fe1873a984).

  Some notes:

  - It's not 100% reproducible.  Sometimes everything works fine, although it 
fails often (eg  2/3rds of the time).
  - KVM is being used.
  - We've long used virtio-serial on x86 and have never seen anything like this.

  This is what the output looks like when it doesn't fail:

  trying to open virtio-serial channel 
'/dev/virtio-ports/org.libguestfs.channel.0
  '
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTlibguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14710ms] appliance is up
  Guest launched OK.
  libguestfs: writing the data to the socket (size = 64)
  FS_LAUNCH_FLAG
  main_loop waiting for next request
  libguestfs: data written OK
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x2
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x0
  virtio_mmio: virtio_mmio setting IRQ 0
  virtio_mmio: virtio_mmio_read offset 0x60
  

[Qemu-devel] [Bug 1219207] [NEW] QMP (32 bit only) segfaults in query-tpm-types when compiled with --enable-tpm

2013-08-31 Thread Richard Jones
Public bug reported:

NB: This bug ONLY happens on i686.  When qemu is compiled for x86-64,
the bug does NOT happen.

$ ./configure --enable-tpm
$ make
$ (sleep 5; printf 
'{"execute":"qmp_capabilities"}\n{"execute":"query-tpm-types"}\n') | 
./i386-softmmu/qemu-system-i386 -S -nodefaults -nographic -M none -qmp stdio
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 6, "major": 1}, "package": 
""}, "capabilities": []}}
{"return": {}}
Segmentation fault (core dumped)

The stack trace is:

#0  output_type_enum (v=0xb9938228, obj=0x5, 
strings=0xb77f0320 TpmType_lookup, kind=0xb767f1d4 TpmType, name=0x0, 
errp=0xbfec4628) at qapi/qapi-visit-core.c:306
#1  0xb762b3b5 in visit_type_enum (v=v@entry=0xb9938228, obj=0x5, 
strings=0xb77f0320 TpmType_lookup, kind=kind@entry=0xb767f1d4 TpmType, 
name=name@entry=0x0, errp=errp@entry=0xbfec4628)
at qapi/qapi-visit-core.c:114
#2  0xb74a9ef4 in visit_type_TpmType (errp=0xbfec4628, name=0x0, 
obj=optimized out, m=0xb9938228) at qapi-visit.c:5220
#3  visit_type_TpmTypeList (m=0xb9938228, obj=obj@entry=0xbfec4678, 
name=name@entry=0xb76545a6 unused, errp=errp@entry=0xbfec4674)
at qapi-visit.c:5206
#4  0xb74c403e in qmp_marshal_output_query_tpm_types (errp=0xbfec4674, 
ret_out=0xbfec46d8, ret_in=0xb993f490) at qmp-marshal.c:3795
#5  qmp_marshal_input_query_tpm_types (mon=0xb9937098, qdict=0xb99379a0, 
ret=0xbfec46d8) at qmp-marshal.c:3817
#6  0xb7581d7a in qmp_call_cmd (cmd=optimized out, params=0xb99379a0, 
mon=0xb9937098) at /home/rjones/d/qemu/monitor.c:4644
#7  handle_qmp_command (parser=0xb99370ec, tokens=0xb9941438)
at /home/rjones/d/qemu/monitor.c:4710
#8  0xb7631d8f in json_message_process_token (lexer=0xb99370f0, 
token=0xb993f3a8, type=JSON_OPERATOR, x=29, y=1)
at qobject/json-streamer.c:87
#9  0xb764579b in json_lexer_feed_char (lexer=lexer@entry=0xb99370f0, 
ch=optimized out, flush=flush@entry=false) at qobject/json-lexer.c:303
#10 0xb76458c8 in json_lexer_feed (lexer=lexer@entry=0xb99370f0, 
buffer=buffer@entry=0xbfec486c }\243\353S\351\364b\267/\327ⵀ\025}\267 
\367b\267\315\372\223\271\065\023j\267\002, size=size@entry=1)
at qobject/json-lexer.c:356
#11 0xb7631fab in json_message_parser_feed (parser=0xb99370ec, 
buffer=buffer@entry=0xbfec486c }\243\353S\351\364b\267/\327ⵀ\025}\267 
\367b\267\315\372\223\271\065\023j\267\002, size=size@entry=1)
at qobject/json-streamer.c:110
#12 0xb75803eb in monitor_control_read (opaque=0xb9937098, 
buf=0xbfec486c }\243\353S\351\364b\267/\327ⵀ\025}\267 
\367b\267\315\372\223\271\065\023j\267\002, size=1) at 
/home/rjones/d/qemu/monitor.c:4731
#13 0xb74b191e in qemu_chr_be_write (len=optimized out, 
buf=0xbfec486c }\243\353S\351\364b\267/\327ⵀ\025}\267 
\367b\267\315\372\223\271\065\023j\267\002, s=0xb9935800) at qemu-char.c:165
#14 fd_chr_read (chan=0xb9935870, cond=(G_IO_IN | G_IO_HUP), opaque=0xb9935800)
at qemu-char.c:841
#15 0xb71f6876 in g_io_unix_dispatch () from /usr/lib/libglib-2.0.so.0
#16 0xb71b0286 in g_main_context_dispatch () from /usr/lib/libglib-2.0.so.0
#17 0xb747a13e in glib_pollfds_poll () at main-loop.c:189
#18 os_host_main_loop_wait (timeout=optimized out) at main-loop.c:234
#19 main_loop_wait (nonblocking=1) at main-loop.c:484
#20 0xb7309f11 in main_loop () at vl.c:2090
#21 main (argc=8, argv=0xbfec5c14, envp=0xbfec5c38) at vl.c:4435
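
In case it helps anyone reproduce this on i686, a core-dump based way to
capture the same backtrace (a sketch; it assumes core files land in the
current directory as "core", which depends on the host's core_pattern):

  ulimit -c unlimited
  (sleep 5; printf '{"execute":"qmp_capabilities"}\n{"execute":"query-tpm-types"}\n') | \
    ./i386-softmmu/qemu-system-i386 -S -nodefaults -nographic -M none -qmp stdio
  gdb -batch -ex bt ./i386-softmmu/qemu-system-i386 core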

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1219207

Title:
  QMP (32 bit only) segfaults in query-tpm-types when compiled with
  --enable-tpm

Status in QEMU:
  New

Bug description:
  NB: This bug ONLY happens on i686.  When qemu is compiled for x86-64,
  the bug does NOT happen.

  $ ./configure --enable-tpm
  $ make
  $ (sleep 5; printf 
'{"execute":"qmp_capabilities"}\n{"execute":"query-tpm-types"}\n') | 
./i386-softmmu/qemu-system-i386 -S -nodefaults -nographic -M none -qmp stdio
  {"QMP": {"version": {"qemu": {"micro": 50, "minor": 6, "major": 1}, 
"package": ""}, "capabilities": []}}
  {"return": {}}
  Segmentation fault (core dumped)

  The stack trace is:

  #0  output_type_enum (v=0xb9938228, obj=0x5, 
  strings=0xb77f0320 TpmType_lookup, kind=0xb767f1d4 TpmType, name=0x0, 
  errp=0xbfec4628) at qapi/qapi-visit-core.c:306
  #1  0xb762b3b5 in visit_type_enum (v=v@entry=0xb9938228, obj=0x5, 
  strings=0xb77f0320 TpmType_lookup, kind=kind@entry=0xb767f1d4 
TpmType, 
  name=name@entry=0x0, errp=errp@entry=0xbfec4628)
  at qapi/qapi-visit-core.c:114
  #2  0xb74a9ef4 in visit_type_TpmType (errp=0xbfec4628, name=0x0, 
  obj=optimized out, m=0xb9938228) at qapi-visit.c:5220
  #3  visit_type_TpmTypeList (m=0xb9938228, obj=obj@entry=0xbfec4678, 
  name=name@entry=0xb76545a6 unused, errp=errp@entry=0xbfec4674)
  at qapi-visit.c:5206
  #4  0xb74c403e in qmp_marshal_output_query_tpm_types (errp=0xbfec4674, 
  ret_out=0xbfec46d8, 

[Qemu-devel] [Bug 1218098] [NEW] qemu-system-ppc64 segfaults in helper_ldl_mmu

2013-08-28 Thread Richard Jones
Public bug reported:

Download a Fedora 19 ISO from:
http://mirrors.kernel.org/fedora-secondary/releases/19/Fedora/ppc64/iso/

Compile qemu from git (I'm using 401c227b0a1134245ec61c6c5a9997cfc963c8e4
from today).

Run qemu-system-ppc64 like this:

ppc64-softmmu/qemu-system-ppc64 -M pseries -m 4096 -hda
/dev/fedora/f20ppc64 -cdrom /tmp/Fedora-19-ppc64-DVD.iso -netdev
user,id=usernet,net=169.254.0.0/16 -device virtio-net-pci,netdev=usernet

Guest gets to yaboot.  If you hit return, qemu segfaults:

Program received signal SIGABRT, Aborted.
0x7041fa19 in __GI_raise (sig=sig@entry=6)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb) t a a bt

Thread 4 (Thread 0x7fff6eef7700 (LWP 7553)):
#0  sem_timedwait ()
at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_timedwait.S:101
#1  0x559a5897 in qemu_sem_timedwait (sem=sem@entry=0x5631e788,
ms=ms@entry=1) at util/qemu-thread-posix.c:238
#2  0x5577e54c in worker_thread (opaque=0x5631e6f0)
at thread-pool.c:97
#3  0x7625ec53 in start_thread (arg=0x7fff6eef7700)
at pthread_create.c:308
#4  0x704df13d in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 3 (Thread 0x7fff6e605700 (LWP 7547)):
#0  0x7041fa19 in __GI_raise (sig=sig@entry=6)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x70421128 in __GI_abort () at abort.c:90
#2  0x5583ea33 in helper_ldl_mmu (env=0x77fd7140, addr=1572864,
mmu_idx=1) at /home/rjones/d/qemu/include/exec/softmmu_template.h:153
#3  0x7fffab0819d8 in code_gen_buffer ()
#4  0x557aa7ae in cpu_tb_exec (tb_ptr=optimized out,
cpu=0x77fd7010) at /home/rjones/d/qemu/cpu-exec.c:56
#5  cpu_ppc_exec (env=env@entry=0x77fd7140)
at /home/rjones/d/qemu/cpu-exec.c:631
#6  0x557abc35 in tcg_cpu_exec (env=0x77fd7140)
at /home/rjones/d/qemu/cpus.c:1193
#7  tcg_exec_all () at /home/rjones/d/qemu/cpus.c:1226
#8  qemu_tcg_cpu_thread_fn (arg=optimized out)
at /home/rjones/d/qemu/cpus.c:885
#9  0x7625ec53 in start_thread (arg=0x7fff6e605700)
at pthread_create.c:308
#10 0x704df13d in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 1 (Thread 0x77fa9a40 (LWP 7542)):
#0  0x704d4c2f in __GI_ppoll (fds=0x56483210, nfds=4,
timeout=optimized out, timeout@entry=0x7fffd940,
sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:56
#1  0x55762db9 in ppoll (__ss=0x0, __timeout=0x7fffd940,
__nfds=optimized out, __fds=optimized out)
at /usr/include/bits/poll2.h:77
#2  qemu_poll_ns (fds=optimized out, nfds=optimized out,
timeout=timeout@entry=951497) at qemu-timer.c:276
#3  0x5572b58c in os_host_main_loop_wait (timeout=951497)
at main-loop.c:228
#4  main_loop_wait (nonblocking=optimized out) at main-loop.c:484
#5  0x555ef9d8 in main_loop () at vl.c:2090
#6  main (argc=optimized out, argv=optimized out, envp=optimized out)
at vl.c:4435

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1218098

Title:
  qemu-system-ppc64 segfaults in helper_ldl_mmu

Status in QEMU:
  New

Bug description:
  Download a Fedora 19 ISO from:
  http://mirrors.kernel.org/fedora-secondary/releases/19/Fedora/ppc64/iso/

  Compile qemu from git (I'm using 401c227b0a1134245ec61c6c5a9997cfc963c8e4
  from today).

  Run qemu-system-ppc64 like this:

  ppc64-softmmu/qemu-system-ppc64 -M pseries -m 4096 -hda
  /dev/fedora/f20ppc64 -cdrom /tmp/Fedora-19-ppc64-DVD.iso -netdev
  user,id=usernet,net=169.254.0.0/16 -device virtio-net-
  pci,netdev=usernet

  Guest gets to yaboot.  If you hit return, qemu segfaults:

  Program received signal SIGABRT, Aborted.
  0x7041fa19 in __GI_raise (sig=sig@entry=6)
  at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
  56  return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
  (gdb) t a a bt

  Thread 4 (Thread 0x7fff6eef7700 (LWP 7553)):
  #0  sem_timedwait ()
  at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_timedwait.S:101
  #1  0x559a5897 in qemu_sem_timedwait (sem=sem@entry=0x5631e788,
  ms=ms@entry=1) at util/qemu-thread-posix.c:238
  #2  0x5577e54c in worker_thread (opaque=0x5631e6f0)
  at thread-pool.c:97
  #3  0x7625ec53 in start_thread (arg=0x7fff6eef7700)
  at pthread_create.c:308
  #4  0x704df13d in clone ()
  at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

  Thread 3 (Thread 0x7fff6e605700 (LWP 7547)):
  #0  0x7041fa19 in __GI_raise (sig=sig@entry=6)
  at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
  #1  0x70421128 in __GI_abort () at abort.c:90
  #2  0x5583ea33 in helper_ldl_mmu (env=0x77fd7140, addr=1572864,
  mmu_idx=1) at 

[Qemu-devel] [Bug 1218098] Re: qemu-system-ppc64 segfaults in helper_ldl_mmu

2013-08-28 Thread Richard Jones
** Description changed:

  Download a Fedora 19 ISO from:
  http://mirrors.kernel.org/fedora-secondary/releases/19/Fedora/ppc64/iso/
  
  Compile qemu from git (I'm using 401c227b0a1134245ec61c6c5a9997cfc963c8e4
  from today).
  
  Run qemu-system-ppc64 like this:
  
  ppc64-softmmu/qemu-system-ppc64 -M pseries -m 4096 -hda
  /dev/fedora/f20ppc64 -cdrom /tmp/Fedora-19-ppc64-DVD.iso -netdev
  user,id=usernet,net=169.254.0.0/16 -device virtio-net-pci,netdev=usernet
  
  Guest gets to yaboot.  If you hit return, qemu segfaults:
  
  Program received signal SIGABRT, Aborted.
  0x7041fa19 in __GI_raise (sig=sig@entry=6)
- at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
+ at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
  56  return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
  (gdb) t a a bt
  
  Thread 4 (Thread 0x7fff6eef7700 (LWP 7553)):
  #0  sem_timedwait ()
- at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_timedwait.S:101
- #1  0x559a5897 in qemu_sem_timedwait (sem=sem@entry=0x5631e788, 
- ms=ms@entry=1) at util/qemu-thread-posix.c:238
+ at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_timedwait.S:101
+ #1  0x559a5897 in qemu_sem_timedwait (sem=sem@entry=0x5631e788,
+ ms=ms@entry=1) at util/qemu-thread-posix.c:238
  #2  0x5577e54c in worker_thread (opaque=0x5631e6f0)
- at thread-pool.c:97
+ at thread-pool.c:97
  #3  0x7625ec53 in start_thread (arg=0x7fff6eef7700)
- at pthread_create.c:308
+ at pthread_create.c:308
  #4  0x704df13d in clone ()
- at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
+ at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
  
  Thread 3 (Thread 0x7fff6e605700 (LWP 7547)):
  #0  0x7041fa19 in __GI_raise (sig=sig@entry=6)
- at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
+ at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
  #1  0x70421128 in __GI_abort () at abort.c:90
- #2  0x5583ea33 in helper_ldl_mmu (env=0x77fd7140, addr=1572864, 
- mmu_idx=1) at /home/rjones/d/qemu/include/exec/softmmu_template.h:153
+ #2  0x5583ea33 in helper_ldl_mmu (env=0x77fd7140, addr=1572864,
+ mmu_idx=1) at /home/rjones/d/qemu/include/exec/softmmu_template.h:153
  #3  0x7fffab0819d8 in code_gen_buffer ()
- #4  0x557aa7ae in cpu_tb_exec (tb_ptr=optimized out, 
- cpu=0x77fd7010) at /home/rjones/d/qemu/cpu-exec.c:56
+ #4  0x557aa7ae in cpu_tb_exec (tb_ptr=optimized out,
+ cpu=0x77fd7010) at /home/rjones/d/qemu/cpu-exec.c:56
  #5  cpu_ppc_exec (env=env@entry=0x77fd7140)
- at /home/rjones/d/qemu/cpu-exec.c:631
+ at /home/rjones/d/qemu/cpu-exec.c:631
  #6  0x557abc35 in tcg_cpu_exec (env=0x77fd7140)
- at /home/rjones/d/qemu/cpus.c:1193
+ at /home/rjones/d/qemu/cpus.c:1193
  #7  tcg_exec_all () at /home/rjones/d/qemu/cpus.c:1226
  #8  qemu_tcg_cpu_thread_fn (arg=optimized out)
- at /home/rjones/d/qemu/cpus.c:885
+ at /home/rjones/d/qemu/cpus.c:885
  #9  0x7625ec53 in start_thread (arg=0x7fff6e605700)
- at pthread_create.c:308
+ at pthread_create.c:308
  #10 0x704df13d in clone ()
- at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
+ at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
  
  Thread 1 (Thread 0x77fa9a40 (LWP 7542)):
- #0  0x704d4c2f in __GI_ppoll (fds=0x56483210, nfds=4, 
- timeout=optimized out, timeout@entry=0x7fffd940, 
- sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:56
- #1  0x55762db9 in ppoll (__ss=0x0, __timeout=0x7fffd940, 
- __nfds=optimized out, __fds=optimized out)
- at /usr/include/bits/poll2.h:77
- #2  qemu_poll_ns (fds=optimized out, nfds=optimized out, 
- timeout=timeout@entry=951497) at qemu-timer.c:276
+ #0  0x704d4c2f in __GI_ppoll (fds=0x56483210, nfds=4,
+ timeout=optimized out, timeout@entry=0x7fffd940,
+ sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:56
+ #1  0x55762db9 in ppoll (__ss=0x0, __timeout=0x7fffd940,
+ __nfds=optimized out, __fds=optimized out)
+ at /usr/include/bits/poll2.h:77
+ #2  qemu_poll_ns (fds=optimized out, nfds=optimized out,
+ timeout=timeout@entry=951497) at qemu-timer.c:276
  #3  0x5572b58c in os_host_main_loop_wait (timeout=951497)
- at main-loop.c:228
+ at main-loop.c:228
  #4  main_loop_wait (nonblocking=optimized out) at main-loop.c:484
  #5  0x555ef9d8 in main_loop () at vl.c:2090
  #6  main (argc=optimized out, argv=optimized out, envp=optimized out)
- at vl.c:4435
- 
- NB: This does NOT happen if you specify -cpu POWER7 on the command line.
+ at vl.c:4435

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1218098

Title:
  qemu-system-ppc64 segfaults in helper_ldl_mmu

Status in QEMU:
  New

Bug description:
  Download a Fedora 19 

[Qemu-devel] [Bug 1218098] Re: qemu-system-ppc64 segfaults in helper_ldl_mmu

2013-08-28 Thread Richard Jones
git bisect points the finger at:

401c227b0a1134245ec61c6c5a9997cfc963c8e4 is the first bad commit
commit 401c227b0a1134245ec61c6c5a9997cfc963c8e4
Author: Richard Henderson r...@twiddle.net
Date:   Thu Jul 25 07:16:52 2013 -1000

tcg-i386: Use new return-argument ld/st helpers

Discontinue the jump-around-jump-to-jump scheme, trading it for a single
immediate move instruction.  The two extra jumps always consume 7 bytes,
whereas the immediate move is either 5 or 7 bytes depending on where the
code_gen_buffer gets located.

Signed-off-by: Richard Henderson r...@twiddle.net

:04 04 dfd9a66c85713cd1886a3342de1e9ac95d7ea43f 
df8673dea69bc89cc2cc979aa24415e3fea4ed53 M  include
:04 04 1f7cd5291f2c69b4126c63bd567c6b106eb332c9 
87e7ece766168dda860b513dc97fe5af28ec2c4b M  tcg
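
If useful, a quick way to double-check the bisect result (a sketch; it assumes
a qemu git checkout and the same reproducer command as above):

  git checkout 401c227b0a1134245ec61c6c5a9997cfc963c8e4^
  ./configure --target-list=ppc64-softmmu && make -j8
  # re-run the qemu-system-ppc64 command above; then repeat at the commit
  # itself to confirm the abort in helper_ldl_mmu comes back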

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1218098

Title:
  qemu-system-ppc64 segfaults in helper_ldl_mmu

Status in QEMU:
  New

Bug description:
  Download a Fedora 19 ISO from:
  http://mirrors.kernel.org/fedora-secondary/releases/19/Fedora/ppc64/iso/

  Compile qemu from git (I'm using 401c227b0a1134245ec61c6c5a9997cfc963c8e4
  from today).

  Run qemu-system-ppc64 like this:

  ppc64-softmmu/qemu-system-ppc64 -M pseries -m 4096 -hda
  /dev/fedora/f20ppc64 -cdrom /tmp/Fedora-19-ppc64-DVD.iso -netdev
  user,id=usernet,net=169.254.0.0/16 -device virtio-net-
  pci,netdev=usernet

  Guest gets to yaboot.  If you hit return, qemu segfaults:

  Program received signal SIGABRT, Aborted.
  0x7041fa19 in __GI_raise (sig=sig@entry=6)
  at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
  56  return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
  (gdb) t a a bt

  Thread 4 (Thread 0x7fff6eef7700 (LWP 7553)):
  #0  sem_timedwait ()
  at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_timedwait.S:101
  #1  0x559a5897 in qemu_sem_timedwait (sem=sem@entry=0x5631e788,
  ms=ms@entry=1) at util/qemu-thread-posix.c:238
  #2  0x5577e54c in worker_thread (opaque=0x5631e6f0)
  at thread-pool.c:97
  #3  0x7625ec53 in start_thread (arg=0x7fff6eef7700)
  at pthread_create.c:308
  #4  0x704df13d in clone ()
  at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

  Thread 3 (Thread 0x7fff6e605700 (LWP 7547)):
  #0  0x7041fa19 in __GI_raise (sig=sig@entry=6)
  at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
  #1  0x70421128 in __GI_abort () at abort.c:90
  #2  0x5583ea33 in helper_ldl_mmu (env=0x77fd7140, addr=1572864,
  mmu_idx=1) at /home/rjones/d/qemu/include/exec/softmmu_template.h:153
  #3  0x7fffab0819d8 in code_gen_buffer ()
  #4  0x557aa7ae in cpu_tb_exec (tb_ptr=optimized out,
  cpu=0x77fd7010) at /home/rjones/d/qemu/cpu-exec.c:56
  #5  cpu_ppc_exec (env=env@entry=0x77fd7140)
  at /home/rjones/d/qemu/cpu-exec.c:631
  #6  0x557abc35 in tcg_cpu_exec (env=0x77fd7140)
  at /home/rjones/d/qemu/cpus.c:1193
  #7  tcg_exec_all () at /home/rjones/d/qemu/cpus.c:1226
  #8  qemu_tcg_cpu_thread_fn (arg=optimized out)
  at /home/rjones/d/qemu/cpus.c:885
  #9  0x7625ec53 in start_thread (arg=0x7fff6e605700)
  at pthread_create.c:308
  #10 0x704df13d in clone ()
  at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

  Thread 1 (Thread 0x77fa9a40 (LWP 7542)):
  #0  0x704d4c2f in __GI_ppoll (fds=0x56483210, nfds=4,
  timeout=optimized out, timeout@entry=0x7fffd940,
  sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:56
  #1  0x55762db9 in ppoll (__ss=0x0, __timeout=0x7fffd940,
  __nfds=optimized out, __fds=optimized out)
  at /usr/include/bits/poll2.h:77
  #2  qemu_poll_ns (fds=optimized out, nfds=optimized out,
  timeout=timeout@entry=951497) at qemu-timer.c:276
  #3  0x5572b58c in os_host_main_loop_wait (timeout=951497)
  at main-loop.c:228
  #4  main_loop_wait (nonblocking=optimized out) at main-loop.c:484
  #5  0x555ef9d8 in main_loop () at vl.c:2090
  #6  main (argc=optimized out, argv=optimized out, envp=optimized out)
  at vl.c:4435

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1218098/+subscriptions



[Qemu-devel] [Bug 1174654] Re: qemu-system-x86_64 takes 100% CPU after host machine resumed from suspend to ram

2013-07-31 Thread Richard Jones
** Description changed:

  I have Windows XP SP3 inside a qemu VM. All works fine in 12.10. But
  after upgrading to 13.04 I have to restart the VM each time I resume
  my host machine, because the qemu process starts to take CPU cycles and the
  OS inside the VM is very slow and sluggish. However it's still controllable
  and can be shut down by itself.
  
  According to the taskmgr any active process takes 99% CPU. It's not
- stucked on some single process.
+ stuck on some single process.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1174654

Title:
  qemu-system-x86_64 takes 100% CPU after host machine resumed from
  suspend to ram

Status in QEMU:
  Confirmed
Status in “qemu” package in Ubuntu:
  Invalid

Bug description:
  I have Windows XP SP3 inside a qemu VM. All works fine in 12.10. But
  after upgrading to 13.04 I have to restart the VM each time I
  resume my host machine, because the qemu process starts to take CPU
  cycles and the OS inside the VM is very slow and sluggish. However it's
  still controllable and can be shut down by itself.

  According to the taskmgr any active process takes 99% CPU. It's not
  stuck on some single process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1174654/+subscriptions



[Qemu-devel] [Bug 1186984] Re: large -initrd crashes qemu

2013-06-03 Thread Richard Jones
I'm using qemu from git (f10acc8b38d65a66ffa0588a036489d7fa6a593e).

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1186984

Title:
  large -initrd crashes qemu

Status in QEMU:
  New

Bug description:
  We don't use large -initrd in libguestfs any more, but I noticed that
  a large -initrd file now crashes qemu spectacularly:

  $ ls -lh /tmp/kernel /tmp/initrd 
  -rw-r--r--. 1 rjones rjones 273M Jun  3 14:02 /tmp/initrd
  lrwxrwxrwx. 1 rjones rjones   35 Jun  3 14:02 /tmp/kernel -> 
/boot/vmlinuz-3.9.4-200.fc18.x86_64

  $ ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios \
  -kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img -serial stdio 
\
  -append console=ttyS0

  qemu crashes with one of several errors:

  PFLASH: Possible BUG - Write block confirm

  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x000b96cd

  If -enable-kvm is used:

  KVM: injection failed, MSI lost (Operation not permitted)

  In all cases the SDL display fills up with coloured blocks before the
  crash (see the attached screenshot).

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1186984/+subscriptions



[Qemu-devel] [Bug 1186984] [NEW] large -initrd crashes qemu

2013-06-03 Thread Richard Jones
Public bug reported:

We don't use large -initrd in libguestfs any more, but I noticed that a
large -initrd file now crashes qemu spectacularly:

$ ls -lh /tmp/kernel /tmp/initrd 
-rw-r--r--. 1 rjones rjones 273M Jun  3 14:02 /tmp/initrd
lrwxrwxrwx. 1 rjones rjones   35 Jun  3 14:02 /tmp/kernel -> 
/boot/vmlinuz-3.9.4-200.fc18.x86_64

$ ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios \
-kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img -serial stdio \
-append console=ttyS0

qemu crashes with one of several errors:

PFLASH: Possible BUG - Write block confirm

qemu: fatal: Trying to execute code outside RAM or ROM at
0x000b96cd

If -enable-kvm is used:

KVM: injection failed, MSI lost (Operation not permitted)

In all cases the SDL display fills up with coloured blocks before the
crash (see the attached screenshot).

** Affects: qemu
 Importance: Undecided
 Status: New

** Attachment added: Coloured blocks screenshot
   
https://bugs.launchpad.net/bugs/1186984/+attachment/3693623/+files/Screenshot%20-%20030613%20-%2014%3A11%3A25.png

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1186984

Title:
  large -initrd crashes qemu

Status in QEMU:
  New

Bug description:
  We don't use large -initrd in libguestfs any more, but I noticed that
  a large -initrd file now crashes qemu spectacularly:

  $ ls -lh /tmp/kernel /tmp/initrd 
  -rw-r--r--. 1 rjones rjones 273M Jun  3 14:02 /tmp/initrd
  lrwxrwxrwx. 1 rjones rjones   35 Jun  3 14:02 /tmp/kernel -> 
/boot/vmlinuz-3.9.4-200.fc18.x86_64

  $ ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios \
  -kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img -serial stdio 
\
  -append console=ttyS0

  qemu crashes with one of several errors:

  PFLASH: Possible BUG - Write block confirm

  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x000b96cd

  If -enable-kvm is used:

  KVM: injection failed, MSI lost (Operation not permitted)

  In all cases the SDL display fills up with coloured blocks before the
  crash (see the attached screenshot).

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1186984/+subscriptions



[Qemu-devel] [Bug 1186984] Re: large -initrd crashes qemu

2013-06-03 Thread Richard Jones
One way to reproduce this is to just use a large (200 MB) completely
random initrd.  Note this error seems to happen a long time before even
the kernel starts up, so the actual content of the initrd doesn't
matter.

dd if=/dev/urandom of=/tmp/initrd bs=1M count=200
qemu-system-x86_64 -kernel /boot/vmlinuz -initrd /tmp/initrd -serial stdio 
-append console=ttyS0

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1186984

Title:
  large -initrd crashes qemu

Status in QEMU:
  New

Bug description:
  We don't use large -initrd in libguestfs any more, but I noticed that
  a large -initrd file now crashes qemu spectacularly:

  $ ls -lh /tmp/kernel /tmp/initrd 
  -rw-r--r--. 1 rjones rjones 273M Jun  3 14:02 /tmp/initrd
  lrwxrwxrwx. 1 rjones rjones   35 Jun  3 14:02 /tmp/kernel -> 
/boot/vmlinuz-3.9.4-200.fc18.x86_64

  $ ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios \
  -kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img -serial stdio 
\
  -append console=ttyS0

  qemu crashes with one of several errors:

  PFLASH: Possible BUG - Write block confirm

  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x000b96cd

  If -enable-kvm is used:

  KVM: injection failed, MSI lost (Operation not permitted)

  In all cases the SDL display fills up with coloured blocks before the
  crash (see the attached screenshot).

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1186984/+subscriptions



[Qemu-devel] [Bug 1186984] Re: large -initrd crashes qemu

2013-06-03 Thread Richard Jones
OK I see what's happening.  Because I forgot about the -m option, qemu
allocates the default 128 MB of RAM.  The 273 MB initrd is loaded anyway and
wraps around in guest memory, overwriting all the low memory.

If you add (eg) -m 1024 it works.
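
For reference, the original reproducer with enough guest RAM for the 273 MB
initrd boots normally (a sketch; same paths as in the report above):

  ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios -m 1024 \
      -kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img \
      -serial stdio -append console=ttyS0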

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1186984

Title:
  large -initrd crashes qemu

Status in QEMU:
  New

Bug description:
  We don't use large -initrd in libguestfs any more, but I noticed that
  a large -initrd file now crashes qemu spectacularly:

  $ ls -lh /tmp/kernel /tmp/initrd 
  -rw-r--r--. 1 rjones rjones 273M Jun  3 14:02 /tmp/initrd
  lrwxrwxrwx. 1 rjones rjones   35 Jun  3 14:02 /tmp/kernel -> 
/boot/vmlinuz-3.9.4-200.fc18.x86_64

  $ ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios \
  -kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img -serial stdio 
\
  -append console=ttyS0

  qemu crashes with one of several errors:

  PFLASH: Possible BUG - Write block confirm

  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x000b96cd

  If -enable-kvm is used:

  KVM: injection failed, MSI lost (Operation not permitted)

  In all cases the SDL display fills up with coloured blocks before the
  crash (see the attached screenshot).

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1186984/+subscriptions



[Qemu-devel] [Bug 1186984] Re: large -initrd can wrap around in memory causing memory corruption

2013-06-03 Thread Richard Jones
** Summary changed:

- large -initrd crashes qemu
+ large -initrd can wrap around in memory causing memory corruption

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1186984

Title:
  large -initrd can wrap around in memory causing memory corruption

Status in QEMU:
  New

Bug description:
  We don't use large -initrd in libguestfs any more, but I noticed that
  a large -initrd file now crashes qemu spectacularly:

  $ ls -lh /tmp/kernel /tmp/initrd 
  -rw-r--r--. 1 rjones rjones 273M Jun  3 14:02 /tmp/initrd
  lrwxrwxrwx. 1 rjones rjones   35 Jun  3 14:02 /tmp/kernel -> 
/boot/vmlinuz-3.9.4-200.fc18.x86_64

  $ ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios \
  -kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img -serial stdio 
\
  -append console=ttyS0

  qemu crashes with one of several errors:

  PFLASH: Possible BUG - Write block confirm

  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x000b96cd

  If -enable-kvm is used:

  KVM: injection failed, MSI lost (Operation not permitted)

  In all cases the SDL display fills up with coloured blocks before the
  crash (see the attached screenshot).

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1186984/+subscriptions



[Qemu-devel] [Bug 865518] Re: qemu segfaults when writing to very large qcow2 disk

2013-05-11 Thread Richard Jones
Simple reproducer using only qemu tools:

$ qemu-img create -f qcow2 huge.qcow2 $((1024*1024))T
Formatting 'huge.qcow2', fmt=qcow2 size=1152921504606846976 encryption=off 
cluster_size=65536 lazy_refcounts=off 
$ qemu-io /tmp/huge.qcow2 -c "write $((1024*1024*1024*1024*1024*1024 - 1024)) 512"
Segmentation fault

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/865518

Title:
  qemu segfaults when writing to very large qcow2 disk

Status in QEMU:
  New

Bug description:
  Create a ridiculously large qcow2 disk:

  qemu-img create -f qcow2 test1.img $((2**63-513))

  Attach it to a guest and try to use parted to partition it.  This is
  easy with virt-rescue: you just do:

  virt-rescue test1.img
  rescue parted /dev/vda mklabel gpt
  -- bang! qemu segfaults here

  The stack trace is:

  Program received signal SIGSEGV, Segmentation fault.
  0x00434cac in get_cluster_table (bs=0x3193030, offset=
  9223372036854764544, new_l2_table=0x591e3c8, new_l2_offset=0x591e3c0, 
  new_l2_index=0x591e408) at block/qcow2-cluster.c:506
  506   l2_offset = s->l1_table[l1_index];
  (gdb) bt
  #0  0x00434cac in get_cluster_table (bs=0x3193030, offset=
  9223372036854764544, new_l2_table=0x591e3c8, new_l2_offset=0x591e3c0, 
  new_l2_index=0x591e408) at block/qcow2-cluster.c:506
  #1  0x0043535b in qcow2_alloc_cluster_offset (bs=0x3193030, offset=
  9223372036854764544, n_start=106, n_end=126, num=0x591e4e8, m=0x591e470)
  at block/qcow2-cluster.c:719
  #2  0x0043b8d4 in qcow2_co_writev (bs=0x3193030, sector_num=
  18014398509481962, remaining_sectors=20, qiov=0x4a81ee0)
  at block/qcow2.c:554
  #3  0x00428691 in bdrv_co_rw (opaque=0x38bad10) at block.c:2781
  #4  0x0046e03a in coroutine_trampoline (i0=59487248, i1=0)
  at coroutine-ucontext.c:125
  #5  0x0034dc6471b0 in ?? () from /lib64/libc.so.6
  #6  0x7fff76cbb430 in ?? ()
  #7  0x in ?? ()

  This is qemu from git (8f440cda08c6df574 from 2011-09-29)

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/865518/+subscriptions



[Qemu-devel] [Bug 865518] Re: qemu segfaults when writing to very large qcow2 disk

2013-05-11 Thread Richard Jones
Still happening in upstream qemu from git:

Program terminated with signal 11, Segmentation fault.
#0  0x7f4f86c721a0 in get_cluster_table (bs=bs@entry=0x7f4f886e7880, 
offset=offset@entry=1152921504606834688, 
new_l2_table=new_l2_table@entry=0x7f4f8ad9a0b0, 
new_l2_index=new_l2_index@entry=0x7f4f8ad9a0ac)
at block/qcow2-cluster.c:525
525 l2_offset = s->l1_table[l1_index] & L1E_OFFSET_MASK;
Missing separate debuginfos, use: debuginfo-install SDL-1.2.15-3.fc18.x86_64 
bluez-libs-4.101-6.fc18.x86_64 brlapi-0.5.6-12.fc18.x86_64 
celt051-0.5.1.3-5.fc18.x86_64 ceph-devel-0.56.3-1.fc18.x86_64 
ceph-libs-0.56.3-1.fc18.x86_64 cryptopp-5.6.1-8.fc18.x86_64 
cyrus-sasl-lib-2.1.25-2.fc18.x86_64 leveldb-1.7.0-4.fc18.x86_64 
libfdt-1.3.0-5.fc18.x86_64 libseccomp-1.0.1-0.fc18.x86_64 
libselinux-2.1.12-7.3.fc18.x86_64 libusbx-1.0.14-1.fc18.x86_64 
snappy-1.0.5-2.fc18.x86_64 spice-server-0.12.2-3.fc18.x86_64 
usbredir-0.6-1.fc18.x86_64 xen-libs-4.2.1-9.fc18.x86_64
(gdb) bt
#0  0x7f4f86c721a0 in get_cluster_table (bs=bs@entry=0x7f4f886e7880, 
offset=offset@entry=1152921504606834688, new_l2_table=new_l2_table@entry=
0x7f4f8ad9a0b0, new_l2_index=new_l2_index@entry=0x7f4f8ad9a0ac)
at block/qcow2-cluster.c:525
#1  0x7f4f86c72fa3 in handle_copied (m=optimized out, 
bytes=synthetic pointer, host_offset=synthetic pointer, guest_offset=
1152921504606834688, bs=0x7f4f886e7880) at block/qcow2-cluster.c:873
#2  qcow2_alloc_cluster_offset (bs=bs@entry=0x7f4f886e7880, 
offset=optimized out, offset@entry=1152921504606834688, 
n_start=n_start@entry=104, n_end=optimized out, num=num@entry=
0x7f4f8ad9a14c, host_offset=host_offset@entry=0x7f4f8ad9a150, m=m@entry=
0x7f4f8ad9a158) at block/qcow2-cluster.c:1217
#3  0x7f4f86c773b3 in qcow2_co_writev (bs=0x7f4f886e7880, sector_num=
2251799813685224, remaining_sectors=24, qiov=0x7f4f88d88f98)
at block/qcow2.c:819
#4  0x7f4f86c638d5 in bdrv_co_do_writev (bs=0x7f4f886e7880, sector_num=
2251799813685224, nb_sectors=24, qiov=0x7f4f88d88f98, flags=flags@entry=
(unknown: 0)) at block.c:2625
#5  0x7f4f86c63a38 in bdrv_co_do_rw (opaque=0x7f4f88e16160) at block.c:4139
#6  0x7f4f86c9a19a in coroutine_trampoline (i0=optimized out, 
i1=optimized out) at coroutine-ucontext.c:118
#7  0x7f4f7fd776c0 in ?? () from /lib64/libc.so.6
#8  0x7fff125e6620 in ?? ()
#9  0x in ?? ()
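
As a point of comparison it may be worth trying the same near-the-end write on
a merely 1T qcow2 image (a sketch; "small.qcow2" is just an example name, and
the expectation that this one succeeds is an assumption worth confirming --
it would show the crash is specific to the enormous L1 table implied by an
exabyte-scale virtual size):

  qemu-img create -f qcow2 small.qcow2 1T
  qemu-io small.qcow2 -c "write $((1024*1024*1024*1024 - 1024)) 512"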

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/865518

Title:
  qemu segfaults when writing to very large qcow2 disk

Status in QEMU:
  New

Bug description:
  Create a ridiculously large qcow2 disk:

  qemu-img create -f qcow2 test1.img $((2**63-513))

  Attach it to a guest and try to use parted to partition it.  This is
  easy with virt-rescue: you just do:

  virt-rescue test1.img
  rescue parted /dev/vda mklabel gpt
  -- bang! qemu segfaults here

  The stack trace is:

  Program received signal SIGSEGV, Segmentation fault.
  0x00434cac in get_cluster_table (bs=0x3193030, offset=
  9223372036854764544, new_l2_table=0x591e3c8, new_l2_offset=0x591e3c0, 
  new_l2_index=0x591e408) at block/qcow2-cluster.c:506
  506   l2_offset = s->l1_table[l1_index];
  (gdb) bt
  #0  0x00434cac in get_cluster_table (bs=0x3193030, offset=
  9223372036854764544, new_l2_table=0x591e3c8, new_l2_offset=0x591e3c0, 
  new_l2_index=0x591e408) at block/qcow2-cluster.c:506
  #1  0x0043535b in qcow2_alloc_cluster_offset (bs=0x3193030, offset=
  9223372036854764544, n_start=106, n_end=126, num=0x591e4e8, m=0x591e470)
  at block/qcow2-cluster.c:719
  #2  0x0043b8d4 in qcow2_co_writev (bs=0x3193030, sector_num=
  18014398509481962, remaining_sectors=20, qiov=0x4a81ee0)
  at block/qcow2.c:554
  #3  0x00428691 in bdrv_co_rw (opaque=0x38bad10) at block.c:2781
  #4  0x0046e03a in coroutine_trampoline (i0=59487248, i1=0)
  at coroutine-ucontext.c:125
  #5  0x0034dc6471b0 in ?? () from /lib64/libc.so.6
  #6  0x7fff76cbb430 in ?? ()
  #7  0x in ?? ()

  This is qemu from git (8f440cda08c6df574 from 2011-09-29)

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/865518/+subscriptions



[Qemu-devel] [Bug 1127369] Re: i386 emulation unreliable since commit b76f0d8c2e3eac94bc7fd90a510cb7426b2a2699

2013-03-31 Thread Richard Jones
Thanks for the detailed test case and fix.  However unfortunately I cannot see
d6e839e718 in the current qemu git.  Is it possible the commit hash changed
because of a rebase when it was committed?

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1127369

Title:
  i386 emulation unreliable since commit
  b76f0d8c2e3eac94bc7fd90a510cb7426b2a2699

Status in QEMU:
  Fix Committed

Bug description:
  I am running daily automated tests of the qemu git mainline that
  involve building qemu on a Linux host (32-bit), booting a NetBSD guest
  in qemu-system-i386, and running the NetBSD operating system test
  suite on the guest.

  Since commit b76f0d8c2e3eac94bc7fd90a510cb7426b2a2699, there has been
  a marked increase in the number of failing test cases.  Before that
  commit, the number of failing test cases was typically in the range 3
  to 6, but since that commit, test runs often show 10 or more failed
  tests, or they end prematurely due to a segmentation fault in the test
  framework itself.

  To aid in reproducing the problem, I have prepared a disk image
  containing a NetBSD 6.0.1 system configured to automatically run
  the test suite on boot.

  To reproduce the problem, run the following shell commands:

wget http://www.gson.org/bugs/qemu/NetBSD-6.0.1-i386-test.img.gz
gunzip NetBSD-6.0.1-i386-test.img.gz
qemu-system-i386 -m 32 -nographic -snapshot -hda NetBSD-6.0.1-i386-test.img

  The disk image is about 144 MB in size and uncompresses to 2 GB.  The
  test run typically takes a couple of hours, printing progress messages
  to the terminal as it goes.  When it finishes, the virtual machine
  will be automatically powered down, causing qemu to exit.

  Near the end of the output, before the shutdown messages, there should
  be a summary of the test results.  The expected output looks like this:

Summary for 500 test programs:
2958 passed test cases.
5 failed test cases.
45 expected failed test cases.
70 skipped test cases.

  A number of failed test cases in the range 3 to 6 should be
  considered normal.  Please ignore the expected failed test cases.
  Using a version of qemu affected by the bug, the summary will look
  more like this:

Summary for 500 test programs:
2951 passed test cases.
12 failed test cases.
45 expected failed test cases.
69 skipped test cases.

  Or it may end with a segmentation fault like this:

 p2k_ffs_race: atf-report: ERROR: 10912: Unexpected token `EOF'; 
expected end of test case or test case's stdout/stderr line
  [1]   Segmentation fault (core dumped) atf-run |
Done(1) atf-report

  The problem goes away if the -m 32 is omitted from the qemu command line,
  which leads me to suspect that the problem may be related to paging or
  swapping activity in the guest.

  The revision listed in the subject, b76f0d8c2e3eac94bc7fd90a510cb7426b2a2699,
  is the first one exhibiting the excessive test failures, but the bug may 
already
  have been introduced in the previous commit, 
fdbb84d1332ae0827d60f1a2ca03c7d5678c6edd.
  If I attempt to run the test on fdbb84d1332ae0827d60f1a2ca03c7d5678c6edd, the
  guest fails to boot.  The revision before that, 
32761257c0b9fa7ee04d2871a6e48a41f119c469,
  works as expected.
  --
  Andreas Gustafsson, g...@gson.org

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1127369/+subscriptions



[Qemu-devel] [Bug 1127369] Re: i386 emulation unreliable since commit b76f0d8c2e3eac94bc7fd90a510cb7426b2a2699

2013-03-31 Thread Richard Jones
Thanks -  fix committed to Fedora.  Hopefully this will squash the rare
and random segfaults in the libguestfs test suite.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1127369

Title:
  i386 emulation unreliable since commit
  b76f0d8c2e3eac94bc7fd90a510cb7426b2a2699

Status in QEMU:
  Fix Committed

Bug description:
  I am running daily automated tests of the qemu git mainline that
  involve building qemu on a Linux host (32-bit), booting a NetBSD guest
  in qemu-system-i386, and running the NetBSD operating system test
  suite on the guest.

  Since commit b76f0d8c2e3eac94bc7fd90a510cb7426b2a2699, there has been
  a marked increase in the number of failing test cases.  Before that
  commit, the number of failing test cases was typically in the range 3
  to 6, but since that commit, test runs often show 10 or more failed
  tests, or they end prematurely due to a segmentation fault in the test
  framework itself.

  To aid in reproducing the problem, I have prepared a disk image
  containing a NetBSD 6.0.1 system configured to automatically run
  the test suite on boot.

  To reproduce the problem, run the following shell commands:

wget http://www.gson.org/bugs/qemu/NetBSD-6.0.1-i386-test.img.gz
gunzip NetBSD-6.0.1-i386-test.img.gz
qemu-system-i386 -m 32 -nographic -snapshot -hda NetBSD-6.0.1-i386-test.img

  The disk image is about 144 MB in size and uncompresses to 2 GB.  The
  test run typically takes a couple of hours, printing progress messages
  to the terminal as it goes.  When it finishes, the virtual machine
  will be automatically powered down, causing qemu to exit.

  Near the end of the output, before the shutdown messages, there should
  be a summary of the test results.  The expected output looks like this:

Summary for 500 test programs:
2958 passed test cases.
5 failed test cases.
45 expected failed test cases.
70 skipped test cases.

  A number of failed test cases in the range 3 to 6 should be
  considered normal.  Please ignore the expected failed test cases.
  Using a version of qemu affected by the bug, the summary will look
  more like this:

Summary for 500 test programs:
2951 passed test cases.
12 failed test cases.
45 expected failed test cases.
69 skipped test cases.

  Or it may end with a segmentation fault like this:

 p2k_ffs_race: atf-report: ERROR: 10912: Unexpected token `EOF'; expected end of test case or test case's stdout/stderr line
  [1]   Segmentation fault (core dumped) atf-run | Done(1) atf-report

  The problem goes away if the -m 32 is omitted from the qemu command line,
  which leads me to suspect that the problem may be related to paging or
  swapping activity in the guest.

  The revision listed in the subject, b76f0d8c2e3eac94bc7fd90a510cb7426b2a2699,
  is the first one exhibiting the excessive test failures, but the bug may
  already have been introduced in the previous commit,
  fdbb84d1332ae0827d60f1a2ca03c7d5678c6edd.  If I attempt to run the test on
  fdbb84d1332ae0827d60f1a2ca03c7d5678c6edd, the guest fails to boot.  The
  revision before that, 32761257c0b9fa7ee04d2871a6e48a41f119c469, works as
  expected.
  --
  Andreas Gustafsson, g...@gson.org

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1127369/+subscriptions



[Qemu-devel] [Bug 1155677] [NEW] snapshot=on fails with non file-based storage

2013-03-15 Thread Richard Jones
Public bug reported:

The snapshot=on option doesn't work with an nbd block device:

/usr/bin/qemu-system-x86_64 \
[...]
-device virtio-scsi-pci,id=scsi \
-drive file=nbd:localhost:61930,snapshot=on,format=raw,id=hd0,if=none \
-device scsi-hd,drive=hd0 \
[...]

gives the error:

qemu-system-x86_64: -drive
file=nbd:localhost:61930,snapshot=on,format=raw,id=hd0,if=none: could
not open disk image nbd:localhost:61930: No such file or directory

If you remove the snapshot=on flag, it works (although that of course
means that the block device is writable, which we don't want).

Previously reported here:

  http://permalink.gmane.org/gmane.comp.emulators.qemu/148390

and I can confirm this still happens in qemu 1.4.0.
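
As a possible interim workaround (a sketch only: the overlay filename is
illustrative, and it assumes qemu-img accepts the nbd:... spec as a backing
file), a throwaway local qcow2 overlay on top of the NBD export gives much
the same effect as snapshot=on, since guest writes land in the overlay and
never reach the export:

qemu-img create -f qcow2 -b nbd:localhost:61930 overlay.qcow2
/usr/bin/qemu-system-x86_64 \
[...]
-device virtio-scsi-pci,id=scsi \
-drive file=overlay.qcow2,format=qcow2,id=hd0,if=none \
-device scsi-hd,drive=hd0 \
[...]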

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1155677

Title:
  snapshot=on fails with non file-based storage

Status in QEMU:
  New

Bug description:
  The snapshot=on option doesn't work with an nbd block device:

  /usr/bin/qemu-system-x86_64 \
  [...]
  -device virtio-scsi-pci,id=scsi \
  -drive file=nbd:localhost:61930,snapshot=on,format=raw,id=hd0,if=none \
  -device scsi-hd,drive=hd0 \
  [...]

  gives the error:

  qemu-system-x86_64: -drive
  file=nbd:localhost:61930,snapshot=on,format=raw,id=hd0,if=none: could
  not open disk image nbd:localhost:61930: No such file or directory

  If you remove the snapshot=on flag, it works (although that of course
  means that the block device is writable, which we don't want).

  Previously reported here:

http://permalink.gmane.org/gmane.comp.emulators.qemu/148390

  and I can confirm this still happens in qemu 1.4.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1155677/+subscriptions