[Qemu-devel] [Bug 1823458] Update Released

2019-05-13 Thread Corey Bryant
The verification of the Stable Release Update for qemu has completed
successfully and the package has now been released to -updates. In the
event that you encounter a regression using the package from -updates
please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Released
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released

Bug description:
  [Impact]

  On shutdown of a guest there is a race condition that results in qemu
  crashing instead of shutting down normally.  The backtrace looks
  similar to the following (it varies with the specific version of qemu,
  of course; this one is from qemu 2.5):

  (gdb) bt
  #0  __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:66
  #1  0x5636c0bc4389 in qemu_mutex_lock (mutex=mutex@entry=0x0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/util/qemu-thread-posix.c:73
  #2  0x5636c0988130 in qemu_chr_fe_write_all (s=s@entry=0x0, buf=buf@entry=0x7ffe65c086a0 "\v", len=len@entry=20) at /build/qemu-7I4i1R/qemu-2.5+dfsg/qemu-char.c:205
  #3  0x5636c08f3483 in vhost_user_write (msg=msg@entry=0x7ffe65c086a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, dev=0x5636c1bf6b70, dev=0x5636c1bf6b70) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:195
  #4  0x5636c08f411c in vhost_user_get_vring_base (dev=0x5636c1bf6b70, ring=0x7ffe65c087e0) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost-user.c:364
  #5  0x5636c08efff0 in vhost_virtqueue_stop (dev=dev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338, vq=0x5636c1bf6d00, idx=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:895
  #6  0x5636c08f2944 in vhost_dev_stop (hdev=hdev@entry=0x5636c1bf6b70, vdev=vdev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/vhost.c:1262
  #7  0x5636c08db2a8 in vhost_net_stop_one (net=0x5636c1bf6b70, dev=dev@entry=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:293
  #8  0x5636c08dbe5b in vhost_net_stop (dev=dev@entry=0x5636c2853338, ncs=0x5636c209d110, total_queues=total_queues@entry=1) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/vhost_net.c:371
  #9  0x5636c08d7745 in virtio_net_vhost_status (status=7 '\a', n=0x5636c2853338) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:150
  #10 virtio_net_set_status (vdev=, status=) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/net/virtio-net.c:162
  #11 0x5636c08ec42c in virtio_set_status (vdev=0x5636c2853338, val=) at /build/qemu-7I4i1R/qemu-2.5+dfsg/hw/virtio/virtio.c:624
  #12 0x5636c098fed2 in vm_state_notify (running=running@entry=0, state=state@entry=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1605
  #13 0x5636c089172a in do_vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:724
  #14 vm_stop (state=RUN_STATE_SHUTDOWN) at /build/qemu-7I4i1R/qemu-2.5+dfsg/cpus.c:1407
  #15 0x5636c085d240 in main_loop_should_exit () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1883
  #16 main_loop () at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:1931
  #17 main (argc=, argv=, envp=) at /build/qemu-7I4i1R/qemu-2.5+dfsg/vl.c:4683
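
  In the trace above, qemu_chr_fe_write_all() is entered with s=0x0
  (frame #2), so qemu_mutex_lock() dereferences a NULL mutex (frames #1
  and #0): the vhost-user chardev has already been torn down by the
  CHR_EVENT_CLOSED handling while the VM-stop path is still using it.
  The following is a minimal, self-contained sketch of that failure mode
  (the names are simplified stand-ins, not the actual QEMU code):

    /* Sketch only: two paths racing over one chardev state (made-up names). */
    #include <pthread.h>
    #include <stddef.h>

    struct chardev_state {
        pthread_mutex_t lock;
    };

    static struct chardev_state *chr;       /* shared by both paths */

    /* Path 1: host closes the vhost-user socket -> CHR_EVENT_CLOSED */
    static void on_chr_event_closed(void)
    {
        chr = NULL;                          /* chardev state goes away */
    }

    /* Path 2: VM shutdown -> vhost_net_stop() -> vhost_user_write() */
    static void write_to_backend(struct chardev_state *s)
    {
        pthread_mutex_lock(&s->lock);        /* s == NULL => the lock sits at
                                                address 0x0 => SIGSEGV, as in
                                                frame #0 above */
        /* ... send the vhost-user message ... */
        pthread_mutex_unlock(&s->lock);
    }

    int main(void)
    {
        static struct chardev_state state = { .lock = PTHREAD_MUTEX_INITIALIZER };
        chr = &state;
        write_to_backend(chr);               /* fine while the chardev exists */
        on_chr_event_closed();               /* host side goes away */
        /* write_to_backend(chr); */         /* this order reproduces the crash */
        return 0;
    }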

  [Test Case]

  Unfortunately, since this is a race condition, it is very hard to
  reproduce on demand; it depends heavily on the overall configuration of
  the guest as well as on exactly how it is shut down - specifically, its
  vhost-user net device must be closed from the host side at a specific
  point during qemu shutdown.

  Someone with such a setup has reported that they can reproduce this
  reliably, but the configuration is too complex for me to replicate, so
  I have relied on their reproduction and testing to debug the issue and
  craft the patch.

  [Regression Potential]

  The change adds a flag to prevent repeated calls to vhost_net_stop().
  This also prevents any calls to vhost_net_cleanup() from
  net_vhost_user_event().  Any regression would appear when stopping
  and/or cleaning up a vhost net.  Possible regressions include failure
  to hot-remove a vhost net from a guest, failure to clean up (e.g. a
  memory leak), or crashes while stopping or cleaning up a vhost net.  A
  minimal sketch of the guard pattern is shown below.
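
  As an illustration only, the guard that flag provides can be sketched
  as follows (field and function names are made up, loosely based on the
  VirtIONet "vhost_stopped" flag named in the Xenial patch titles; this
  is not the actual patch):

    /* Sketch of an "only stop once" guard (hypothetical names). */
    #include <stdbool.h>
    #include <stdio.h>

    struct vnet_state {
        bool vhost_stopped;                  /* the flag the change adds */
    };

    static void vhost_net_stop_once(struct vnet_state *n)
    {
        if (n->vhost_stopped) {
            return;                          /* a second caller, e.g. the chardev
                                                CLOSE event, becomes a no-op */
        }
        n->vhost_stopped = true;
        printf("stopping vhost net\n");      /* real code would tear down vrings */
    }

    int main(void)
    {
        struct vnet_state n = { .vhost_stopped = false };
        vhost_net_stop_once(&n);             /* normal VM shutdown path */
        vhost_net_stop_once(&n);             /* racing shutdown/close path */
        return 0;
    }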

  [Other Info]

  This was originally seen in the 2.5 version of qemu - specifically, the
  UCA version in trusty-mitaka (which uses the xenial qemu codebase).

[Qemu-devel] [Bug 1823458] Re: race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown crashes qemu

2019-05-13 Thread Corey Bryant
This bug was fixed in the package qemu - 1:2.8+dfsg-3ubuntu2.9~cloud5.1
---

 qemu (1:2.8+dfsg-3ubuntu2.9~cloud5.1) xenial-ocata; urgency=medium
 .
   * d/p/lp1823458/add-VirtIONet-vhost_stopped-flag-to-prevent-multiple.patch,
 d/p/lp1823458/do-not-call-vhost_net_cleanup-on-running-net-from-ch.patch:
 - Prevent crash due to race condition on shutdown;
   this is fixed differently upstream (starting in Bionic), but
   the change is too large to backport into Xenial.  These two very
   small patches work around the problem in an unintrusive way.
   (LP: #1823458)


** Changed in: cloud-archive/ocata
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Released
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released


[Qemu-devel] [Bug 1823458] Re: race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown crashes qemu

2019-05-13 Thread Corey Bryant
This bug was fixed in the package qemu - 1:2.5+dfsg-5ubuntu10.37~cloud0
---

 qemu (1:2.5+dfsg-5ubuntu10.37~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 qemu (1:2.5+dfsg-5ubuntu10.37) xenial; urgency=medium
 .
   * d/p/lp1823458/add-VirtIONet-vhost_stopped-flag-to-prevent-multiple.patch,
 d/p/lp1823458/do-not-call-vhost_net_cleanup-on-running-net-from-ch.patch:
 - Prevent crash due to race condition on shutdown;
   this is fixed differently upstream (starting in Bionic), but
   the change is too large to backport into Xenial.  These two very
   small patches work around the problem in an unintrusive way.
   (LP: #1823458)


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Released
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released


[Qemu-devel] [Bug 1823458] Update Released

2019-05-13 Thread Corey Bryant
The verification of the Stable Release Update for qemu has completed
successfully and the package has now been released to -updates. In the
event that you encounter a regression using the package from -updates
please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Released
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released


[Qemu-devel] [Bug 1823458] Please test proposed package

2019-04-24 Thread Corey Bryant
Hello Dan, or anyone else affected,

Accepted qemu into ocata-proposed. The package will build now and be
available in the Ubuntu Cloud Archive in a few hours, and then in the
-proposed repository.

Please help us by testing this new package. To enable the -proposed
repository:

  sudo add-apt-repository cloud-archive:ocata-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-ocata-needed to verification-ocata-done. If it does
not fix the bug for you, please add a comment stating that, and change
the tag to verification-ocata-failed. In either case, details of your
testing will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in
advance!

** Changed in: cloud-archive/ocata
   Status: Triaged => Fix Committed

** Tags added: verification-ocata-needed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Committed
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released


[Qemu-devel] [Bug 1823458] Please test proposed package

2019-04-24 Thread Corey Bryant
Hello Dan, or anyone else affected,

Accepted qemu into mitaka-proposed. The package will build now and be
available in the Ubuntu Cloud Archive in a few hours, and then in the
-proposed repository.

Please help us by testing this new package. To enable the -proposed
repository:

  sudo add-apt-repository cloud-archive:mitaka-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-mitaka-needed to verification-mitaka-done. If it does
not fix the bug for you, please add a comment stating that, and change
the tag to verification-mitaka-failed. In either case, details of your
testing will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in
advance!

** Changed in: cloud-archive/mitaka
   Status: Triaged => Fix Committed

** Tags added: verification-mitaka-needed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Committed
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released


[Qemu-devel] [Bug 1823458] Re: race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown crashes qemu

2019-04-24 Thread Corey Bryant
** Also affects: cloud-archive/ocata
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/ocata
   Importance: Undecided => Medium

** Changed in: cloud-archive/ocata
   Status: New => Triaged

** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Committed
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released


[Qemu-devel] [Bug 1823458] Re: race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown crashes qemu

2019-04-24 Thread Corey Bryant
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/mitaka
   Importance: Undecided => Medium

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1823458

Title:
  race condition between vhost_net_stop and CHR_EVENT_CLOSED on shutdown
  crashes qemu

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Won't Fix
Status in qemu source package in Xenial:
  Fix Committed
Status in qemu source package in Bionic:
  Fix Released
Status in qemu source package in Cosmic:
  Fix Released
Status in qemu source package in Disco:
  Fix Released


[Qemu-devel] [Bug 1719196] Re: [arm64 ocata] newly created instances are unable to raise network interfaces

2017-12-11 Thread Corey Bryant
Regression testing has passed successfully.

zesty-ocata-proposed with stable charms:

==
Totals
==
Ran: 102 tests in 1897.0150 sec.
 - Passed: 93
 - Skipped: 9
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 1011.5607 sec.

zesty-ocata-proposed with dev charms:

==
Totals
==
Ran: 102 tests in 1933.5299 sec.
 - Passed: 93
 - Skipped: 9
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 963.0546 sec.

xenial-ocata-proposed with stable charms:

==
Totals
==
Ran: 102 tests in 1767.3787 sec.
 - Passed: 93
 - Skipped: 9
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 906.2188 sec.

xenial-ocata-proposed with dev charms:

==
Totals
==
Ran: 102 tests in 2051.1377 sec.
 - Passed: 93
 - Skipped: 9
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 998.9001 sec.


** Tags removed: verification-ocata-needed
** Tags added: verification-ocata-done

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1719196

Title:
  [arm64 ocata] newly created instances are unable to raise network
  interfaces

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in libvirt:
  New
Status in QEMU:
  Fix Released
Status in libvirt package in Ubuntu:
  Invalid
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Zesty:
  Fix Released

Bug description:
  [Impact]

   * A change in qemu 2.8 (83d768b "virtio: set ISR on dataplane
     notifications") broke virtio handling on platforms without an MSI
     controller. Those encounter flaky networking due to missed IRQs.

   * Fix is a backport of the upstream fix b4b9862b: "virtio: Fix no
     interrupt when not creating msi controller" (see the sketch below).
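
  For illustration only, the idea of that fix can be sketched like this
  (hypothetical names, not the actual QEMU functions touched by
  b4b9862b): when MSI-X is not in use, a notification has to both set the
  ISR bit and raise the legacy interrupt line, otherwise the guest never
  sees it.

    /* Sketch only -- hypothetical names, not QEMU code. */
    #include <stdatomic.h>
    #include <stdbool.h>

    struct vdev_sketch {
        bool msix_enabled;      /* false here: no MSI controller present */
        atomic_int isr;         /* legacy interrupt status register      */
    };

    static void raise_legacy_irq(struct vdev_sketch *d)
    {
        (void)d;                /* platform specific: assert the INTx/SPI line */
    }

    static void notify_guest(struct vdev_sketch *d)
    {
        if (d->msix_enabled) {
            return;             /* deliver an MSI-X vector instead (not shown) */
        }
        /* Without MSI-X, every notification path (including dataplane)
           must set ISR *and* raise the line, or the IRQ is lost. */
        atomic_fetch_or(&d->isr, 1);
        raise_legacy_irq(d);
    }

    int main(void)
    {
        struct vdev_sketch d = { .msix_enabled = false, .isr = 0 };
        notify_guest(&d);
        return 0;
    }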

  [Test Case]

   * On Arm with Zesty (or Ocata) run a guest without PCI based devices

   * Example in e.g. c#23

   * Without the fix the networking does not work reliably (as it loses
     IRQs); with the fix it works fine.

  [Regression Potential]

   * Changing the IRQ handling of virtio could affect virtio in general.
     But when reviewing the patch you'll see that it is small and actually
     only changes things to enable the IRQ in one more place. That could
     cause more IRQs than needed in the worst case, but those usually do
     not break anything, only slow things down. Also, this fix has been
     upstream for quite a while, increasing confidence.

  [Other Info]
   
   * There is currently bug 1720397 in flight in the SRU queue, so
     acceptance of this upload has to wait until that completes.

  ---

  arm64 Ocata ,

  I'm testing to see if I can get Ocata running on arm64, using the
  openstack-base bundle to deploy it.  I have added the bundle to the
  log file attached to this bug.

  When I create a new instance via nova, the VM comes up and runs, but it
  fails to raise its eth0 interface. This occurs on both internal and
  external networks.

  ubuntu@openstackaw:~$ nova list
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | ID                                   | Name    | Status | Task State | Power State | Networks           |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  | dcaf6d51-f81e-4cbd-ac77-0c5d21bde57c | sfeole1 | ACTIVE | -          | Running     | internal=10.5.5.3  |
  | aa0b8aee-5650-41f4-8fa0-aeccdc763425 | sfeole2 | ACTIVE | -          | Running     | internal=10.5.5.13 |
  +--------------------------------------+---------+--------+------------+-------------+--------------------+
  ubuntu@openstackaw:~$ nova show aa0b8aee-5650-41f4-8fa0-aeccdc763425
  +---------------------------------------+----------------------------------------------------------+
  | Property                              | Value                                                    |
  +---------------------------------------+----------------------------------------------------------+
  | OS-DCF:diskConfig                     | MANUAL                                                   |
  | OS-EXT-AZ:availability_zone           | nova                                                     |
  | OS-EXT-SRV-ATTR:host                  | awrep3                                                   |
  | OS-EXT-SRV-ATTR:hypervisor_hostname   | awrep3.maas                                              |
  | OS-EXT-SRV-ATTR:instance_name         | instance-0003                                            |
  | OS-EXT-STS:power_state                | 1                                                        |
  | OS-EXT-STS:task_state                 | -                                                        |
  | OS-EXT-STS:vm_state                   | active                                                   |
  | OS-SRV-USG:launched_at                | 2017-09-24T14:23:08.00                                   |
  | OS-SRV-USG:terminated_at              | -                                                        |
  | accessIPv4                            |                                                          |
  | accessIPv6                            |                                                          |
  | config_drive                          |                                                          |
  | created                               | 2017-09-24T14:22:41Z                                     |
  | flavor                                | m1.small (717660ae-0440-4b19-a762-ffeb32a0575c)          |
  | hostId                                | 5612a00671c47255d2ebd6737a64ec9bd3a5866d1233ecf3e988b025 |
  | id                                    | aa0b8aee-5650-41f4-8fa0-aeccdc763425                     |
  | image                                 | zestynosplash (e88fd1bd-f040-44d8-9e7c-c462ccf4b945)     |
  | internal network                      | 10.5.5.13                                                |
  | key_name                              | mykey                                                    |
  | metadata                              | {}                                                       |
  | name                                  | sfeole2                                                  |
  | os-extended-volumes:volumes_attached  | []                                                       |
  | progress                              | 0                                                        |
  | security_groups                       | 

[Qemu-devel] [Bug 1719196] Please test proposed package

2017-10-31 Thread Corey Bryant
Hello Sean, or anyone else affected,

Accepted qemu into ocata-proposed. The package will build now and be
available in the Ubuntu Cloud Archive in a few hours, and then in the
-proposed repository.

Please help us by testing this new package. To enable the -proposed
repository:

  sudo add-apt-repository cloud-archive:ocata-proposed
  sudo apt-get update

Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-ocata-needed to verification-ocata-done. If it does
not fix the bug for you, please add a comment stating that, and change
the tag to verification-ocata-failed. In either case, details of your
testing will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in
advance!

** Changed in: cloud-archive/ocata
   Status: Triaged => Fix Committed

** Tags added: verification-ocata-needed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1719196

Title:
  [arm64 ocata] newly created instances are unable to raise network
  interfaces

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in libvirt:
  New
Status in QEMU:
  Fix Released
Status in libvirt package in Ubuntu:
  Invalid
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Zesty:
  Fix Committed


[Qemu-devel] [Bug 1719196] Re: [arm64 ocata] newly created instances are unable to raise network interfaces

2017-10-18 Thread Corey Bryant
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/pike
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ocata
   Importance: Undecided
   Status: New

** No longer affects: cloud-archive/pike

** Changed in: cloud-archive/ocata
   Status: New => Triaged

** Changed in: cloud-archive/ocata
   Importance: Undecided => High

** Changed in: cloud-archive
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1719196

Title:
  [arm64 ocata] newly created instances are unable to raise network
  interfaces

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in libvirt:
  New
Status in QEMU:
  New
Status in libvirt package in Ubuntu:
  Incomplete
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Zesty:
  In Progress

[Qemu-devel] [Bug 1546445] Re: support vhost user without specifying vhostforce

2016-03-19 Thread Corey Bryant
** Changed in: qemu (Ubuntu Vivid)
   Status: Won't Fix => In Progress

** Changed in: qemu (Ubuntu Vivid)
 Assignee: (unassigned) => Liang Chen (cbjchen)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1546445

Title:
  support vhost user without specifying vhostforce

Status in Ubuntu Cloud Archive:
  In Progress
Status in Ubuntu Cloud Archive kilo series:
  In Progress
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Vivid:
  In Progress
Status in qemu source package in Wily:
  Fix Released

Bug description:
  [Impact]

   * Without the vhostforce option, vhost-user falls back to virtio-net,
     which causes a performance loss. But forcing vhost should be the
     default behavior for vhost-user, since guests using a PMD don't
     support msi-x (illustrated below).
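
  For illustration only, the default-behavior change described above can
  be sketched like this (made-up names; the actual change is the upstream
  commit linked at the end of this description):

    /* Sketch with hypothetical names -- not the actual QEMU logic. */
    #include <stdbool.h>

    struct netdev_sketch {
        bool is_vhost_user;     /* vhost-user backend: no qemu fallback path */
        bool vhostforce;        /* user passed vhostforce=on                 */
        bool guest_has_msix;    /* e.g. false for PMD or PXE-stage guests    */
    };

    static bool use_vhost(const struct netdev_sketch *nd)
    {
        if (nd->guest_has_msix || nd->vhostforce) {
            return true;
        }
        /* Old behavior: return false here, silently falling back to
           virtio-net.  New behavior: a vhost-user backend keeps vhost
           enabled by default, as if vhostforce had been given. */
        return nd->is_vhost_user;
    }

    int main(void)
    {
        struct netdev_sketch pmd_guest = { true, false, false };
        return use_vhost(&pmd_guest) ? 0 : 1;   /* 0: vhost stays enabled */
    }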

  [Test Case]

   1. Create a vhost-user virtio backend without specifying the vhostforce
      option, i.e. -netdev type=vhost-user,id=mynet1,chardev=
   2. Start the VM.
   3. Observe that vhost-user is not enabled.

  [Regression Potential]

   * none

  A vhost-user nic doesn't support non-msi guests (like the pxe stage) by
  default.  A vhost-user nic can't fall back to qemu the way a normal
  vhost-net nic does, so we should enable vhost for non-msi guests.

  The problem has been fixed in qemu upstream -
  http://git.qemu.org/?p=qemu.git;a=commitdiff;h=24f938a682d934b133863eb421aac33592f7a09e
  The patch needs to be backported to 1:2.2+dfsg-5expubuntu9.8.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1546445/+subscriptions



[Qemu-devel] [Bug 1546445] Re: support vhost user without specifying vhostforce

2016-03-19 Thread Corey Bryant
** Changed in: cloud-archive/kilo
   Status: New => In Progress

** Changed in: cloud-archive/kilo
 Assignee: (unassigned) => Liang Chen (cbjchen)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1546445

Title:
  support vhost user without specifying vhostforce

Status in Ubuntu Cloud Archive:
  In Progress
Status in Ubuntu Cloud Archive kilo series:
  In Progress
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Vivid:
  Won't Fix
Status in qemu source package in Wily:
  Fix Released




[Qemu-devel] [Bug 1546445] Re: support vhost user without specifying vhostforce

2016-03-18 Thread Corey Bryant
** No longer affects: cloud-archive

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1546445

Title:
  support vhost user without specifying vhostforce

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive kilo series:
  New
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Vivid:
  Won't Fix
Status in qemu source package in Wily:
  Fix Released




[Qemu-devel] [Bug 1546445] Re: support vhost user without specifying vhostforce

2016-03-18 Thread Corey Bryant
** Changed in: qemu (Ubuntu Vivid)
   Status: In Progress => Invalid

** Changed in: qemu (Ubuntu Vivid)
 Assignee: Liang Chen (cbjchen) => (unassigned)

** No longer affects: qemu (Ubuntu Vivid)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1546445

Title:
  support vhost user without specifying vhostforce

Status in Ubuntu Cloud Archive:
  In Progress
Status in Ubuntu Cloud Archive kilo series:
  In Progress
Status in QEMU:
  Fix Released
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Wily:
  Fix Released

Bug description:
  [Impact]

   * vhost-user falls back to virtio-net, which causes a performance loss,
  when the vhostforce option is not specified. Forcing vhost should be the
  default behavior for vhost-user, since guests using a PMD don't support msi-x.

  [Test Case]

 * create a vhost-user virtio backend without specifying the vhostforce
   option, i.e. -netdev type=vhost-user,id=mynet1,chardev=
 * start the VM
 * vhost-user is not enabled

  [Regression Potential]

   * none

  A vhost-user NIC doesn't support non-MSI guests (like the PXE stage) by default.
  A vhost-user NIC can't fall back to qemu the way a normal vhost-net NIC does, so
  we should enable it for non-MSI guests.

  The problem has been fixed in qemu upstream:
  http://git.qemu.org/?p=qemu.git;a=commitdiff;h=24f938a682d934b133863eb421aac33592f7a09e
  The patch needs to be backported to 1:2.2+dfsg-5expubuntu9.8.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1546445/+subscriptions



Re: [Qemu-devel] Missing vhost=on support on -netdev bridge.

2014-01-09 Thread Corey Bryant



On 01/09/2014 11:20 AM, Piotr Karbowski wrote:

Hi

On 01/08/2014 05:29 AM, Stefan Hajnoczi wrote:

On Sun, Dec 22, 2013 at 08:59:56PM +0100, Piotr Karbowski wrote:

Looks like -netdev bridge has no support for the vhost=on switch. Would
be nice if that could be added.


Try -netdev tap,helper=/path/to/qemu-bridge-helper,vhost=on,...


Sadly, -netdev tap does not support br= so I can't specify the bridge
for helper.

And I can't pass the '--use-br=' argument to tap's helper= option, so there's
no workaround here.

-- Piotr.





You should be able to pass a command line like this:

  -net tap,helper="/usr/local/libexec/qemu-bridge-helper --br=br0"
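For the vhost=on case that started this thread, the helper and its bridge argument would be passed as a single quoted option value; a full invocation might look roughly like this (a sketch; the helper path and ids are illustrative, and it assumes the helper accepts the bridge argument as part of that string, as suggested above):

    qemu-system-x86_64 ... \
      -netdev tap,id=hostnet0,vhost=on,helper="/usr/local/libexec/qemu-bridge-helper --br=br0" \
      -device virtio-net-pci,netdev=hostnet0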

--
Regards,
Corey Bryant




[Qemu-devel] [PATCH] seccomp: exit if seccomp_init() fails

2013-12-18 Thread Corey Bryant
This fixes a bug where we weren't exiting if seccomp_init() failed.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 qemu-seccomp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/qemu-seccomp.c b/qemu-seccomp.c
index cf07869..b7c1253 100644
--- a/qemu-seccomp.c
+++ b/qemu-seccomp.c
@@ -231,6 +231,7 @@ int seccomp_start(void)
 
 ctx = seccomp_init(SCMP_ACT_KILL);
 if (ctx == NULL) {
+rc = -1;
 goto seccomp_return;
 }
 
-- 
1.8.1.4
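For context, the error path the patch completes looks roughly like this (a condensed sketch of seccomp_start(), not the verbatim function):

    int seccomp_start(void)
    {
        int rc = 0;
        scmp_filter_ctx ctx;

        ctx = seccomp_init(SCMP_ACT_KILL);   /* kill offending syscalls by default */
        if (ctx == NULL) {
            rc = -1;                         /* the added line: report the failure */
            goto seccomp_return;
        }

        /* ... whitelist rules are added here with seccomp_rule_add() ... */

        rc = seccomp_load(ctx);

      seccomp_return:
        seccomp_release(ctx);
        return rc;
    }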




Re: [Qemu-devel] [PATCH] seccomp: -sandbox on won't kill Qemu when option not built in

2013-12-09 Thread Corey Bryant



On 12/09/2013 12:51 PM, Eduardo Otubo wrote:



On 12/09/2013 03:33 PM, Daniel P. Berrange wrote:

On Mon, Dec 09, 2013 at 03:20:52PM -0200, Eduardo Otubo wrote:

This option was requested by the virt-test team so they can run tests with
Qemu and -sandbox on set without breaking the whole test if the host doesn't
have support for seccomp in the kernel. It covers two possibilities:

  1) Host kernel does not support seccomp, but the user installed a Qemu
 package with sandbox support: Libseccomp will fail - qemu will fail
 nicely and won't stop execution.

  2) Host kernel has support but the Qemu package wasn't built with the sandbox
 feature. Qemu will fail nicely and won't stop execution.

Signed-off-by: Eduardo Otubo ot...@linux.vnet.ibm.com
---
  vl.c | 10 +++---
  1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/vl.c b/vl.c
index b0399de..a0806dc 100644
--- a/vl.c
+++ b/vl.c
@@ -967,13 +967,11 @@ static int parse_sandbox(QemuOpts *opts, void *opaque)
   #ifdef CONFIG_SECCOMP
   if (seccomp_start() < 0) {
   qerror_report(ERROR_CLASS_GENERIC_ERROR,
-  "failed to install seccomp syscall filter in the kernel");
-return -1;
+  "failed to install seccomp syscall filter in the kernel, disabling it");
   }
   #else
   qerror_report(ERROR_CLASS_GENERIC_ERROR,
-  "sandboxing request but seccomp is not compiled into this build");
-return -1;
+  "sandboxing request but seccomp is not compiled into this build, disabling it");
   #endif
   }

@@ -3808,9 +3806,7 @@ int main(int argc, char **argv, char **envp)
   exit(1);
   }

-if (qemu_opts_foreach(qemu_find_opts("sandbox"), parse_sandbox, NULL, 0)) {
-exit(1);
-}
+qemu_opts_foreach(qemu_find_opts("sandbox"), parse_sandbox, NULL, 0);

   #ifndef _WIN32
   if (qemu_opts_foreach(qemu_find_opts("add-fd"), parse_add_fd, NULL, 1)) {


This change is really dubious from a security POV. If the admin requested
sandboxing and the host or QEMU build cannot support it, then QEMU really
*must* exit.


I think an admin must know what he's doing. If he requested sandbox but
without kernel support, he needs to step back a little and understand what
he's doing. This patch won't decrease the security level, IMHO.



Preventing qemu from exiting is definitely not the right approach. 
Please feel free to run code by me ahead of time in the future.




IMHO the test suite should probe to see if sandbox is working or not, and
just not use the -sandbox on arg if the host doesn't support it.


But I think this could be done on virt-test as well :)



This would make sense.

Although it sounds like Lucas was looking for an error message when 
seccomp kills qemu.  Maybe virt-test could grep the audit log for the 
existence of a type=SECCOMP record within the test's time of 
execution, and issue a message based on that.
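A rough sketch of what such a check could look like on a typical host (the audit log path and the availability of the audit userspace tools vary; adjust as needed):

    # did seccomp kill anything during the test window?
    sudo grep 'type=SECCOMP' /var/log/audit/audit.log
    # or, with the audit userspace tools installed:
    sudo ausearch -m SECCOMP --start recent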


--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH for-1.7] seccomp: setting -sandbox on by default

2013-12-04 Thread Corey Bryant



On 12/04/2013 08:21 AM, Eduardo Otubo wrote:



On 12/04/2013 07:39 AM, Stefan Hajnoczi wrote:

On Fri, Nov 22, 2013 at 11:00:24AM -0500, Paul Moore wrote:

Developers will only be happy with seccomp if it's easy and
rewarding to
support/debug.


Agreed.

As a developer, how do you feel about the audit/syslog based approach I
mentioned earlier?


I used the commands you posted (I think that's what you mean).  They
produce useful output.

The problem is that without an error message on stderr or from the
shell, no one will think QEMU process dead and hung == check seccomp
immediately.  It's frustrating to deal with a silent failure.


The process dies with a SIGKILL, and signal handling in Qemu is hard to
implement due to the dozens of externally linked libraries that have their own
signal masks and conflict with seccomp. I've already tried this approach in
the past (you can find it on the list by searching for debug mode)


And just to be clear, the signal handling approach was only for debug 
purposes.


There are basically three ways to fail a syscall with seccomp:

SECCOMP_RET_KILL - kernel kills the task immediately without executing the syscall
SECCOMP_RET_TRAP - kernel sends SIGSYS to the task without executing the syscall
SECCOMP_RET_ERRNO - kernel returns an errno to the task without executing the syscall


You could issue a better error message if you used TRAP or ERRNO, but 
giving control back to QEMU after (presumably) arbitrary code is being 
executed sort of defeats the purpose.
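For illustration, here is a minimal standalone libseccomp program using the ERRNO action instead of KILL (a sketch unrelated to the QEMU code; the choice of EPERM and the tiny whitelist are arbitrary; build with -lseccomp):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <seccomp.h>

    int main(void)
    {
        /* return EPERM instead of killing the task for filtered syscalls */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(EPERM));
        if (ctx == NULL)
            return 1;

        /* explicitly allow the syscalls we still need */
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

        if (seccomp_load(ctx) < 0) {
            seccomp_release(ctx);
            return 1;
        }
        seccomp_release(ctx);

        /* getpid() is not whitelisted: it now fails with EPERM instead of SIGKILL */
        if (getpid() < 0)
            perror("getpid");
        return 0;
    }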


--
Regards,
Corey Bryant



The optimal goal here is to use virt-test and audit log to eliminate
these sorts of things.






Re: [Qemu-devel] [PATCH 4/4] tpm: Provide libtpms software TPM backend

2013-11-26 Thread Corey Bryant


On 11/25/2013 10:04 PM, Xu, Quan wrote:

  Thanks Bryant, this problem has been solved by following
http://www.mail-archive.com/qemu-devel@nongnu.org/msg200808.html.
  But there is another problem when running configure with ./configure
--target-list=x86_64-softmmu --enable-tpm: the value of libtpms is still no. When
I modified tpm_libtpms to yes in the configure file directly and ran make, it
reported the error hw/tpm/tpm_libtpms.c:21:33: fatal error: libtpms/tpm_library.h:
No such file or directory.  Now I am installing libtpms from
https://github.com/coreycb/libtpms for the libtpms lib. Could you share the
specific steps to configure QEMU based on your patch, if it comes easily to you?


Here's what I've been using to build libtpms:

$ CFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

$ export CFLAGS
$ ./configure --build=x86_64-redhat-linux-gnu --prefix=/usr 
--libdir=/usr/lib64

$ make
$ sudo make install

And then the configure you're using above should work for QEMU.
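That is, restating the configure invocation mentioned earlier in the thread:

    $ ./configure --target-list=x86_64-softmmu --enable-tpm
    $ make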


  BTW, one target of my team is enabling stubdom vtpm for HVM virtual 
machines on Xen virtualization; your patches and seabios are big breakthroughs. 
My team is very interested in collaborating with you / the Qemu community in 
similar areas.


That's great to hear!

Unfortunately, the current approach of linking QEMU against libtpms 
doesn't look like it's going to make it upstream.  So it looks like we 
need to take a different approach.


Btw, I thought Xen already had TPM support.  Is that not supported in 
stubdom's?


--
Regards,
Corey Bryant



I'd be really pleased if you can help me on these issues.

Quan Xu
Intel



-Original Message-
From: Corey Bryant [mailto:cor...@linux.vnet.ibm.com]
Sent: Monday, November 25, 2013 9:53 PM
To: Xu, Quan
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 4/4] tpm: Provide libtpms software TPM
backend



On 11/24/2013 10:36 PM, Xu, Quan wrote:

Bryant,

   I found that there is some conflict in qemu-options.hx between your
patch and qemu-1.7.0-rc1.tar.bz2
(http://wiki.qemu-project.org/download/qemu-1.7.0-rc1.tar.bz2).


What QEMU version is this patch based on? Thanks.





Quan Xu

Intel





Thanks Quan.  I believe I built these on top of commit
c2d30667760e3d7b81290d801e567d4f758825ca.  I don't think this series is going
to make it upstream though so I likely won't be submitting a v2.

--
Regards,
Corey Bryant








Re: [Qemu-devel] [PATCH 4/4] tpm: Provide libtpms software TPM backend

2013-11-25 Thread Corey Bryant



On 11/24/2013 10:36 PM, Xu, Quan wrote:

Bryant,

  I found that there is some conflict in qemu-options.hx between your patch
and qemu-1.7.0-rc1.tar.bz2
(http://wiki.qemu-project.org/download/qemu-1.7.0-rc1.tar.bz2).

What QEMU version is this patch based on? Thanks.





Quan Xu

Intel





Thanks Quan.  I believe I built these on top of commit 
c2d30667760e3d7b81290d801e567d4f758825ca.  I don't think this series is 
going to make it upstream though so I likely won't be submitting a v2.


--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH 0/4] tpm: Provide a software vTPM

2013-11-19 Thread Corey Bryant


On 11/19/2013 02:50 PM, Anthony Liguori wrote:

On Wed, Nov 6, 2013 at 6:38 AM, Corey Bryant cor...@linux.vnet.ibm.com wrote:

This patch series provides support for a software Trusted Platform
Module (otherwise known as a vTPM).  This support is provided via a
new backend that works with the existing QEMU tpm-tis front end.


We do device emulation within QEMU.  This is fundamentally what QEMU does.

Why should we link against an external library instead of providing
TPM emulation within QEMU itself?  What makes TPM so special here?


Because 70k+ LOC *definitely* doesn't have a chance of getting into 
QEMU, so it makes more sense to link against a library.




I know the answer to these questions of course.  There isn't a good
reason but there exists vTPM as an external tool for historical
reasons.  I don't think that's a good justification for doing this.
libtpms has had no review by anyone and does not have a community
around it.  Once we link against it, we are responsible for resolving


The source is now more readily available on github and while the 
community is small, there is a community.  Besides, QEMU uses other 
libraries that have very small communities doesn't it?



any security issue around it and fixing any bug within it.


Is this really true?  Is QEMU responsible for fixing every bug in glibc?

--
Regards,
Corey Bryant



That's essentially asking us to merge 70k+ LOCS without any review or
validation ahead of time.  That's an unreasonable request.

Regards,

Anthony Liguori


With this patch series, multiple guests can run with their own vTPM.
In comparison, the existing passthrough vTPM does not allow this
because the host TPM cannot be shared.

Note: There is seabios code that is not yet upstream that is
required to run with this support.  It provides support such as
initialization, ACPI table updates, and menu updates.  If anyone
would like to run with that support, let me know and I can send you
a bios.bin.

Following is a sample command line:

qemu-img create -f qcow2 /home/qemu/images/nvram.qcow2 500K

qemu-system-x86_64 ... \
-drive file=/home/qemu/images/nvram.qcow2,if=none,id=nvram0-0-0,format=qcow2 \
-device tpm-tis,tpmdev=tpm-tpm0,id=tpm0 \
-tpmdev libtpms,id=tpm-tpm0,nvram=nvram0-0-0

Corey Bryant (4):
   tpm: Add TPM NVRAM implementation
   tpm: Share tpm_write_fatal_error_response
   tpm: QMP/HMP support for libtpms TPM backend
   tpm: Provide libtpms software TPM backend

  configure|   24 ++
  hmp.c|5 +
  hw/tpm/Makefile.objs |2 +
  hw/tpm/tpm_libtpms.c |  885 ++
  hw/tpm/tpm_nvram.c   |  340 
  hw/tpm/tpm_nvram.h   |   25 ++
  hw/tpm/tpm_passthrough.c |   14 -
  hw/tpm/tpm_tis.h |1 +
  include/sysemu/tpm_backend.h |3 +
  qapi-schema.json |   18 +-
  qemu-options.hx  |   31 ++-
  tpm.c|   28 ++-
  12 files changed, 1357 insertions(+), 19 deletions(-)
  create mode 100644 hw/tpm/tpm_libtpms.c
  create mode 100644 hw/tpm/tpm_nvram.c
  create mode 100644 hw/tpm/tpm_nvram.h










[Qemu-devel] [PATCH 0/4] tpm: Provide a software vTPM

2013-11-06 Thread Corey Bryant
This patch series provides support for a software Trusted Platform
Module (otherwise known as a vTPM).  This support is provided via a
new backend that works with the existing QEMU tpm-tis front end.

With this patch series, multiple guests can run with their own vTPM.
In comparison, the existing passthrough vTPM does not allow this
because the host TPM cannot be shared.

Note: There is seabios code that is not yet upstream that is
required to run with this support.  It provides support such as
initialization, ACPI table updates, and menu updates.  If anyone
would like to run with that support, let me know and I can send you
a bios.bin.

Following is a sample command line:

qemu-img create -f qcow2 /home/qemu/images/nvram.qcow2 500K

qemu-system-x86_64 ... \
-drive file=/home/qemu/images/nvram.qcow2,if=none,id=nvram0-0-0,format=qcow2 \
-device tpm-tis,tpmdev=tpm-tpm0,id=tpm0 \
-tpmdev libtpms,id=tpm-tpm0,nvram=nvram0-0-0

Corey Bryant (4):
  tpm: Add TPM NVRAM implementation
  tpm: Share tpm_write_fatal_error_response
  tpm: QMP/HMP support for libtpms TPM backend
  tpm: Provide libtpms software TPM backend

 configure|   24 ++
 hmp.c|5 +
 hw/tpm/Makefile.objs |2 +
 hw/tpm/tpm_libtpms.c |  885 ++
 hw/tpm/tpm_nvram.c   |  340 
 hw/tpm/tpm_nvram.h   |   25 ++
 hw/tpm/tpm_passthrough.c |   14 -
 hw/tpm/tpm_tis.h |1 +
 include/sysemu/tpm_backend.h |3 +
 qapi-schema.json |   18 +-
 qemu-options.hx  |   31 ++-
 tpm.c|   28 ++-
 12 files changed, 1357 insertions(+), 19 deletions(-)
 create mode 100644 hw/tpm/tpm_libtpms.c
 create mode 100644 hw/tpm/tpm_nvram.c
 create mode 100644 hw/tpm/tpm_nvram.h




[Qemu-devel] [PATCH 1/4] tpm: Add TPM NVRAM implementation

2013-11-06 Thread Corey Bryant
Provides TPM NVRAM implementation that enables storing of TPM
NVRAM data in a persistent image file.  The block driver is
used to read/write the drive image.  This will enable, for
example, an encrypted QCOW2 image to be used to store sensitive
keys.

This patch provides APIs that a TPM backend can use to read and
write data.
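For instance, the nvram image referenced on the cover-letter command line could be created with encryption enabled (a sketch; the path and size follow the cover letter, and using the legacy qcow2 encryption=on option is an assumption about how one might exercise the encrypted case):

    $ qemu-img create -f qcow2 -o encryption=on /home/qemu/images/nvram.qcow2 500K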

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
Reviewed-by: Stefan Hajnoczi stefa...@redhat.com
---
v1
 -This patch was formerly known as:
  [PATCH v3 1/3] nvram: Add TPM NVRAM implementation
 -Updated g_malloc to TPM_Malloc in tpm_nvram_co_read
 -Added tpm_nvram_bh NULL check to tpm_nvram_read/write
 -Change API names to tpm_nvram_bdrv_* to avoid conflict
  with libtpms' tpm_nvram_init
 -Moved makefile support to libtpms patch
---
 hw/tpm/tpm_nvram.c |  340 
 hw/tpm/tpm_nvram.h |   25 
 2 files changed, 365 insertions(+), 0 deletions(-)
 create mode 100644 hw/tpm/tpm_nvram.c
 create mode 100644 hw/tpm/tpm_nvram.h

diff --git a/hw/tpm/tpm_nvram.c b/hw/tpm/tpm_nvram.c
new file mode 100644
index 000..ded6d66
--- /dev/null
+++ b/hw/tpm/tpm_nvram.c
@@ -0,0 +1,340 @@
+/*
+ * TPM NVRAM - enables storage of persistent NVRAM data on an image file
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Bergerstef...@us.ibm.com
+ *  Corey Bryant cor...@linux.vnet.ibm.com
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include tpm_nvram.h
+#include block/block_int.h
+#include qemu/thread.h
+#include sysemu/sysemu.h
+
+#include libtpms/tpm_memory.h
+#include libtpms/tpm_error.h
+
+#define TPM_NVRAM_DEBUG 0
+#define DPRINTF(fmt, ...) \
+do { \
+if (TPM_NVRAM_DEBUG) { \
+fprintf(stderr, fmt, ## __VA_ARGS__); \
+} \
+} while (0)
+
+/* Read/write request data */
+typedef struct TPMNvramRWRequest {
+BlockDriverState *bdrv;
+bool is_write;
+uint64_t offset;
+uint8_t **blob_r;
+uint8_t *blob_w;
+uint32_t size;
+int rc;
+
+QemuMutex completion_mutex;
+QemuCond completion;
+
+QSIMPLEQ_ENTRY(TPMNvramRWRequest) list;
+} TPMNvramRWRequest;
+
+/* Mutex protected queue of read/write requests */
+static QemuMutex tpm_nvram_rwrequests_mutex;
+static QSIMPLEQ_HEAD(, TPMNvramRWRequest) tpm_nvram_rwrequests =
+QSIMPLEQ_HEAD_INITIALIZER(tpm_nvram_rwrequests);
+
+static QEMUBH *tpm_nvram_bh;
+
+/*
+ * Get the disk size in kilobytes needed to store a blob (rounded up to next 
kb)
+ */
+static uint64_t tpm_nvram_required_size_kb(uint64_t offset, uint32_t size)
+{
+uint64_t required_size = offset + size;
+return DIV_ROUND_UP(required_size, 1024);
+}
+
+/*
+ * Increase the drive size if it's too small to store the blob
+ */
+static int tpm_nvram_adjust_size(BlockDriverState *bdrv, uint64_t offset,
+ uint32_t size)
+{
+int rc = 0;
+int64_t drive_size, required_size;
+
+rc = bdrv_getlength(bdrv);
+if (rc  0) {
+DPRINTF(%s: Unable to determine TPM NVRAM drive size\n, __func__);
+return rc;
+}
+
+drive_size = rc;
+required_size = tpm_nvram_required_size_kb(offset, size) * 1024;
+
+if (drive_size  required_size) {
+rc = bdrv_truncate(bdrv, required_size);
+if (rc  0) {
+DPRINTF(%s: TPM NVRAM drive too small\n, __func__);
+}
+}
+
+return rc;
+}
+
+/*
+ * Coroutine that reads a blob from the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_read(void *opaque)
+{
+int rc;
+TPMNvramRWRequest *rwr = opaque;
+
+rc = TPM_Malloc((unsigned char **)rwr-blob_r, rwr-size);
+if (rc != TPM_SUCCESS) {
+goto exit;
+}
+
+rc = bdrv_pread(rwr-bdrv, rwr-offset, *rwr-blob_r, rwr-size);
+if (rc != rwr-size) {
+TPM_Free(*rwr-blob_r);
+*rwr-blob_r = NULL;
+}
+
+exit:
+qemu_mutex_lock(rwr-completion_mutex);
+rwr-rc = rc;
+qemu_cond_signal(rwr-completion);
+qemu_mutex_unlock(rwr-completion_mutex);
+}
+
+/*
+ * Coroutine that writes a blob to the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_write(void *opaque)
+{
+int rc;
+TPMNvramRWRequest *rwr = opaque;
+
+rc = tpm_nvram_adjust_size(rwr-bdrv, rwr-offset, rwr-size);
+if (rc  0) {
+goto exit;
+}
+
+rc = bdrv_pwrite(rwr-bdrv, rwr-offset, rwr-blob_w, rwr-size);
+
+exit:
+qemu_mutex_lock(rwr-completion_mutex);
+rwr-rc = rc;
+qemu_cond_signal(rwr-completion);
+qemu_mutex_unlock(rwr-completion_mutex);
+}
+
+/*
+ * Enter a coroutine to read a blob from the drive
+ */
+static void tpm_nvram_do_co_read(TPMNvramRWRequest *rwr)
+{
+Coroutine *co;
+
+co = qemu_coroutine_create(tpm_nvram_co_read);
+qemu_coroutine_enter(co, rwr);
+}
+
+/*
+ * Enter a coroutine to write a blob to the drive
+ */
+static void tpm_nvram_do_co_write(TPMNvramRWRequest *rwr

[Qemu-devel] [PATCH 2/4] tpm: Share tpm_write_fatal_error_response

2013-11-06 Thread Corey Bryant
Move tpm_write_fatal_error_response to tpm.c where it can be shared
by multiple backends, and change the return code from void to the
size of the message.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 hw/tpm/tpm_passthrough.c |   14 --
 hw/tpm/tpm_tis.h |1 +
 include/sysemu/tpm_backend.h |2 ++
 tpm.c|   19 +++
 4 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/hw/tpm/tpm_passthrough.c b/hw/tpm/tpm_passthrough.c
index 56e9e0f..602307a 100644
--- a/hw/tpm/tpm_passthrough.c
+++ b/hw/tpm/tpm_passthrough.c
@@ -98,20 +98,6 @@ static uint32_t tpm_passthrough_get_size_from_buffer(const 
uint8_t *buf)
 return be32_to_cpu(resp-len);
 }
 
-/*
- * Write an error message in the given output buffer.
- */
-static void tpm_write_fatal_error_response(uint8_t *out, uint32_t out_len)
-{
-if (out_len = sizeof(struct tpm_resp_hdr)) {
-struct tpm_resp_hdr *resp = (struct tpm_resp_hdr *)out;
-
-resp-tag = cpu_to_be16(TPM_TAG_RSP_COMMAND);
-resp-len = cpu_to_be32(sizeof(struct tpm_resp_hdr));
-resp-errcode = cpu_to_be32(TPM_FAIL);
-}
-}
-
 static int tpm_passthrough_unix_tx_bufs(TPMPassthruState *tpm_pt,
 const uint8_t *in, uint32_t in_len,
 uint8_t *out, uint32_t out_len)
diff --git a/hw/tpm/tpm_tis.h b/hw/tpm/tpm_tis.h
index 916152a..6dec43b 100644
--- a/hw/tpm/tpm_tis.h
+++ b/hw/tpm/tpm_tis.h
@@ -19,6 +19,7 @@
 
 #include hw/isa/isa.h
 #include qemu-common.h
+#include sysemu/tpm_backend.h
 
 #define TPM_TIS_ADDR_BASE   0xFED4
 
diff --git a/include/sysemu/tpm_backend.h b/include/sysemu/tpm_backend.h
index 825f33b..c7a7281 100644
--- a/include/sysemu/tpm_backend.h
+++ b/include/sysemu/tpm_backend.h
@@ -207,4 +207,6 @@ const TPMDriverOps *tpm_get_backend_driver(const char 
*type);
 int tpm_register_model(enum TpmModel model);
 int tpm_register_driver(const TPMDriverOps *tdo);
 
+uint32_t tpm_write_fatal_error_response(uint8_t *out, uint32_t out_len);
+
 #endif
diff --git a/tpm.c b/tpm.c
index d68d69f..1dd516b 100644
--- a/tpm.c
+++ b/tpm.c
@@ -19,6 +19,7 @@
 #include sysemu/tpm.h
 #include qemu/config-file.h
 #include qmp-commands.h
+#include hw/tpm/tpm_int.h
 
 static QLIST_HEAD(, TPMBackend) tpm_backends =
 QLIST_HEAD_INITIALIZER(tpm_backends);
@@ -61,6 +62,24 @@ static bool tpm_model_is_registered(enum TpmModel model)
 return false;
 }
 
+/*
+ * Write an error message in the given output buffer.
+ */
+uint32_t tpm_write_fatal_error_response(uint8_t *out, uint32_t out_len)
+{
+if (out_len = sizeof(struct tpm_resp_hdr)) {
+struct tpm_resp_hdr *resp = (struct tpm_resp_hdr *)out;
+
+resp-tag = cpu_to_be16(TPM_TAG_RSP_COMMAND);
+resp-len = cpu_to_be32(sizeof(struct tpm_resp_hdr));
+resp-errcode = cpu_to_be32(TPM_FAIL);
+
+return sizeof(struct tpm_resp_hdr);
+}
+
+return 0;
+}
+
 const TPMDriverOps *tpm_get_backend_driver(const char *type)
 {
 int i;
-- 
1.7.1




[Qemu-devel] [PATCH 4/4] tpm: Provide libtpms software TPM backend

2013-11-06 Thread Corey Bryant
This patch provides a software TPM backend implementation. The
core software TPM functionality is provided by the libtpms
library.  With this patch, multiple guests can run with their
own emulated TPMs.

The libtpms repository can be found at:
https://github.com/coreycb/libtpms

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 configure|   24 ++
 hw/tpm/Makefile.objs |2 +
 hw/tpm/tpm_libtpms.c |  885 ++
 qemu-options.hx  |   31 ++-
 tpm.c|2 +-
 5 files changed, 941 insertions(+), 3 deletions(-)
 create mode 100644 hw/tpm/tpm_libtpms.c

diff --git a/configure b/configure
index 9addff1..085142f 100755
--- a/configure
+++ b/configure
@@ -2475,6 +2475,26 @@ else
 fi
 
 ##
+# TPM libtpms probe
+
+tpm_libtpms=no
+if test $tpm != no ; then
+cat  $TMPC EOF
+#include libtpms/tpm_library.h
+#include libtpms/tpm_error.h
+#include libtpms/tpm_memory.h
+#include libtpms/tpm_nvfilename.h
+#include libtpms/tpm_tis.h
+int main(int argc, char **argv) { TPMLIB_GetVersion(); return 0; }
+EOF
+  libtpms_libs=`$pkg_config libtpms --libs 2 /dev/null`
+  if compile_prog $libtpms_libs ; then
+tpm_libtpms=$tpm
+libs_softmmu=$libs_softmmu $libtpms_libs
+  fi
+fi
+
+##
 # adjust virtio-blk-data-plane based on linux-aio
 
 if test $virtio_blk_data_plane = yes -a \
@@ -3746,6 +3766,7 @@ echo gcov enabled  $gcov
 echo TPM support   $tpm
 echo libssh2 support   $libssh2
 echo TPM passthrough   $tpm_passthrough
+echo TPM libtpms   $tpm_libtpms
 echo QOM debugging $qom_cast_debug
 
 if test $sdl_too_old = yes; then
@@ -4154,6 +4175,9 @@ if test $tpm = yes; then
   if test $tpm_passthrough = yes; then
 echo CONFIG_TPM_PASSTHROUGH=y  $config_host_mak
   fi
+  if test $tpm_libtpms = yes; then
+echo CONFIG_TPM_LIBTPMS=y  $config_host_mak
+  fi
 fi
 
 # use default implementation for tracing backend-specific routines
diff --git a/hw/tpm/Makefile.objs b/hw/tpm/Makefile.objs
index 99f5983..77e9065 100644
--- a/hw/tpm/Makefile.objs
+++ b/hw/tpm/Makefile.objs
@@ -1,2 +1,4 @@
 common-obj-$(CONFIG_TPM_TIS) += tpm_tis.o
 common-obj-$(CONFIG_TPM_PASSTHROUGH) += tpm_passthrough.o
+common-obj-$(CONFIG_TPM_LIBTPMS) += tpm_libtpms.o
+common-obj-$(CONFIG_TPM_LIBTPMS) += tpm_nvram.o
diff --git a/hw/tpm/tpm_libtpms.c b/hw/tpm/tpm_libtpms.c
new file mode 100644
index 000..f9c1f80
--- /dev/null
+++ b/hw/tpm/tpm_libtpms.c
@@ -0,0 +1,885 @@
+/*
+ * libtpms TPM driver
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Berger   stef...@us.ibm.com
+ *  Corey Bryantcor...@linux.vnet.ibm.com
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include sysemu/tpm_backend.h
+#include tpm_int.h
+#include tpm_nvram.h
+#include qapi/qmp/qerror.h
+#include migration/migration.h
+#include sysemu/tpm_backend_int.h
+
+#include libtpms/tpm_library.h
+#include libtpms/tpm_error.h
+#include libtpms/tpm_memory.h
+#include libtpms/tpm_nvfilename.h
+#include libtpms/tpm_tis.h
+
+/* #define DEBUG_TPM */
+
+#ifdef DEBUG_TPM
+#define DPRINTF(fmt, ...) \
+do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
+#define DPRINTF_BUFFER(buffer, len) \
+do { tpm_ltpms_dump_buffer(stderr, buffer, len); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+do { } while (0)
+#define DPRINTF_BUFFER(buffer, len) \
+do { } while (0)
+#endif
+
+#define NVRAM_BLOB_OFFSET_FROM_ENTRY(entry_offset) \
+(entry_offset + sizeof(uint32_t))
+
+#define TYPE_TPM_LIBTPMS tpm-libtpms
+#define TPM_LIBTPMS(obj) \
+OBJECT_CHECK(TPMLTPMsState, (obj), TYPE_TPM_LIBTPMS)
+
+static const TPMDriverOps tpm_ltpms_driver;
+
+/* data structures */
+typedef struct TPMLTPMsThreadParams {
+TPMState *tpm_state;
+
+TPMRecvDataCB *recv_data_callback;
+TPMBackend *tb;
+} TPMLTPMsThreadParams;
+
+struct NVRAMEntry {
+uint32_t cur_size;
+uint8_t *buffer;
+};
+
+typedef struct NVRAMEntry NVRAMEntry;
+
+struct TPMLTPMsState {
+TPMBackend parent;
+
+TPMBackendThread tbt;
+
+TPMLTPMsThreadParams tpm_thread_params;
+
+bool tpm_initialized;
+bool had_fatal_error;
+
+BlockDriverState *bdrv;
+
+NVRAMEntry *perm_state_entry;
+NVRAMEntry *save_state_entry;
+NVRAMEntry *vola_state_entry;
+
+uint32_t perm_state_entry_offset;
+uint32_t save_state_entry_offset;
+uint32_t vola_state_entry_offset;
+
+uint32_t perm_state_max_size;
+uint32_t save_state_max_size;
+uint32_t vola_state_max_size;
+
+QemuMutex tpm_initialized_mutex;
+
+uint8_t locty; /* locality of command being executed by libtpms */
+};
+
+typedef struct TPMLTPMsState TPMLTPMsState;
+
+static TPMBackend *tpm_backend;
+
+/* functions */
+
+#ifdef DEBUG_TPM
+static inline void tpm_ltpms_dump_buffer(FILE *stream, unsigned char *buffer

[Qemu-devel] [PATCH 3/4] tpm: QMP/HMP support for libtpms TPM backend

2013-11-06 Thread Corey Bryant
This patch provides HMP 'info tpm', QMP 'query-tpm' and QMP
'query-tpm-types' support for the libtpms TPM backend.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 hmp.c|5 +
 include/sysemu/tpm_backend.h |1 +
 qapi-schema.json |   18 --
 tpm.c|7 +++
 4 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/hmp.c b/hmp.c
index 32ee285..0903969 100644
--- a/hmp.c
+++ b/hmp.c
@@ -673,6 +673,7 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
 Error *err = NULL;
 unsigned int c = 0;
 TPMPassthroughOptions *tpo;
+TPMLibtpmsOptions *tlo;
 
 info_list = qmp_query_tpm(err);
 if (err) {
@@ -702,6 +703,10 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
tpo-has_cancel_path ? ,cancel-path= : ,
tpo-has_cancel_path ? tpo-cancel_path : );
 break;
+case TPM_TYPE_OPTIONS_KIND_LIBTPMS:
+tlo = ti-options-libtpms;
+monitor_printf(mon, ,nvram=%s, tlo-nvram);
+break;
 case TPM_TYPE_OPTIONS_KIND_MAX:
 break;
 }
diff --git a/include/sysemu/tpm_backend.h b/include/sysemu/tpm_backend.h
index c7a7281..e648f79 100644
--- a/include/sysemu/tpm_backend.h
+++ b/include/sysemu/tpm_backend.h
@@ -51,6 +51,7 @@ struct TPMBackend {
 enum TpmModel fe_model;
 char *path;
 char *cancel_path;
+char *nvram_id;
 const TPMDriverOps *ops;
 
 QLIST_ENTRY(TPMBackend) list;
diff --git a/qapi-schema.json b/qapi-schema.json
index 81a375b..564e529 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -3673,10 +3673,11 @@
 # An enumeration of TPM types
 #
 # @passthrough: TPM passthrough type
+# @libtpms: TPM libtpms type
 #
 # Since: 1.5
 ##
-{ 'enum': 'TpmType', 'data': [ 'passthrough' ] }
+{ 'enum': 'TpmType', 'data': [ 'passthrough', 'libtpms' ] }
 
 ##
 # @query-tpm-types:
@@ -3705,16 +3706,29 @@
  '*cancel-path' : 'str'} }
 
 ##
+# @TPMLibtpmsOptions:
+#
+# Information about the TPM libtpms type
+#
+# @nvram: string showing the NVRAM drive id
+#
+# Since: 1.8
+##
+{ 'type': 'TPMLibtpmsOptions', 'data': { 'nvram' : 'str' } }
+
+##
 # @TpmTypeOptions:
 #
 # A union referencing different TPM backend types' configuration options
 #
 # @passthrough: The configuration options for the TPM passthrough type
+# @libtpms: The configuration options for the TPM libtpms type
 #
 # Since: 1.5
 ##
 { 'union': 'TpmTypeOptions',
-   'data': { 'passthrough' : 'TPMPassthroughOptions' } }
+   'data': { 'passthrough' : 'TPMPassthroughOptions',
+ 'libtpms' : 'TPMLibtpmsOptions' } }
 
 ##
 # @TpmInfo:
diff --git a/tpm.c b/tpm.c
index 1dd516b..2f4ef52 100644
--- a/tpm.c
+++ b/tpm.c
@@ -275,6 +275,7 @@ static TPMInfo *qmp_query_tpm_inst(TPMBackend *drv)
 {
 TPMInfo *res = g_new0(TPMInfo, 1);
 TPMPassthroughOptions *tpo;
+TPMLibtpmsOptions *tlo;
 
 res-id = g_strdup(drv-id);
 res-model = drv-fe_model;
@@ -294,6 +295,12 @@ static TPMInfo *qmp_query_tpm_inst(TPMBackend *drv)
 tpo-has_cancel_path = true;
 }
 break;
+ case TPM_TYPE_LIBTPMS:
+res-options-kind = TPM_TYPE_OPTIONS_KIND_LIBTPMS;
+tlo = g_new0(TPMLibtpmsOptions, 1);
+res-options-libtpms = tlo;
+tlo-nvram = g_strdup(drv-nvram_id);
+break;
 case TPM_TYPE_MAX:
 break;
 }
-- 
1.7.1




Re: [Qemu-devel] [PATCH 3/4] tpm: QMP/HMP support for libtpms TPM backend

2013-11-06 Thread Corey Bryant



On 11/06/2013 11:13 AM, Eric Blake wrote:

On 11/06/2013 07:39 AM, Corey Bryant wrote:

[your git settings are odd; your messages came across as individual
threads rather than in-reply to the cover letter]



I wonder if that's because I sent each patch separately with git 
send-email?  Sometimes it hangs when I send more than 1.



This patch provides HMP 'info tpm', QMP 'query-tpm' and QMP
'query-tpm-types' support for the libtpms TPM backend.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
  hmp.c|5 +
  include/sysemu/tpm_backend.h |1 +
  qapi-schema.json |   18 --
  tpm.c|7 +++
  4 files changed, 29 insertions(+), 2 deletions(-)

+++ b/qapi-schema.json
@@ -3673,10 +3673,11 @@
  # An enumeration of TPM types
  #
  # @passthrough: TPM passthrough type
+# @libtpms: TPM libtpms type


Worth adding '(since 1.8)' to mark when this enum value was added.


  # @TpmTypeOptions:
  #
  # A union referencing different TPM backend types' configuration options
  #
  # @passthrough: The configuration options for the TPM passthrough type
+# @libtpms: The configuration options for the TPM libtpms type


here as well.


  #
  # Since: 1.5
  ##
  { 'union': 'TpmTypeOptions',
-   'data': { 'passthrough' : 'TPMPassthroughOptions' } }
+   'data': { 'passthrough' : 'TPMPassthroughOptions',
+ 'libtpms' : 'TPMLibtpmsOptions' } }



Otherwise it looks okay to me.



Thanks for the comments.  I'll add them to the next version.  Can I add 
your Reviewed-by to this patch?


--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCHv3 1/3] seccomp: adding blacklist support

2013-10-10 Thread Corey Bryant



On 10/09/2013 05:36 PM, Paul Moore wrote:

On Tuesday, October 08, 2013 09:42:24 PM Eduardo Otubo wrote:

v3: The -netdev tap option is checked in the vl.c file while processing
the command line argument list. It sets tap_enabled to true or false
according to the configuration found. Later, at seccomp filter
installation, this value is checked to decide whether or not to install
this feature.


I like the idea of slowly making the QEMU syscall filter dependent on the
runtime configuration.  With that in mind, I wonder if we should have a more
general purpose API in include/sysemu/seccomp.h that allows QEMU to indicate
to the QEMU/seccomp code that a particular feature is enabled.

Maybe something like this:

   #define SCMP_FEAT_TAP ...

   int seccomp_feature_enable(int feature);


This is a good approach, and then the blacklist can vary based on what 
features are enabled.
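To make that concrete, a registry along the lines Paul sketched might look like this (purely illustrative; SCMP_FEAT_* and seccomp_feature_enable() are proposals from this thread, not an existing QEMU API):

    /* proposed feature flags, following Paul's SCMP_FEAT_TAP suggestion */
    #define SCMP_FEAT_TAP    (1 << 0)
    #define SCMP_FEAT_BRIDGE (1 << 1)

    static unsigned int seccomp_features;

    /* called from option parsing when a feature needing extra syscalls is used */
    int seccomp_feature_enable(int feature)
    {
        seccomp_features |= feature;
        return 0;
    }

    /* consulted while building the blacklist so it can vary per configuration */
    static int seccomp_feature_enabled(int feature)
    {
        return (seccomp_features & feature) != 0;
    }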


--
Regards,
Corey Bryant



One more comment below.


Adding a system call blacklist right before the vcpus start. This
filter is composed of the system calls that can't be executed after the
guests are up. This list should be refined as the whitelist is, with as
much testing as we can do using virt-test.

Signed-off-by: Eduardo Otubo ot...@linux.vnet.ibm.com
---
  include/sysemu/seccomp.h |  6 -
  qemu-seccomp.c   | 64
+++- vl.c |
21 +++-
  3 files changed, 77 insertions(+), 14 deletions(-)

diff --git a/include/sysemu/seccomp.h b/include/sysemu/seccomp.h
index 1189fa2..9dc7e52 100644
--- a/include/sysemu/seccomp.h
+++ b/include/sysemu/seccomp.h
@@ -15,8 +15,12 @@
  #ifndef QEMU_SECCOMP_H
  #define QEMU_SECCOMP_H

+#define WHITELIST 0
+#define BLACKLIST 1


Should these #defines be namespaced in some way, e.g. SCMP_LIST_BLACKLIST?


  #include seccomp.h
  #include qemu/osdep.h

-int seccomp_start(void);
+int seccomp_start(int list_type);
+
  #endif








Re: [Qemu-devel] [PATCHv3 1/3] seccomp: adding blacklist support

2013-10-09 Thread Corey Bryant
)
+{
+if (enable_blacklist && !tap_enabled) {
+if (seccomp_start(BLACKLIST) < 0) {


I don't think this is flexible enough for future growth.  If you're 
going to use a dynamic approach to building the blacklist, then wouldn't 
you want to blacklist syscalls individually based on the option that 
causes them to be used?  The approach you have here would be all or nothing.



+qerror_report(ERROR_CLASS_GENERIC_ERROR,
+  "failed to install seccomp syscall second level filter in the kernel");
+exit(1);
+}
+}
+}
+
  void vm_start(void)
  {
  if (!runstate_is_running()) {
  cpu_enable_ticks();
  runstate_set(RUN_STATE_RUNNING);
  vm_state_notify(1, RUN_STATE_RUNNING);
+install_seccomp_blacklist();
  resume_all_vcpus();
  monitor_protocol_event(QEVENT_RESUME, NULL);
  }
@@ -3208,6 +3222,11 @@ int main(int argc, char **argv, char **envp)
  if (net_client_parse(qemu_find_opts(netdev), optarg) == -1) 
{
  exit(1);
  }
+
+if(strcmp(optarg, tap)){
+tap_enabled = true;
+}


You're not covering all command line options that lead to exec calls.  I 
see the following with 'git grep execv':


net/tap.c:execv(setup_script, args);
net/tap.c:execv(/bin/sh, args);
net/tap.c:execv(helper, args);
slirp/misc.c:   execvp(argv[0], (char **)argv);

So I know you're at least missing -net bridge.  And maybe slirp, but I'm 
not sure about that.


What about hotplugging a network tap or bridge device? You'll need to at 
least document that they're not going to work when -sandbox is in 
effect, and you'll need to fail nicely if they're attempted.



+
  break;
  case QEMU_OPTION_net:
  if (net_client_parse(qemu_find_opts(net), optarg) == -1) {



--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCHv2 2/3] seccomp: adding command line support for blacklist

2013-09-18 Thread Corey Bryant
 a vulnerability before an actual fix 
is available, by blacklisting a syscall or a syscall argument.


--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCHv2 2/3] seccomp: adding command line support for blacklist

2013-09-17 Thread Corey Bryant



On 09/17/2013 01:14 PM, Eduardo Otubo wrote:



On 09/17/2013 11:43 AM, Paul Moore wrote:

On Tuesday, September 17, 2013 02:06:06 PM Daniel P. Berrange wrote:

On Tue, Sep 17, 2013 at 10:01:23AM -0300, Eduardo Otubo wrote:


Paul, what exactly are you planning to add to libvirt? I'm not a big
fan of using the qemu command line to pass syscalls for the blacklist as
arguments, but I can't see another way to keep problems (like -net
bridge / -net tap) from happening.


At present, and as far as I'm concerned pretty much everything is open
for
discussion, the code works similar to the libvirt network filters.
You create
a separate XML configuration file which defines the filter and you
reference
that filter from the domain's XML configuration.  When a QEMU/KVM or
LXC based
domain starts it uses libseccomp to create the seccomp filter and then
loads
it into the kernel after the fork but before the domain is exec'd.


Clever approach. I think a possible way to do this is something like:

  -sandbox
-on[,strict=on|off][,whitelist=qemu_whitelist.conf][,blacklist=qemu_blacklist.conf]


 where:

[,whitelist=qemu_whitelist.conf] will override default whitelist filter
[,blacklist=blacklist.conf] will override default blacklist filter

But when we add seccomp support for qemu on libvirt, we make sure to
just add -sandbox off and use Paul's approach.

Is that a reasonable approach? What do you think?



QEMU wouldn't require any changes for the approach Paul describes. The 
QEMU process that is exec'd by libvirt would be constrained by the 
filter that libvirt installed.


--
Regards,
Corey Bryant



There are no command line arguments passed to QEMU.  This work can
co-exist
with the QEMU seccomp filters without problem.

The original goal of this effort wasn't to add libvirt syscall
filtering for
QEMU, but rather for LXC; adding QEMU support just happened to be a
trivial
patch once the LXC support was added.

(I also apologize for the delays, I hit a snag with an existing
problem on
libvirt which stopped work and then some other BZs grabbed my
attention...)


IMHO, if libvirt is enabling seccomp, then making all possible cli
args work is a non-goal. If there are things which require privileges
seccomp is blocking, then libvirt should avoid using them. eg by making
use of FD passing where appropriate to reduce privileges qemu needs.


I agree.








Re: [Qemu-devel] [PATCHv2 1/3] seccomp: adding blacklist support

2013-09-11 Thread Corey Bryant
) {
+if (seccomp_start(BLACKLIST) < 0) {
+qerror_report(ERROR_CLASS_GENERIC_ERROR,
+  "failed to install seccomp syscall second level filter in the kernel");


As Lei Li mentioned, should you return -1 here like the whitelist 
failure path does?



+}
+}
+}
+
  void vm_start(void)
  {
  if (!runstate_is_running()) {
  cpu_enable_ticks();
  runstate_set(RUN_STATE_RUNNING);
  vm_state_notify(1, RUN_STATE_RUNNING);
+install_seccomp_blacklist();


The blacklist isn't installed when the incoming migration path is taken. 
 In other words, should install_seccomp_blacklist() be called before 
the if (incoming) check in main()?



  resume_all_vcpus();
  monitor_protocol_event(QEVENT_RESUME, NULL);
  }



--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCHv2 2/3] seccomp: adding command line support for blacklist

2013-09-11 Thread Corey Bryant



On 09/06/2013 03:21 PM, Eduardo Otubo wrote:

New command line options for the seccomp blacklist feature:

  $ qemu -sandbox on[,strict=on|off]

The strict parameter will turn on or off the new system call blacklist


I mentioned this before but I'll say it again since I think it needs to 
be discussed.  Since this regresses support (it'll prevent -net bridge 
and -net tap from using execv) the concern I have with the strict=on|off 
option is whether or not we will have the flexibility to modify the 
blacklist once QEMU is released with this support.  Of course we should 
be able to add more syscalls to the blacklist as long as they don't 
regress QEMU functionality.  But if we want to add a syscall that does 
regress QEMU functionality, I think we'd have to add a new command line 
option, which doesn't seem desirable.


So a more flexible approach may be necessary.  Maybe the blacklist 
should be passed on the command line, which would enable it to be 
defined by libvirt and passed to QEMU.  I know Paul is working on 
something for libvirt so maybe that answers this question.




Signed-off-by: Eduardo Otubo ot...@linux.vnet.ibm.com
---
  qemu-options.hx |  8 +---
  vl.c| 11 ++-
  2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index d15338e..05485e1 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2978,13 +2978,15 @@ Old param mode (ARM only).
  ETEXI

  DEF(sandbox, HAS_ARG, QEMU_OPTION_sandbox, \
--sandbox arg  Enable seccomp mode 2 system call filter (default 
'off').\n,
+-sandbox arg  Enable seccomp mode 2 system call filter (default 
'off').\n
+-sandbox on[,strict=arg]\n
+Enable seccomp mode 2 system call second level filter (default 
'off').\n,


Does this need to mention the QEMU features restricted by the blacklist?


  QEMU_ARCH_ALL)
  STEXI
-@item -sandbox @var{arg}
+@item -sandbox @var{arg}[,strict=@var{value}]
  @findex -sandbox
  Enable Seccomp mode 2 system call filter. 'on' will enable syscall filtering 
and 'off' will
-disable it.  The default is 'off'.
+disable it.  The default is 'off'. 'strict=on' will enable second level filter 
(default is 'off').


And here too?


  ETEXI

  DEF(readconfig, HAS_ARG, QEMU_OPTION_readconfig,
diff --git a/vl.c b/vl.c
index 02f7486..909f685 100644
--- a/vl.c
+++ b/vl.c
@@ -329,6 +329,9 @@ static QemuOptsList qemu_sandbox_opts = {
  {
  .name = enable,
  .type = QEMU_OPT_BOOL,
+},{
+.name = strict,
+.type = QEMU_OPT_STRING,
  },
  { /* end of list */ }
  },
@@ -1031,6 +1034,7 @@ static int bt_parse(const char *opt)

  static int parse_sandbox(QemuOpts *opts, void *opaque)
  {
+const char * strict_value = NULL;
  /* FIXME: change this to true for 1.3 */
  if (qemu_opt_get_bool(opts, enable, false)) {
  #ifdef CONFIG_SECCOMP
@@ -1040,7 +1044,12 @@ static int parse_sandbox(QemuOpts *opts, void *opaque)
  return -1;
  }

-enable_blacklist = true;
+strict_value = qemu_opt_get(opts, strict);
+if (strict_value) {
+if (!strcmp(strict_value, on)) {
+enable_blacklist = true;
+}
+}
  #else
  qerror_report(ERROR_CLASS_GENERIC_ERROR,
  "sandboxing request but seccomp is not compiled into this build");



--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCHv3 3/3] seccomp: general fixes

2013-09-11 Thread Corey Bryant



On 09/06/2013 03:21 PM, Eduardo Otubo wrote:

  1) On qemu-seccomp.c:255, the variable ctx was being used
uninitialized; now it's initialized with NULL and it's being checked at
the end of the function.

  2) Changed the name of the command line option from enable to
sandbox for a better understanding from user side.

Signed-off-by: Eduardo Otuboot...@linux.vnet.ibm.com
---
  qemu-seccomp.c | 5 +++--
  vl.c   | 6 +++---
  2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/qemu-seccomp.c b/qemu-seccomp.c
index 5e85eb5..f39d636 100644
--- a/qemu-seccomp.c
+++ b/qemu-seccomp.c
@@ -252,7 +252,7 @@ seccomp_return:
  int seccomp_start(int list_type)
  {
  int rc = 0;
-scmp_filter_ctx ctx;
+scmp_filter_ctx ctx = NULL;

  switch (list_type) {
  case WHITELIST:
@@ -280,6 +280,7 @@ int seccomp_start(int list_type)
  rc = seccomp_load(ctx);

  seccomp_return:
-seccomp_release(ctx);
+if (!ctx)


You need to remove the ! from this check.


+seccomp_release(ctx);
  return rc;
  }
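In other words, with both comments applied the tail of seccomp_start() would presumably end up as (a sketch):

    seccomp_return:
        if (ctx) {
            seccomp_release(ctx);
        }
        return rc;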
diff --git a/vl.c b/vl.c
index 909f685..129919d 100644
--- a/vl.c
+++ b/vl.c
@@ -323,11 +323,11 @@ static QemuOptsList qemu_rtc_opts = {

  static QemuOptsList qemu_sandbox_opts = {
  .name = sandbox,
-.implied_opt_name = enable,
+.implied_opt_name = sandbox,


So does this technically make it -sandbox,sandbox=on?  If I understand 
correctly, I don't think the implied option is ever seen or used by the 
user anyway so it probably doesn't matter.  But I don't know if it's 
worth changing.



  .head = QTAILQ_HEAD_INITIALIZER(qemu_sandbox_opts.head),
  .desc = {
  {
-.name = enable,
+.name = sandbox,
  .type = QEMU_OPT_BOOL,
  },{
  .name = strict,
@@ -1036,7 +1036,7 @@ static int parse_sandbox(QemuOpts *opts, void *opaque)
  {
  const char * strict_value = NULL;
  /* FIXME: change this to true for 1.3 */
-if (qemu_opt_get_bool(opts, enable, false)) {
+if (qemu_opt_get_bool(opts, sandbox, false)) {
  #ifdef CONFIG_SECCOMP
   if (seccomp_start(WHITELIST) < 0) {
  qerror_report(ERROR_CLASS_GENERIC_ERROR,
-- 1.8.3.1



--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH] seccomp: adding times() to the whitelist

2013-09-04 Thread Corey Bryant



On 09/04/2013 08:25 AM, Eduardo Otubo wrote:

This was causing Qemu process to hang when using -sandbox on.

Related RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1004175

Signed-off-by: Eduardo Otubo ot...@linux.vnet.ibm.com
---
  qemu-seccomp.c |1 +
  1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/qemu-seccomp.c b/qemu-seccomp.c
index 37d38f8..69cee44 100644
--- a/qemu-seccomp.c
+++ b/qemu-seccomp.c
@@ -90,6 +90,7 @@ static const struct QemuSeccompSyscall seccomp_whitelist[] = {
  { SCMP_SYS(getuid), 245 },
  { SCMP_SYS(geteuid), 245 },
  { SCMP_SYS(timer_create), 245 },
+{ SCMP_SYS(times), 245 },
  { SCMP_SYS(exit), 245 },
  { SCMP_SYS(clock_gettime), 245 },
  { SCMP_SYS(time), 245 },
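For readers skimming the archive, each whitelist entry pairs a syscall with a priority hint that the filter setup feeds to libseccomp, roughly like the sketch below (the struct field names are assumptions about the surrounding qemu-seccomp.c code, which is not quoted here):

    /* sketch: how an entry such as { SCMP_SYS(times), 245 } is consumed */
    for (i = 0; i < ARRAY_SIZE(seccomp_whitelist); i++) {
        rc = seccomp_rule_add(ctx, SCMP_ACT_ALLOW,
                              seccomp_whitelist[i].num, 0);
        if (rc < 0) {
            goto seccomp_return;
        }
        rc = seccomp_syscall_priority(ctx, seccomp_whitelist[i].num,
                                      seccomp_whitelist[i].priority);
        if (rc < 0) {
            goto seccomp_return;
        }
    }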



Reviewed-by: Corey Bryant cor...@linux.vnet.ibm.com

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH] seccomp: adding a second whitelist

2013-09-03 Thread Corey Bryant



On 08/30/2013 10:21 AM, Eduardo Otubo wrote:



On 08/29/2013 05:34 AM, Stefan Hajnoczi wrote:

On Wed, Aug 28, 2013 at 10:04:32PM -0300, Eduardo Otubo wrote:

Now there's a second whitelist, right before the vcpu starts. The second
whitelist is the same as the first one, except for exec() and select().


-netdev tap,downscript=/path/to/script requires exec() in the QEMU
shutdown code path.  Will this work with seccomp?


I actually don't know, but I'll test that as well. Can you run a test
with this patch and -netdev? I mean, if you're pointing that out you
might have a scenario already setup, right?

Thanks!



This uses exec() in net/tap.c.

I think if we're going to introduce a sandbox environment that restricts 
existing QEMU behavior, then we have to introduce a new argument to the 
-sandbox option.  So for example, -sandbox on would continue to use 
the whitelist that allows everything in QEMU to work (or at least it 
should :).  And something like -sandbox on,strict=on would use the 
whitelist + blacklist.


If this is acceptable though, then I wonder how we could go about adding 
new syscalls to the blacklist in future QEMU releases without regressing 
-sandbox on,strict=on.


By the way, are any test buckets running regularly with -sandbox on?

--
Regards,
Corey Bryant



Stefan








Re: [Qemu-devel] [PATCH] seccomp: adding a second whitelist

2013-09-03 Thread Corey Bryant



On 09/03/2013 02:02 PM, Corey Bryant wrote:



On 08/30/2013 10:21 AM, Eduardo Otubo wrote:



On 08/29/2013 05:34 AM, Stefan Hajnoczi wrote:

On Wed, Aug 28, 2013 at 10:04:32PM -0300, Eduardo Otubo wrote:

Now there's a second whitelist, right before the vcpu starts. The
second
whitelist is the same as the first one, except for exec() and select().


-netdev tap,downscript=/path/to/script requires exec() in the QEMU
shutdown code path.  Will this work with seccomp?


I actually don't know, but I'll test that as well. Can you run a test
with this patch and -netdev? I mean, if you're pointing that out you
might have a scenario already setup, right?

Thanks!



This uses exec() in net/tap.c.

I think if we're going to introduce a sandbox environment that restricts
existing QEMU behavior, then we have to introduce a new argument to the
-sandbox option.  So for example, -sandbox on would continue to use
the whitelist that allows everything in QEMU to work (or at least it
should :).  And something like -sandbox on,strict=on would use the
whitelist + blacklist.

If this is acceptable though, then I wonder how we could go about adding
new syscalls to the blacklist in future QEMU releases without regressing
-sandbox on,strict=on.


Maybe a better approach is to provide support that allows libvirt to 
define the blacklist and pass it to QEMU?




By the way, are any test buckets running regularly with -sandbox on?



--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH] seccomp: adding a second whitelist

2013-09-03 Thread Corey Bryant



On 09/03/2013 02:21 PM, Paul Moore wrote:

On Tuesday, September 03, 2013 02:08:28 PM Corey Bryant wrote:

On 09/03/2013 02:02 PM, Corey Bryant wrote:

On 08/30/2013 10:21 AM, Eduardo Otubo wrote:

On 08/29/2013 05:34 AM, Stefan Hajnoczi wrote:

On Wed, Aug 28, 2013 at 10:04:32PM -0300, Eduardo Otubo wrote:

Now there's a second whitelist, right before the vcpu starts. The
second
whitelist is the same as the first one, except for exec() and select().


-netdev tap,downscript=/path/to/script requires exec() in the QEMU
shutdown code path.  Will this work with seccomp?


I actually don't know, but I'll test that as well. Can you run a test
with this patch and -netdev? I mean, if you're pointing that out you
might have a scenario already setup, right?

Thanks!


This uses exec() in net/tap.c.

I think if we're going to introduce a sandbox environment that restricts
existing QEMU behavior, then we have to introduce a new argument to the
-sandbox option.  So for example, -sandbox on would continue to use
the whitelist that allows everything in QEMU to work (or at least it
should :).  And something like -sandbox on,strict=on would use the
whitelist + blacklist.

If this is acceptable though, then I wonder how we could go about adding
new syscalls to the blacklist in future QEMU releases without regressing
-sandbox on,strict=on.


Maybe a better approach is to provide support that allows libvirt to
define the blacklist and pass it to QEMU?


FYI: I didn't want to mention this until I had some patches ready to post, but
I'm currently working on adding syscall filtering, via libseccomp, to libvirt.
I hope to get an initial RFC-quality patch out soon.



Great, looking forward to seeing them.

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH] seccomp: adding a second whitelist

2013-09-03 Thread Corey Bryant



On 09/03/2013 04:05 PM, Eduardo Otubo wrote:



On 09/03/2013 03:02 PM, Corey Bryant wrote:



On 08/30/2013 10:21 AM, Eduardo Otubo wrote:



On 08/29/2013 05:34 AM, Stefan Hajnoczi wrote:

On Wed, Aug 28, 2013 at 10:04:32PM -0300, Eduardo Otubo wrote:

Now there's a second whitelist, right before the vcpu starts. The
second
whitelist is the same as the first one, except for exec() and
select().


-netdev tap,downscript=/path/to/script requires exec() in the QEMU
shutdown code path.  Will this work with seccomp?


I actually don't know, but I'll test that as well. Can you run a test
with this patch and -netdev? I mean, if you're pointing that out you
might have a scenario already setup, right?

Thanks!



This uses exec() in net/tap.c.

I think if we're going to introduce a sandbox environment that restricts
existing QEMU behavior, then we have to introduce a new argument to the
-sandbox option.  So for example, -sandbox on would continue to use
the whitelist that allows everything in QEMU to work (or at least it
should :).  And something like -sandbox on,strict=on would use the
whitelist + blacklist.


I think this is very reasonable. I'll work on implementing these
options for v2.



If this is acceptable though, then I wonder how we could go about adding
new syscalls to the blacklist in future QEMU releases without regressing
-sandbox on,strict=on.

By the way, are any test buckets running regularly with -sandbox on?


I am running tests with virt-test. Need to come up with a script that
checks for unused syscalls, though.



Would it be possible to submit a patch to turn on -sandbox for some/all 
QEMU virt-test tests?  That would enable regular regression runs that 
aren't dependent on you.  Plus it would be a good proof point of the 
QEMU seccomp support if the tests run successfully.


--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH v3 0/3] TPM NVRAM persistent storage

2013-06-14 Thread Corey Bryant



On 06/14/2013 10:01 AM, Anthony Liguori wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


This patch series provides persistent storage support that a TPM
can use to store NVRAM data.  It uses QEMU's block driver to store
data on a drive image.  The libtpms TPM 1.2 backend will be the
initial user of this functionality to store data that must persist
through a reboot or migration.  A sample command line may look like
this:


This should be folded into the libtpms backend series.

There are no users for this so this would just be untestable code in the
tree subject to bitrot.

Regards,

Anthony Liguori



Fair enough.  I assume you're ok with this code though?

--
Regards,
Corey Bryant



qemu-system-x86_64 ...
-drive file=/path/to/nvram.qcow2,id=drive-nvram0-0-0
-tpmdev libtpms,id=tpm-tpm0
-device tpm-tis,tpmdev=tpm-tpm0,id=tpm0,drive=drive-nvram0-0-0

Thanks,
Corey

Corey Bryant (3):
   nvram: Add TPM NVRAM implementation
   nvram: Add tpm-tis drive support
   TPM NVRAM test

  hw/tpm/Makefile.objs |1 +
  hw/tpm/tpm_int.h |2 +
  hw/tpm/tpm_nvram.c   |  324 ++
  hw/tpm/tpm_nvram.h   |   25 
  hw/tpm/tpm_passthrough.c |   85 
  hw/tpm/tpm_tis.c |8 +
  6 files changed, 445 insertions(+), 0 deletions(-)
  create mode 100644 hw/tpm/tpm_nvram.c
  create mode 100644 hw/tpm/tpm_nvram.h










Re: [Qemu-devel] [PATCH v3 0/3] TPM NVRAM persistent storage

2013-06-14 Thread Corey Bryant



On 06/14/2013 11:38 AM, Anthony Liguori wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


On 06/14/2013 10:01 AM, Anthony Liguori wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


This patch series provides persistent storage support that a TPM
can use to store NVRAM data.  It uses QEMU's block driver to store
data on a drive image.  The libtpms TPM 1.2 backend will be the
initial user of this functionality to store data that must persist
through a reboot or migration.  A sample command line may look like
this:


This should be folded into the libtpms backend series.

There are no users for this so this would just be untestable code in the
tree subject to bitrot.

Regards,

Anthony Liguori



Fair enough.  I assume you're ok with this code though?


I don't understand why it's needed to be honest.  I suspect this has to
do with the fact that the libtpms implementation will need significant
reworking.

Regards,

Anthony Liguori



In regards to why it is needed..  The QEMU software-emulated vTPM 
backend will pass callback functions to libtpms for writing/reading 
nvram data.  Those callbacks will use the code in this patch series to 
do the writing/reading of nvram data to/from image files so that the 
data persists through migration/reboot.


I'm not sure I completely understand your second sentence, but yes the 
software-emulated vTPM backend code for QEMU will certainly need rework 
to use the code in this patch series.
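As a rough illustration of the callback idea (names, signatures, and the plain-file body here are purely hypothetical, not the libtpms API or the code in this series):

    #include <stdint.h>
    #include <stdio.h>

    /* hypothetical glue: libtpms hands the backend an NVRAM blob to persist;
     * in the real series the body would go through the block-driver helpers
     * from patch 1, but it is stubbed out with a plain file here just to
     * show the shape of such a callback */
    static int vtpm_nvram_store(const char *name, const uint8_t *data,
                                uint32_t len)
    {
        char path[256];
        snprintf(path, sizeof(path), "/tmp/vtpm-%s.bin", name); /* illustrative path */

        FILE *f = fopen(path, "wb");
        if (!f) {
            return -1;
        }
        int ok = fwrite(data, 1, len, f) == len;
        fclose(f);
        return ok ? 0 : -1;
    }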


--
Regards,
Corey Bryant



--
Regards,
Corey Bryant



qemu-system-x86_64 ...
-drive file=/path/to/nvram.qcow2,id=drive-nvram0-0-0
-tpmdev libtpms,id=tpm-tpm0
-device tpm-tis,tpmdev=tpm-tpm0,id=tpm0,drive=drive-nvram0-0-0

Thanks,
Corey

Corey Bryant (3):
nvram: Add TPM NVRAM implementation
nvram: Add tpm-tis drive support
TPM NVRAM test

   hw/tpm/Makefile.objs |1 +
   hw/tpm/tpm_int.h |2 +
   hw/tpm/tpm_nvram.c   |  324 
++
   hw/tpm/tpm_nvram.h   |   25 
   hw/tpm/tpm_passthrough.c |   85 
   hw/tpm/tpm_tis.c |8 +
   6 files changed, 445 insertions(+), 0 deletions(-)
   create mode 100644 hw/tpm/tpm_nvram.c
   create mode 100644 hw/tpm/tpm_nvram.h














Re: [Qemu-devel] [PATCH v3 0/3] TPM NVRAM persistent storage

2013-06-14 Thread Corey Bryant



On 06/14/2013 11:56 AM, Anthony Liguori wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


On 06/14/2013 11:38 AM, Anthony Liguori wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


On 06/14/2013 10:01 AM, Anthony Liguori wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


This patch series provides persistent storage support that a TPM
can use to store NVRAM data.  It uses QEMU's block driver to store
data on a drive image.  The libtpms TPM 1.2 backend will be the
initial user of this functionality to store data that must persist
through a reboot or migration.  A sample command line may look like
this:


This should be folded into the libtpms backend series.

There are no users for this so this would just be untestable code in the
tree subject to bitrot.

Regards,

Anthony Liguori



Fair enough.  I assume you're ok with this code though?


I don't understand why it's needed to be honest.  I suspect this has to
do with the fact that the libtpms implementation will need significant
reworking.

Regards,

Anthony Liguori



In regards to why it is needed..  The QEMU software-emulated vTPM
backend will pass callback functions to libtpms for writing/reading
nvram data.  Those callbacks will use the code in this patch series to
do the writing/reading of nvram data to/from image files so that the
data persists through migration/reboot.

I'm not sure I completely understand your second sentence, but yes the
software-emulated vTPM backend code for QEMU will certainly need rework
to use the code in this patch series.


I think it's easiest to discuss this in the context of the actual patch
series.

Regards,

Anthony Liguori



I suppose, but the earlier we can get feedback the better, so that we 
don't waste any more time.  This NVRAM code alone has gone through far 
too many iterations as folks have asked for it to go in different 
directions, only for those directions to turn out to be the wrong ones.  
Anyway, for the record, this latest patch series 
adheres to the direction you suggested we take last month: 
http://lists.nongnu.org/archive/html/qemu-devel/2013-05/msg04275.html


--
Regards,
Corey Bryant





















Re: [Qemu-devel] [PATCH v3 0/3] TPM NVRAM persistent storage

2013-06-07 Thread Corey Bryant



On 06/07/2013 03:41 AM, Stefan Hajnoczi wrote:

On Thu, Jun 06, 2013 at 09:32:42AM -0400, Corey Bryant wrote:

This patch series provides persistent storage support that a TPM
can use to store NVRAM data.  It uses QEMU's block driver to store
data on a drive image.  The libtpms TPM 1.2 backend will be the
initial user of this functionality to store data that must persist
through a reboot or migration.  A sample command line may look like
this:

qemu-system-x86_64 ...
-drive file=/path/to/nvram.qcow2,id=drive-nvram0-0-0
-tpmdev libtpms,id=tpm-tpm0
-device tpm-tis,tpmdev=tpm-tpm0,id=tpm0,drive=drive-nvram0-0-0

Thanks,
Corey

Corey Bryant (3):
   nvram: Add TPM NVRAM implementation
   nvram: Add tpm-tis drive support
   TPM NVRAM test

  hw/tpm/Makefile.objs |1 +
  hw/tpm/tpm_int.h |2 +
  hw/tpm/tpm_nvram.c   |  324 ++
  hw/tpm/tpm_nvram.h   |   25 
  hw/tpm/tpm_passthrough.c |   85 
  hw/tpm/tpm_tis.c |8 +
  6 files changed, 445 insertions(+), 0 deletions(-)
  create mode 100644 hw/tpm/tpm_nvram.c
  create mode 100644 hw/tpm/tpm_nvram.h



Reviewed-by: Stefan Hajnoczi stefa...@redhat.com




Thanks again for the review!

--
Regards,
Corey Bryant




[Qemu-devel] [PATCH v3 2/3] nvram: Add tpm-tis drive support

2013-06-06 Thread Corey Bryant
Add a drive property to the tpm-tis device and initialize the TPM
NVRAM if a drive is specified.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
v2
  -No changes

v3
  -No changes
---
 hw/tpm/tpm_int.h |2 ++
 hw/tpm/tpm_tis.c |8 
 2 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/hw/tpm/tpm_int.h b/hw/tpm/tpm_int.h
index 2f582ca..05471ef 100644
--- a/hw/tpm/tpm_int.h
+++ b/hw/tpm/tpm_int.h
@@ -29,6 +29,8 @@ struct TPMState {
 
 char *backend;
 TPMBackend *be_driver;
+
+BlockDriverState *bdrv;
 };
 
 #define TPM(obj) OBJECT_CHECK(TPMState, (obj), TYPE_TPM_TIS)
diff --git a/hw/tpm/tpm_tis.c b/hw/tpm/tpm_tis.c
index d4d8152..8648b3b 100644
--- a/hw/tpm/tpm_tis.c
+++ b/hw/tpm/tpm_tis.c
@@ -27,6 +27,7 @@
 #include "hw/i386/pc.h"
 #include "hw/pci/pci_ids.h"
 #include "tpm_tis.h"
+#include "tpm_nvram.h"
 #include "qemu-common.h"
 
 /*#define DEBUG_TIS */
@@ -849,6 +850,7 @@ static Property tpm_tis_properties[] = {
     DEFINE_PROP_UINT32("irq", TPMState,
                        s.tis.irq_num, TPM_TIS_IRQ),
     DEFINE_PROP_STRING("tpmdev", TPMState, backend),
+    DEFINE_PROP_DRIVE("drive", TPMState, bdrv),
     DEFINE_PROP_END_OF_LIST(),
 };
 
@@ -864,6 +866,12 @@ static void tpm_tis_realizefn(DeviceState *dev, Error **errp)
         return;
     }
 
+    if (s->bdrv && tpm_nvram_init(s->bdrv)) {
+        error_setg(errp, "tpm_tis: backend drive with id %s could not "
+                   "initialize TPM NVRAM drive", s->backend);
+        return;
+    }
+
     s->be_driver->fe_model = TPM_MODEL_TPM_TIS;
 
     if (tpm_backend_init(s->be_driver, s, tpm_tis_receive_cb)) {
-- 
1.7.1




Re: [Qemu-devel] [PATCH v2 1/3] nvram: Add TPM NVRAM implementation

2013-06-06 Thread Corey Bryant



On 06/06/2013 05:22 AM, Stefan Hajnoczi wrote:

On Wed, Jun 05, 2013 at 04:47:59PM -0400, Corey Bryant wrote:

+/*
+ * Coroutine that reads a blob from the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_read(void *opaque)
+{
+    TPMNvramRWRequest *rwr = opaque;
+
+    *rwr->blob_r = g_malloc(rwr->size);
+
+    rwr->rc = bdrv_pread(rwr->bdrv,
+                         rwr->offset,
+                         *rwr->blob_r,
+                         rwr->size);
+    if (rwr->rc != rwr->size) {
+        g_free(*rwr->blob_r);
+        *rwr->blob_r = NULL;
+    }
+
+    qemu_mutex_lock(&rwr->completion_mutex);


Race condition: we must only store rwr->rc while holding
->completion_mutex.  Otherwise the other thread may see ->rc and
g_free(rwr) before we leave this function, causing us to operate on
freed memory.

I suggest storing rc into a local variable first and then assigning to
rwr->rc here.
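
In outline, the pattern being asked for here -- and which the v3 patch later
in this archive adopts -- looks like this (sketch only):

static void coroutine_fn tpm_nvram_co_read(void *opaque)
{
    int rc;                                  /* local, invisible to the waiter */
    TPMNvramRWRequest *rwr = opaque;

    *rwr->blob_r = g_malloc(rwr->size);

    rc = bdrv_pread(rwr->bdrv, rwr->offset, *rwr->blob_r, rwr->size);
    if (rc != rwr->size) {
        g_free(*rwr->blob_r);
        *rwr->blob_r = NULL;
    }

    qemu_mutex_lock(&rwr->completion_mutex);
    rwr->rc = rc;                            /* publish the result only under the mutex */
    qemu_cond_signal(&rwr->completion);
    qemu_mutex_unlock(&rwr->completion_mutex);
}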


+/*
+ * Enter a coroutine to write a blob to the drive
+ */
+static void tpm_nvram_do_co_write(TPMNvramRWRequest *rwr)
+{
+    int rc;
+    Coroutine *co;
+
+    rc = tpm_nvram_adjust_size(rwr->bdrv, rwr->offset, rwr->size);
+    if (rc < 0) {
+        rwr->rc = rc;
+        return;
+    }


Do this inside the tpm_nvram_co_write() coroutine so error return still
signals the condvar.  Right now the other thread may miss completion and
deadlock.
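
Sketched, with the size adjustment moved into the write coroutine so that the
condvar is signalled on every exit path and the waiter can never be left
hanging.  The v3 tpm_nvram_co_write() is truncated in this archive, so the
exact error handling below is an assumption about its shape, not a quote:

static void coroutine_fn tpm_nvram_co_write(void *opaque)
{
    int rc;
    TPMNvramRWRequest *rwr = opaque;

    /* Grow the drive first; on failure we still fall through and signal. */
    rc = tpm_nvram_adjust_size(rwr->bdrv, rwr->offset, rwr->size);
    if (rc >= 0) {
        rc = bdrv_pwrite(rwr->bdrv, rwr->offset, rwr->blob_w, rwr->size);
    }

    qemu_mutex_lock(&rwr->completion_mutex);
    rwr->rc = rc;
    qemu_cond_signal(&rwr->completion);
    qemu_mutex_unlock(&rwr->completion_mutex);
}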




Thanks for the review.  Sending a v3 out to the list now.

--
Regards,
Corey Bryant




[Qemu-devel] [PATCH v3 1/3] nvram: Add TPM NVRAM implementation

2013-06-06 Thread Corey Bryant
Provides TPM NVRAM implementation that enables storing of TPM
NVRAM data in a persistent image file.  The block driver is
used to read/write the drive image.  This will enable, for
example, an encrypted QCOW2 image to be used to store sensitive
keys.

This patch provides APIs that a TPM backend can use to read and
write data.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
v2
 -Use non bit-rotting DPRINTF (stefa...@redhat.com)
 -Use DIV_ROUND_UP (stefa...@redhat.com)
 -Use bdrv_pread/bdrv_pwrite in coroutine - causes global
  sector to byte I/O changes (stefa...@redhat.com, kw...@redhat.com)
 -Add tpm_nvram_required_size_kb() and update tpm_nvram_adjust_size()
 -Drop qemu_aio_wait() from tpm_nvram_do_co_read/write() and move
  any completion code path to coroutines
 -Replace tpm_nvram_rwrequest_free() with g_free(rwr)
 -Remove unneeded arg checks from tpm_nvram_read/write()
  (stefa...@redhat.com)
 -Init rwr-rc to -EINPROGRESS and wrap qemu_cond_wait() with check
  (stefa...@redhat.com)
 -Add init of completion_mutex and completion condition

v3
 -Set rwr-rc within locked completion_mutex (stefa...@redhat.com)
 -Move tpm_nvram_adjust_size to tpm_nvram_co_write
  (stefa...@redhat.com)
---
 hw/tpm/Makefile.objs |1 +
 hw/tpm/tpm_nvram.c   |  324 ++
 hw/tpm/tpm_nvram.h   |   25 
 3 files changed, 350 insertions(+), 0 deletions(-)
 create mode 100644 hw/tpm/tpm_nvram.c
 create mode 100644 hw/tpm/tpm_nvram.h

diff --git a/hw/tpm/Makefile.objs b/hw/tpm/Makefile.objs
index 99f5983..49faef4 100644
--- a/hw/tpm/Makefile.objs
+++ b/hw/tpm/Makefile.objs
@@ -1,2 +1,3 @@
 common-obj-$(CONFIG_TPM_TIS) += tpm_tis.o
+common-obj-$(CONFIG_TPM_TIS) += tpm_nvram.o
 common-obj-$(CONFIG_TPM_PASSTHROUGH) += tpm_passthrough.o
diff --git a/hw/tpm/tpm_nvram.c b/hw/tpm/tpm_nvram.c
new file mode 100644
index 000..a0b547b
--- /dev/null
+++ b/hw/tpm/tpm_nvram.c
@@ -0,0 +1,324 @@
+/*
+ * TPM NVRAM - enables storage of persistent NVRAM data on an image file
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Bergerstef...@us.ibm.com
+ *  Corey Bryant cor...@linux.vnet.ibm.com
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include tpm_nvram.h
+#include block/block_int.h
+#include qemu/thread.h
+#include sysemu/sysemu.h
+
+#define TPM_NVRAM_DEBUG 0
+#define DPRINTF(fmt, ...) \
+do { \
+if (TPM_NVRAM_DEBUG) { \
+fprintf(stderr, fmt, ## __VA_ARGS__); \
+} \
+} while (0)
+
+/* Read/write request data */
+typedef struct TPMNvramRWRequest {
+BlockDriverState *bdrv;
+bool is_write;
+uint64_t offset;
+uint8_t **blob_r;
+uint8_t *blob_w;
+uint32_t size;
+int rc;
+
+QemuMutex completion_mutex;
+QemuCond completion;
+
+QSIMPLEQ_ENTRY(TPMNvramRWRequest) list;
+} TPMNvramRWRequest;
+
+/* Mutex protected queue of read/write requests */
+static QemuMutex tpm_nvram_rwrequests_mutex;
+static QSIMPLEQ_HEAD(, TPMNvramRWRequest) tpm_nvram_rwrequests =
+QSIMPLEQ_HEAD_INITIALIZER(tpm_nvram_rwrequests);
+
+static QEMUBH *tpm_nvram_bh;
+
+/*
+ * Get the disk size in kilobytes needed to store a blob (rounded up to next 
kb)
+ */
+static uint64_t tpm_nvram_required_size_kb(uint64_t offset, uint32_t size)
+{
+uint64_t required_size = offset + size;
+return DIV_ROUND_UP(required_size, 1024);
+}
+
+/*
+ * Increase the drive size if it's too small to store the blob
+ */
+static int tpm_nvram_adjust_size(BlockDriverState *bdrv, uint64_t offset,
+                                 uint32_t size)
+{
+    int rc = 0;
+    int64_t drive_size, required_size;
+
+    rc = bdrv_getlength(bdrv);
+    if (rc < 0) {
+        DPRINTF("%s: Unable to determine TPM NVRAM drive size\n", __func__);
+        return rc;
+    }
+
+    drive_size = rc;
+    required_size = tpm_nvram_required_size_kb(offset, size) * 1024;
+
+    if (drive_size < required_size) {
+        rc = bdrv_truncate(bdrv, required_size);
+        if (rc < 0) {
+            DPRINTF("%s: TPM NVRAM drive too small\n", __func__);
+        }
+    }
+
+    return rc;
+}
+
+/*
+ * Coroutine that reads a blob from the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_read(void *opaque)
+{
+    int rc;
+    TPMNvramRWRequest *rwr = opaque;
+
+    *rwr->blob_r = g_malloc(rwr->size);
+
+    rc = bdrv_pread(rwr->bdrv, rwr->offset, *rwr->blob_r, rwr->size);
+    if (rc != rwr->size) {
+        g_free(*rwr->blob_r);
+        *rwr->blob_r = NULL;
+    }
+
+    qemu_mutex_lock(&rwr->completion_mutex);
+    rwr->rc = rc;
+    qemu_cond_signal(&rwr->completion);
+    qemu_mutex_unlock(&rwr->completion_mutex);
+}
+
+/*
+ * Coroutine that writes a blob to the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_write(void *opaque)
+{
+    int rc;
+    TPMNvramRWRequest *rwr = opaque;
+
+    rc = tpm_nvram_adjust_size(rwr->bdrv, rwr->offset, rwr->size

[Qemu-devel] [PATCH v3 0/3] TPM NVRAM persistent storage

2013-06-06 Thread Corey Bryant
This patch series provides persistent storage support that a TPM
can use to store NVRAM data.  It uses QEMU's block driver to store
data on a drive image.  The libtpms TPM 1.2 backend will be the
initial user of this functionality to store data that must persist
through a reboot or migration.  A sample command line may look like
this:

qemu-system-x86_64 ...
-drive file=/path/to/nvram.qcow2,id=drive-nvram0-0-0
-tpmdev libtpms,id=tpm-tpm0
-device tpm-tis,tpmdev=tpm-tpm0,id=tpm0,drive=drive-nvram0-0-0

Thanks,
Corey

Corey Bryant (3):
  nvram: Add TPM NVRAM implementation
  nvram: Add tpm-tis drive support
  TPM NVRAM test

 hw/tpm/Makefile.objs |1 +
 hw/tpm/tpm_int.h |2 +
 hw/tpm/tpm_nvram.c   |  324 ++
 hw/tpm/tpm_nvram.h   |   25 
 hw/tpm/tpm_passthrough.c |   85 
 hw/tpm/tpm_tis.c |8 +
 6 files changed, 445 insertions(+), 0 deletions(-)
 create mode 100644 hw/tpm/tpm_nvram.c
 create mode 100644 hw/tpm/tpm_nvram.h




Re: [Qemu-devel] [PATCH 1/2] nvram: Add TPM NVRAM implementation

2013-06-05 Thread Corey Bryant

Thanks for reviewing!

On 06/05/2013 05:05 AM, Stefan Hajnoczi wrote:

On Tue, Jun 04, 2013 at 02:18:40PM -0400, Corey Bryant wrote:

Provides TPM NVRAM implementation that enables storing of TPM
NVRAM data in a persistent image file.  The block driver is
used to read/write the drive image.  This will enable, for
example, an encrypted QCOW2 image to be used to store sensitive
keys.

This patch provides APIs that a TPM backend can use to read and
write data.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
  hw/tpm/Makefile.objs |1 +
  hw/tpm/tpm_nvram.c   |  399 ++
  hw/tpm/tpm_nvram.h   |   25 +++
  3 files changed, 425 insertions(+), 0 deletions(-)
  create mode 100644 hw/tpm/tpm_nvram.c
  create mode 100644 hw/tpm/tpm_nvram.h

diff --git a/hw/tpm/Makefile.objs b/hw/tpm/Makefile.objs
index 99f5983..49faef4 100644
--- a/hw/tpm/Makefile.objs
+++ b/hw/tpm/Makefile.objs
@@ -1,2 +1,3 @@
  common-obj-$(CONFIG_TPM_TIS) += tpm_tis.o
+common-obj-$(CONFIG_TPM_TIS) += tpm_nvram.o
  common-obj-$(CONFIG_TPM_PASSTHROUGH) += tpm_passthrough.o
diff --git a/hw/tpm/tpm_nvram.c b/hw/tpm/tpm_nvram.c
new file mode 100644
index 000..95ff396
--- /dev/null
+++ b/hw/tpm/tpm_nvram.c
@@ -0,0 +1,399 @@
+/*
+ * TPM NVRAM - enables storage of persistent NVRAM data on an image file
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Bergerstef...@us.ibm.com
+ *  Corey Bryant cor...@linux.vnet.ibm.com
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include tpm_nvram.h
+#include block/block_int.h
+#include qemu/thread.h
+#include sysemu/sysemu.h
+
+/* #define TPM_NVRAM_DEBUG */
+
+#ifdef TPM_NVRAM_DEBUG
+#define DPRINTF(fmt, ...) \
+do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+do { } while (0)
+#endif


I suggest:

#define TPM_NVRAM_DEBUG 0
#define DPRINTF(fmt, ...) \
 do { \
 if (TPM_NVRAM_DEBUG) { \
fprintf(stderr, fmt, ## __VA_ARGS__); \
} \
 } while (0)

This approach prevents bitrot since the compiler always parses the
printf() whether TPM_NVRAM_DEBUG is 0 or 1.  If you #ifdef out the code
completely, like above, then you don't notice compiler warnings/errors
until you actually #define TPM_NVRAM_DEBUG (i.e. prone to bitrot).



Ok


+
+/* Round a value up to the next SIZE */
+#define ROUNDUP(VAL, SIZE) \
+(((VAL)+(SIZE)-1) & ~((SIZE)-1))


Please drop this macro and use include/qemu/osdep.h:ROUND_UP()



Ok


+
+/* Get the number of sectors required to contain SIZE bytes */
+#define NUM_SECTORS(SIZE) \
+(ROUNDUP(SIZE, BDRV_SECTOR_SIZE) / BDRV_SECTOR_SIZE)


Please drop this macro and use include/qemu/osdep.h:DIV_ROUND_UP() instead.



Ok


+
+/* Read/write request data */
+typedef struct TPMNvramRWRequest {
+BlockDriverState *bdrv;
+bool is_write;
+uint64_t sector_num;
+int num_sectors;
+uint8_t **blob_r;
+uint8_t *blob_w;
+uint32_t size;
+QEMUIOVector *qiov;
+bool done;
+int rc;
+
+QemuMutex completion_mutex;
+QemuCond completion;
+
+QSIMPLEQ_ENTRY(TPMNvramRWRequest) list;
+} TPMNvramRWRequest;
+
+/* Mutex protected queue of read/write requests */
+static QemuMutex tpm_nvram_rwrequests_mutex;
+static QSIMPLEQ_HEAD(, TPMNvramRWRequest) tpm_nvram_rwrequests =
+QSIMPLEQ_HEAD_INITIALIZER(tpm_nvram_rwrequests);
+
+static QEMUBH *tpm_nvram_bh;
+
+/*
+ * Increase the drive size if it's too small to store the blob
+ */
+static int tpm_nvram_adjust_size(BlockDriverState *bdrv, uint64_t sector_num,
+                                 int num_sectors)
+{
+    int rc = 0;
+    int64_t drive_size, required_size;
+
+    drive_size = bdrv_getlength(bdrv);
+    if (drive_size < 0) {
+        DPRINTF("%s: Unable to determine TPM NVRAM drive size\n", __func__);
+        rc = drive_size;
+        goto err_exit;
+    }
+
+    required_size = (sector_num + num_sectors) * BDRV_SECTOR_SIZE;
+
+    if (drive_size < required_size) {
+        rc = bdrv_truncate(bdrv, required_size);
+        if (rc < 0) {
+            DPRINTF("%s: TPM NVRAM drive too small\n", __func__);
+        }
+    }
+
+err_exit:
+    return rc;
+}
+
+/*
+ * Coroutine that reads a blob from the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_read(void *opaque)
+{
+    TPMNvramRWRequest *rwr = opaque;
+
+    rwr->rc = bdrv_co_readv(rwr->bdrv,
+                            rwr->sector_num,
+                            rwr->num_sectors,
+                            rwr->qiov);
+    rwr->done = true;
+}
+
+/*
+ * Coroutine that writes a blob to the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_write(void *opaque)
+{
+    TPMNvramRWRequest *rwr = opaque;
+
+    rwr->rc = bdrv_co_writev(rwr->bdrv,
+                             rwr->sector_num,
+                             rwr->num_sectors,
+                             rwr->qiov

Re: [Qemu-devel] [PATCH 1/2] nvram: Add TPM NVRAM implementation

2013-06-05 Thread Corey Bryant



On 06/05/2013 09:42 AM, Kevin Wolf wrote:

Am 05.06.2013 um 15:28 hat Corey Bryant geschrieben:

+
+    if (rwr->rc == 0) {
+        rwr->rc = rwr->num_sectors;
+        *rwr->blob_r = g_malloc(rwr->size);
+        memcpy(*rwr->blob_r, buf, rwr->size);


Use bdrv_pread()/bdrv_pwrite() for byte-granularity I/O instead of
duplicating the buffering yourself.



Aren't bdrv_pread()/bdrv_pwrite() synchronous?  Wouldn't using them
block the main QEMU thread?  That is why I switched to using the
coroutine versions.


You need to call them from coroutine context to avoid that they invoke
their own coroutine on which they then wait in this while (!done)
{ qemu_aio_wait(); } loop that blocks everything.  Called from coroutine
context, they do the Right Thing, though.

Kevin
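
A sketch of the distinction Kevin describes, reusing the request struct from
this series; only the calling context changes, not the call itself (the
function name below is illustrative, not from the patches):

/* From a coroutine, a byte-granularity helper like bdrv_pread() simply
 * yields until the request completes -- no qemu_aio_wait() spinning: */
static void coroutine_fn nvram_read_blob(void *opaque)
{
    TPMNvramRWRequest *rwr = opaque;

    rwr->rc = bdrv_pread(rwr->bdrv, rwr->offset, *rwr->blob_r, rwr->size);
}

/* Called from outside coroutine context, the same bdrv_pread() spawns its
 * own coroutine internally and blocks in a while (!done) { qemu_aio_wait(); }
 * style loop -- exactly the blocking behaviour the series wants to avoid. */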





Ah, thanks for explaining.  Now I can work in bytes rather than sectors.  :)

--
Regards,
Corey Bryant




[Qemu-devel] [PATCH v2 2/3] nvram: Add tpm-tis drive support

2013-06-05 Thread Corey Bryant
Add a drive property to the tpm-tis device and initialize the TPM
NVRAM if a drive is specified.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
v2
  -No changes
---
 hw/tpm/tpm_int.h |2 ++
 hw/tpm/tpm_tis.c |8 
 2 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/hw/tpm/tpm_int.h b/hw/tpm/tpm_int.h
index 2f582ca..05471ef 100644
--- a/hw/tpm/tpm_int.h
+++ b/hw/tpm/tpm_int.h
@@ -29,6 +29,8 @@ struct TPMState {
 
 char *backend;
 TPMBackend *be_driver;
+
+BlockDriverState *bdrv;
 };
 
 #define TPM(obj) OBJECT_CHECK(TPMState, (obj), TYPE_TPM_TIS)
diff --git a/hw/tpm/tpm_tis.c b/hw/tpm/tpm_tis.c
index d4d8152..8648b3b 100644
--- a/hw/tpm/tpm_tis.c
+++ b/hw/tpm/tpm_tis.c
@@ -27,6 +27,7 @@
 #include "hw/i386/pc.h"
 #include "hw/pci/pci_ids.h"
 #include "tpm_tis.h"
+#include "tpm_nvram.h"
 #include "qemu-common.h"
 
 /*#define DEBUG_TIS */
@@ -849,6 +850,7 @@ static Property tpm_tis_properties[] = {
     DEFINE_PROP_UINT32("irq", TPMState,
                        s.tis.irq_num, TPM_TIS_IRQ),
     DEFINE_PROP_STRING("tpmdev", TPMState, backend),
+    DEFINE_PROP_DRIVE("drive", TPMState, bdrv),
     DEFINE_PROP_END_OF_LIST(),
 };
 
@@ -864,6 +866,12 @@ static void tpm_tis_realizefn(DeviceState *dev, Error **errp)
         return;
     }
 
+    if (s->bdrv && tpm_nvram_init(s->bdrv)) {
+        error_setg(errp, "tpm_tis: backend drive with id %s could not "
+                   "initialize TPM NVRAM drive", s->backend);
+        return;
+    }
+
     s->be_driver->fe_model = TPM_MODEL_TPM_TIS;
 
     if (tpm_backend_init(s->be_driver, s, tpm_tis_receive_cb)) {
-- 
1.7.1




[Qemu-devel] [PATCH v2 0/3] TPM NVRAM persistent storage

2013-06-05 Thread Corey Bryant
This patch series provides persistent storage support that a TPM
can use to store NVRAM data.  It uses QEMU's block driver to store
data on a drive image.  The libtpms TPM 1.2 backend will be the
initial user of this functionality to store data that must persist
through a reboot or migration.  A sample command line may look like
this:

qemu-system-x86_64 ...
-drive file=/path/to/nvram.qcow2,id=drive-nvram0-0-0
-tpmdev libtpms,id=tpm-tpm0
-device tpm-tis,tpmdev=tpm-tpm0,id=tpm0,drive=drive-nvram0-0-0

Thanks,
Corey

Corey Bryant (3):
  nvram: Add TPM NVRAM implementation
  nvram: Add tpm-tis drive support
  TPM NVRAM test

 hw/tpm/Makefile.objs |1 +
 hw/tpm/tpm_int.h |2 +
 hw/tpm/tpm_nvram.c   |  326 ++
 hw/tpm/tpm_nvram.h   |   25 
 hw/tpm/tpm_passthrough.c |   85 
 hw/tpm/tpm_tis.c |8 +
 6 files changed, 447 insertions(+), 0 deletions(-)
 create mode 100644 hw/tpm/tpm_nvram.c
 create mode 100644 hw/tpm/tpm_nvram.h




[Qemu-devel] [PATCH v2 1/3] nvram: Add TPM NVRAM implementation

2013-06-05 Thread Corey Bryant
Provides TPM NVRAM implementation that enables storing of TPM
NVRAM data in a persistent image file.  The block driver is
used to read/write the drive image.  This will enable, for
example, an encrypted QCOW2 image to be used to store sensitive
keys.

This patch provides APIs that a TPM backend can use to read and
write data.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
v2
 -Use non bit-rotting DPRINTF (stefa...@redhat.com)
 -Use DIV_ROUND_UP (stefa...@redhat.com)
 -Use bdrv_pread/bdrv_pwrite in coroutine - causes global
  sector to byte I/O changes (stefa...@redhat.com, kw...@redhat.com)
 -Add tpm_nvram_required_size_kb() and update tpm_nvram_adjust_size()
 -Drop qemu_aio_wait() from tpm_nvram_do_co_read/write() and move
  any completion code path to coroutines
 -Replace tpm_nvram_rwrequest_free() with g_free(rwr)
 -Remove unneeded arg checks from tpm_nvram_read/write()
  (stefa...@redhat.com)
 -Init rwr-rc to -EINPROGRESS and wrap qemu_cond_wait() with check
  (stefa...@redhat.com)
 -Add init of completion_mutex and completion condition
---
 hw/tpm/Makefile.objs |1 +
 hw/tpm/tpm_nvram.c   |  326 ++
 hw/tpm/tpm_nvram.h   |   25 
 3 files changed, 352 insertions(+), 0 deletions(-)
 create mode 100644 hw/tpm/tpm_nvram.c
 create mode 100644 hw/tpm/tpm_nvram.h

diff --git a/hw/tpm/Makefile.objs b/hw/tpm/Makefile.objs
index 99f5983..49faef4 100644
--- a/hw/tpm/Makefile.objs
+++ b/hw/tpm/Makefile.objs
@@ -1,2 +1,3 @@
 common-obj-$(CONFIG_TPM_TIS) += tpm_tis.o
+common-obj-$(CONFIG_TPM_TIS) += tpm_nvram.o
 common-obj-$(CONFIG_TPM_PASSTHROUGH) += tpm_passthrough.o
diff --git a/hw/tpm/tpm_nvram.c b/hw/tpm/tpm_nvram.c
new file mode 100644
index 000..01f0dee
--- /dev/null
+++ b/hw/tpm/tpm_nvram.c
@@ -0,0 +1,326 @@
+/*
+ * TPM NVRAM - enables storage of persistent NVRAM data on an image file
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Bergerstef...@us.ibm.com
+ *  Corey Bryant cor...@linux.vnet.ibm.com
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include tpm_nvram.h
+#include block/block_int.h
+#include qemu/thread.h
+#include sysemu/sysemu.h
+
+#define TPM_NVRAM_DEBUG 0
+#define DPRINTF(fmt, ...) \
+do { \
+if (TPM_NVRAM_DEBUG) { \
+fprintf(stderr, fmt, ## __VA_ARGS__); \
+} \
+} while (0)
+
+/* Read/write request data */
+typedef struct TPMNvramRWRequest {
+BlockDriverState *bdrv;
+bool is_write;
+uint64_t offset;
+uint8_t **blob_r;
+uint8_t *blob_w;
+uint32_t size;
+int rc;
+
+QemuMutex completion_mutex;
+QemuCond completion;
+
+QSIMPLEQ_ENTRY(TPMNvramRWRequest) list;
+} TPMNvramRWRequest;
+
+/* Mutex protected queue of read/write requests */
+static QemuMutex tpm_nvram_rwrequests_mutex;
+static QSIMPLEQ_HEAD(, TPMNvramRWRequest) tpm_nvram_rwrequests =
+QSIMPLEQ_HEAD_INITIALIZER(tpm_nvram_rwrequests);
+
+static QEMUBH *tpm_nvram_bh;
+
+/*
+ * Get the disk size in kilobytes needed to store a blob (rounded up to next 
kb)
+ */
+static uint64_t tpm_nvram_required_size_kb(uint64_t offset, uint32_t size)
+{
+uint64_t required_size = offset + size;
+return DIV_ROUND_UP(required_size, 1024);
+}
+
+/*
+ * Increase the drive size if it's too small to store the blob
+ */
+static int tpm_nvram_adjust_size(BlockDriverState *bdrv, uint64_t offset,
+                                 uint32_t size)
+{
+    int rc = 0;
+    int64_t drive_size, required_size;
+
+    rc = bdrv_getlength(bdrv);
+    if (rc < 0) {
+        DPRINTF("%s: Unable to determine TPM NVRAM drive size\n", __func__);
+        return rc;
+    }
+
+    drive_size = rc;
+    required_size = tpm_nvram_required_size_kb(offset, size) * 1024;
+
+    if (drive_size < required_size) {
+        rc = bdrv_truncate(bdrv, required_size);
+        if (rc < 0) {
+            DPRINTF("%s: TPM NVRAM drive too small\n", __func__);
+        }
+    }
+
+    return rc;
+}
+
+/*
+ * Coroutine that reads a blob from the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_read(void *opaque)
+{
+    TPMNvramRWRequest *rwr = opaque;
+
+    *rwr->blob_r = g_malloc(rwr->size);
+
+    rwr->rc = bdrv_pread(rwr->bdrv,
+                         rwr->offset,
+                         *rwr->blob_r,
+                         rwr->size);
+    if (rwr->rc != rwr->size) {
+        g_free(*rwr->blob_r);
+        *rwr->blob_r = NULL;
+    }
+
+    qemu_mutex_lock(&rwr->completion_mutex);
+    qemu_cond_signal(&rwr->completion);
+    qemu_mutex_unlock(&rwr->completion_mutex);
+}
+
+/*
+ * Coroutine that writes a blob to the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_write(void *opaque)
+{
+    TPMNvramRWRequest *rwr = opaque;
+
+    rwr->rc = bdrv_pwrite(rwr->bdrv,
+                          rwr->offset,
+                          rwr->blob_w,
+                          rwr->size);
+
+    qemu_mutex_lock

[Qemu-devel] [PATCH 0/2] TPM NVRAM persistent storage

2013-06-04 Thread Corey Bryant
This patch series provides persistent storage support that a TPM
can use to store NVRAM data.  It uses QEMU's block driver to store
data on a drive image.  The libtpms TPM 1.2 backend will be the
initial user of this functionality to store data that must persist
through a reboot or migration.  A sample command line may look like
this:

qemu-system-x86_64 ...
-drive file=/path/to/nvram.qcow2,id=drive-nvram0-0-0
-tpmdev libtpms,id=tpm-tpm0
-device tpm-tis,tpmdev=tpm-tpm0,id=tpm0,drive=drive-nvram0-0-0

Thanks,
Corey

Corey Bryant (2):
  nvram: Add TPM NVRAM implementation
  nvram: Add tpm-tis drive support

 hw/tpm/Makefile.objs |1 +
 hw/tpm/tpm_int.h |2 +
 hw/tpm/tpm_nvram.c   |  399 ++
 hw/tpm/tpm_nvram.h   |   25 +++
 hw/tpm/tpm_tis.c |8 +
 5 files changed, 435 insertions(+), 0 deletions(-)
 create mode 100644 hw/tpm/tpm_nvram.c
 create mode 100644 hw/tpm/tpm_nvram.h




[Qemu-devel] [PATCH 1/2] nvram: Add TPM NVRAM implementation

2013-06-04 Thread Corey Bryant
Provides TPM NVRAM implementation that enables storing of TPM
NVRAM data in a persistent image file.  The block driver is
used to read/write the drive image.  This will enable, for
example, an encrypted QCOW2 image to be used to store sensitive
keys.

This patch provides APIs that a TPM backend can use to read and
write data.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 hw/tpm/Makefile.objs |1 +
 hw/tpm/tpm_nvram.c   |  399 ++
 hw/tpm/tpm_nvram.h   |   25 +++
 3 files changed, 425 insertions(+), 0 deletions(-)
 create mode 100644 hw/tpm/tpm_nvram.c
 create mode 100644 hw/tpm/tpm_nvram.h

diff --git a/hw/tpm/Makefile.objs b/hw/tpm/Makefile.objs
index 99f5983..49faef4 100644
--- a/hw/tpm/Makefile.objs
+++ b/hw/tpm/Makefile.objs
@@ -1,2 +1,3 @@
 common-obj-$(CONFIG_TPM_TIS) += tpm_tis.o
+common-obj-$(CONFIG_TPM_TIS) += tpm_nvram.o
 common-obj-$(CONFIG_TPM_PASSTHROUGH) += tpm_passthrough.o
diff --git a/hw/tpm/tpm_nvram.c b/hw/tpm/tpm_nvram.c
new file mode 100644
index 000..95ff396
--- /dev/null
+++ b/hw/tpm/tpm_nvram.c
@@ -0,0 +1,399 @@
+/*
+ * TPM NVRAM - enables storage of persistent NVRAM data on an image file
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Bergerstef...@us.ibm.com
+ *  Corey Bryant cor...@linux.vnet.ibm.com
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include tpm_nvram.h
+#include block/block_int.h
+#include qemu/thread.h
+#include sysemu/sysemu.h
+
+/* #define TPM_NVRAM_DEBUG */
+
+#ifdef TPM_NVRAM_DEBUG
+#define DPRINTF(fmt, ...) \
+do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+do { } while (0)
+#endif
+
+/* Round a value up to the next SIZE */
+#define ROUNDUP(VAL, SIZE) \
+(((VAL)+(SIZE)-1) & ~((SIZE)-1))
+
+/* Get the number of sectors required to contain SIZE bytes */
+#define NUM_SECTORS(SIZE) \
+(ROUNDUP(SIZE, BDRV_SECTOR_SIZE) / BDRV_SECTOR_SIZE)
+
+/* Read/write request data */
+typedef struct TPMNvramRWRequest {
+BlockDriverState *bdrv;
+bool is_write;
+uint64_t sector_num;
+int num_sectors;
+uint8_t **blob_r;
+uint8_t *blob_w;
+uint32_t size;
+QEMUIOVector *qiov;
+bool done;
+int rc;
+
+QemuMutex completion_mutex;
+QemuCond completion;
+
+QSIMPLEQ_ENTRY(TPMNvramRWRequest) list;
+} TPMNvramRWRequest;
+
+/* Mutex protected queue of read/write requests */
+static QemuMutex tpm_nvram_rwrequests_mutex;
+static QSIMPLEQ_HEAD(, TPMNvramRWRequest) tpm_nvram_rwrequests =
+QSIMPLEQ_HEAD_INITIALIZER(tpm_nvram_rwrequests);
+
+static QEMUBH *tpm_nvram_bh;
+
+/*
+ * Increase the drive size if it's too small to store the blob
+ */
+static int tpm_nvram_adjust_size(BlockDriverState *bdrv, uint64_t sector_num,
+                                 int num_sectors)
+{
+    int rc = 0;
+    int64_t drive_size, required_size;
+
+    drive_size = bdrv_getlength(bdrv);
+    if (drive_size < 0) {
+        DPRINTF("%s: Unable to determine TPM NVRAM drive size\n", __func__);
+        rc = drive_size;
+        goto err_exit;
+    }
+
+    required_size = (sector_num + num_sectors) * BDRV_SECTOR_SIZE;
+
+    if (drive_size < required_size) {
+        rc = bdrv_truncate(bdrv, required_size);
+        if (rc < 0) {
+            DPRINTF("%s: TPM NVRAM drive too small\n", __func__);
+        }
+    }
+
+err_exit:
+    return rc;
+}
+
+/*
+ * Coroutine that reads a blob from the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_read(void *opaque)
+{
+    TPMNvramRWRequest *rwr = opaque;
+
+    rwr->rc = bdrv_co_readv(rwr->bdrv,
+                            rwr->sector_num,
+                            rwr->num_sectors,
+                            rwr->qiov);
+    rwr->done = true;
+}
+
+/*
+ * Coroutine that writes a blob to the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_write(void *opaque)
+{
+    TPMNvramRWRequest *rwr = opaque;
+
+    rwr->rc = bdrv_co_writev(rwr->bdrv,
+                             rwr->sector_num,
+                             rwr->num_sectors,
+                             rwr->qiov);
+    rwr->done = true;
+}
+
+/*
+ * Prepare for and enter a coroutine to read a blob from the drive
+ */
+static void tpm_nvram_do_co_read(TPMNvramRWRequest *rwr)
+{
+    Coroutine *co;
+    size_t buf_len = rwr->num_sectors * BDRV_SECTOR_SIZE;
+    uint8_t *buf = g_malloc(buf_len);
+
+    memset(buf, 0x0, buf_len);
+
+    struct iovec iov = {
+        .iov_base = (void *)buf,
+        .iov_len = rwr->size,
+    };
+
+    qemu_iovec_init_external(rwr->qiov, &iov, 1);
+
+    co = qemu_coroutine_create(tpm_nvram_co_read);
+    qemu_coroutine_enter(co, rwr);
+
+    while (!rwr->done) {
+        qemu_aio_wait();
+    }
+
+    if (rwr->rc == 0) {
+        rwr->rc = rwr->num_sectors;
+        *rwr->blob_r = g_malloc(rwr->size);
+        memcpy(*rwr->blob_r, buf, rwr->size);
+    } else

[Qemu-devel] [PATCH 2/2] nvram: Add tpm-tis drive support

2013-06-04 Thread Corey Bryant
Add a drive property to the tpm-tis device and initialize the TPM
NVRAM if a drive is specified.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 hw/tpm/tpm_int.h |2 ++
 hw/tpm/tpm_tis.c |8 
 2 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/hw/tpm/tpm_int.h b/hw/tpm/tpm_int.h
index 2f582ca..05471ef 100644
--- a/hw/tpm/tpm_int.h
+++ b/hw/tpm/tpm_int.h
@@ -29,6 +29,8 @@ struct TPMState {
 
 char *backend;
 TPMBackend *be_driver;
+
+BlockDriverState *bdrv;
 };
 
 #define TPM(obj) OBJECT_CHECK(TPMState, (obj), TYPE_TPM_TIS)
diff --git a/hw/tpm/tpm_tis.c b/hw/tpm/tpm_tis.c
index d4d8152..8648b3b 100644
--- a/hw/tpm/tpm_tis.c
+++ b/hw/tpm/tpm_tis.c
@@ -27,6 +27,7 @@
 #include "hw/i386/pc.h"
 #include "hw/pci/pci_ids.h"
 #include "tpm_tis.h"
+#include "tpm_nvram.h"
 #include "qemu-common.h"
 
 /*#define DEBUG_TIS */
@@ -849,6 +850,7 @@ static Property tpm_tis_properties[] = {
     DEFINE_PROP_UINT32("irq", TPMState,
                        s.tis.irq_num, TPM_TIS_IRQ),
     DEFINE_PROP_STRING("tpmdev", TPMState, backend),
+    DEFINE_PROP_DRIVE("drive", TPMState, bdrv),
     DEFINE_PROP_END_OF_LIST(),
 };
 
@@ -864,6 +866,12 @@ static void tpm_tis_realizefn(DeviceState *dev, Error **errp)
         return;
     }
 
+    if (s->bdrv && tpm_nvram_init(s->bdrv)) {
+        error_setg(errp, "tpm_tis: backend drive with id %s could not "
+                   "initialize TPM NVRAM drive", s->backend);
+        return;
+    }
+
     s->be_driver->fe_model = TPM_MODEL_TPM_TIS;
 
     if (tpm_backend_init(s->be_driver, s, tpm_tis_receive_cb)) {
-- 
1.7.1




Re: [Qemu-devel] [PATCH 0/2] TPM NVRAM persistent storage

2013-06-04 Thread Corey Bryant



On 06/04/2013 03:23 PM, Eric Blake wrote:

On 06/04/2013 12:18 PM, Corey Bryant wrote:

This patch series provides persistent storage support that a TPM

can use to store NVRAM data.  It uses QEMU's block driver to store

data on a drive image.  The libtpms TPM 1.2 backend will be the
initial user of this functionality to store data that must persist
through a reboot or migration.  A sample command line may look like
this:

qemu-system-x86_64 ...
-drive file=/path/to/nvram.qcow2,id=drive-nvram0-0-0
-tpmdev libtpms,id=tpm-tpm0
-device tpm-tis,tpmdev=tpm-tpm0,id=tpm0,drive=drive-nvram0-0-0


Is a TPM device hot-pluggable?  If so, do you have a design for the QMP
counterpart in mind?



Well the TPM is not hot-pluggable.  And the feedback we've been getting 
is to simplify this support so I'm not sure it's needed/wanted. (?)


--
Regards,
Corey Bryant



Thanks,
Corey

Corey Bryant (2):
   nvram: Add TPM NVRAM implementation
   nvram: Add tpm-tis drive support

  hw/tpm/Makefile.objs |1 +
  hw/tpm/tpm_int.h |2 +
  hw/tpm/tpm_nvram.c   |  399 ++
  hw/tpm/tpm_nvram.h   |   25 +++
  hw/tpm/tpm_tis.c |8 +
  5 files changed, 435 insertions(+), 0 deletions(-)
  create mode 100644 hw/tpm/tpm_nvram.c
  create mode 100644 hw/tpm/tpm_nvram.h












Re: [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details

2013-05-29 Thread Corey Bryant



On 05/29/2013 01:15 PM, Luiz Capitulino wrote:

On Thu, 23 May 2013 13:44:47 -0400
Corey Bryant cor...@linux.vnet.ibm.com wrote:


Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com


Looks good to me, only one small nit below.



It looks like this series is going to get dropped, but thanks for the 
review!


--
Regards,
Corey Bryant


---
  hmp.c|   32 
  hmp.h|1 +
  monitor.c|7 +
  qapi-schema.json |   47 +++
  qmp-commands.hx  |   41 +++
  vnvram.c |   71 ++
  6 files changed, 199 insertions(+), 0 deletions(-)

diff --git a/hmp.c b/hmp.c
index 4fb76ec..a144f73 100644
--- a/hmp.c
+++ b/hmp.c
@@ -653,6 +653,38 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
  qapi_free_TPMInfoList(info_list);
  }

+void hmp_info_vnvram(Monitor *mon, const QDict *dict)
+{
+    VNVRAMInfoList *info_list, *info;
+    Error *err = NULL;
+    unsigned int c = 0;
+
+    info_list = qmp_query_vnvram(&err);
+    if (err) {
+        monitor_printf(mon, "VNVRAM not found\n");
+        error_free(err);
+        return;
+    }
+
+    for (info = info_list; info; info = info->next) {
+        VNVRAMInfo *ni = info->value;
+        VNVRAMEntryInfoList *einfo_list = ni->entries, *einfo;
+        unsigned int d = 0;
+        monitor_printf(mon, "vnvram%u: drive-id=%s "
+                       "virtual-disk-size=%"PRId64" vnvram-size=%"PRIu64"\n",
+                       c++, ni->drive_id, ni->virtual_disk_size,
+                       ni->vnvram_size);
+
+        for (einfo = einfo_list; einfo; einfo = einfo->next) {
+            VNVRAMEntryInfo *nei = einfo->value;
+            monitor_printf(mon, "  entry%u: name=%s cur-size=%"PRIu64" "
+                           "max-size=%"PRIu64"\n",
+                           d++, nei->name, nei->cur_size, nei->max_size);
+        }
+    }
+    qapi_free_VNVRAMInfoList(info_list);
+}
+
  void hmp_quit(Monitor *mon, const QDict *qdict)
  {
  monitor_suspend(mon);
diff --git a/hmp.h b/hmp.h
index 95fe76e..e26daf2 100644
--- a/hmp.h
+++ b/hmp.h
@@ -37,6 +37,7 @@ void hmp_info_balloon(Monitor *mon, const QDict *qdict);
  void hmp_info_pci(Monitor *mon, const QDict *qdict);
  void hmp_info_block_jobs(Monitor *mon, const QDict *qdict);
  void hmp_info_tpm(Monitor *mon, const QDict *qdict);
+void hmp_info_vnvram(Monitor *mon, const QDict *dict);
  void hmp_quit(Monitor *mon, const QDict *qdict);
  void hmp_stop(Monitor *mon, const QDict *qdict);
  void hmp_system_reset(Monitor *mon, const QDict *qdict);
diff --git a/monitor.c b/monitor.c
index 62aaebe..c10fe15 100644
--- a/monitor.c
+++ b/monitor.c
@@ -2764,6 +2764,13 @@ static mon_cmd_t info_cmds[] = {
  .mhandler.cmd = hmp_info_tpm,
  },
  {
+        .name       = "vnvram",
+        .args_type  = "",
+        .params     = "",
+        .help       = "show VNVRAM information",
+.mhandler.cmd = hmp_info_vnvram,
+},
+{
  .name   = NULL,
  },
  };
diff --git a/qapi-schema.json b/qapi-schema.json
index 9302e7d..73d42d6 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -3619,3 +3619,50 @@
  '*cpuid-input-ecx': 'int',
  'cpuid-register': 'X86CPURegister32',
  'features': 'int' } }
+
+# @VNVRAMEntryInfo:
+#
+# Information about an entry in the VNVRAM.
+#
+# @name: name of the entry
+#
+# @cur-size: current size of the entry's blob in bytes


It's preferable not to abbreviate, you can have current-size.


+#
+# @max-size: max size of the entry's blob in bytes
+#
+# Since: 1.6
+#
+##
+{ 'type': 'VNVRAMEntryInfo',
+  'data': {'name': 'str', 'cur-size': 'int', 'max-size': 'int', } }
+
+##
+# @VNVRAMInfo:
+#
+# Information about the VNVRAM device.
+#
+# @drive-id: ID of the VNVRAM (and associated drive)
+#
+# @virtual-disk-size: Virtual size of the associated disk drive in bytes
+#
+# @vnvram-size: Size of the VNVRAM in bytes
+#
+# @entries: Array of @VNVRAMEntryInfo
+#
+# Since: 1.6
+#
+##
+{ 'type': 'VNVRAMInfo',
+  'data': {'drive-id': 'str', 'virtual-disk-size': 'int',
+   'vnvram-size': 'int', 'entries' : ['VNVRAMEntryInfo']} }
+
+##
+# @query-vnvram:
+#
+# Return information about the VNVRAM devices.
+#
+# Returns: @VNVRAMInfo on success
+#
+# Since: 1.6
+##
+{ 'command': 'query-vnvram', 'returns': ['VNVRAMInfo'] }
diff --git a/qmp-commands.hx b/qmp-commands.hx
index ffd130e..56a57b7 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2932,3 +2932,44 @@ Example:
  - { return: {} }

  EQMP
+
+{
+        .name       = "query-vnvram",
+        .args_type  = "",
+.mhandler.cmd_new = qmp_marshal_input_query_vnvram,
+},
+
+SQMP
+query-vnvram
+
+
+Show VNVRAM info.
+
+Return a json-array of json-objects representing VNVRAMs.  Each VNVRAM
+is described by a json-object with the following:
+
+- drive-id: ID of the VNVRAM (json-string)
- virtual-disk-size: Virtual size

Re: [Qemu-devel] [PATCH] seccomp: add the asynchronous I/O syscalls to the whitelist

2013-05-29 Thread Corey Bryant



On 05/29/2013 04:30 PM, Paul Moore wrote:

In order to enable the asynchronous I/O functionality when using the
seccomp sandbox we need to add the associated syscalls to the
whitelist.

Signed-off-by: Paul Moore pmo...@redhat.com
---
  qemu-seccomp.c |5 -
  1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/qemu-seccomp.c b/qemu-seccomp.c
index 031da1d..ca123bf 100644
--- a/qemu-seccomp.c
+++ b/qemu-seccomp.c
@@ -87,6 +87,7 @@ static const struct QemuSeccompSyscall seccomp_whitelist[] = {
  { SCMP_SYS(stat), 245 },
  { SCMP_SYS(uname), 245 },
  { SCMP_SYS(eventfd2), 245 },
+{ SCMP_SYS(io_getevents), 245 },
  { SCMP_SYS(dup), 245 },
  { SCMP_SYS(dup2), 245 },
  { SCMP_SYS(dup3), 245 },
@@ -229,7 +230,9 @@ static const struct QemuSeccompSyscall seccomp_whitelist[] 
= {
  { SCMP_SYS(sendmmsg), 241 },
  { SCMP_SYS(recvmmsg), 241 },
  { SCMP_SYS(prlimit64), 241 },
-{ SCMP_SYS(waitid), 241 }
+{ SCMP_SYS(waitid), 241 },
+{ SCMP_SYS(io_setup), 241 },
+{ SCMP_SYS(io_destroy), 241 }
  };

  int seccomp_start(void)





Thanks for the patch.  It looks good to me and I see these are used by 
the block aio code.


Reviewed-by: Corey Bryant cor...@linux.vnet.ibm.com

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support

2013-05-24 Thread Corey Bryant



On 05/24/2013 09:06 AM, Kevin Wolf wrote:

Am 23.05.2013 um 19:44 hat Corey Bryant geschrieben:

Provides low-level VNVRAM functionality that reads and writes data,
such as an entry's binary blob, to a drive image using the block
driver.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com



+/*
+ * Increase the drive size if it's too small to fit the VNVRAM data
+ */
+static int vnvram_drv_adjust_size(VNVRAM *vnvram)
+{
+    int rc = 0;
+    int64_t needed_size;
+
+    needed_size = 0;
+
+    if (bdrv_getlength(vnvram->bds) < needed_size) {
+        rc = bdrv_truncate(vnvram->bds, needed_size);
+        if (rc != 0) {
+            DPRINTF("%s: VNVRAM drive too small\n", __func__);
+        }
+    }
+
+    return rc;
+}


This function doesn't make a whole lot of sense. It truncates the file
to size 0 if and only if bdrv_getlength() returns an error.



There's a later patch that adds a get size function and changes the 
initialization of needed_size to the actual size needed to store VNVRAM 
data.  Anyway I should probably just include that change in this patch. 
 I think I'll still need this function or part of it with the new 
simplified approach that it looks like we're going to take.



+
+/*
+ * Write a header to the drive with entry count of zero
+ */
+static int vnvram_drv_hdr_create_empty(VNVRAM *vnvram)
+{
+    VNVRAMDrvHdr hdr;
+
+    hdr.version = VNVRAM_CURRENT_VERSION;
+    hdr.magic = VNVRAM_MAGIC;
+    hdr.num_entries = 0;
+
+    vnvram_drv_hdr_cpu_to_be(&(hdr));
+
+    if (bdrv_pwrite(vnvram->bds, 0, &(hdr), sizeof(hdr)) != sizeof(hdr)) {
+        DPRINTF("%s: Write of header to drive failed\n", __func__);
+        return -EIO;
+    }
+
+    vnvram->end_offset = sizeof(VNVRAMDrvHdr);
+
+    return 0;
+}
+
+/*
+ * Read the header from the drive
+ */
+static int vnvram_drv_hdr_read(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
+{
+    if (bdrv_pread(vnvram->bds, 0, hdr, sizeof(*hdr)) != sizeof(*hdr)) {
+        DPRINTF("%s: Read of header from drive failed\n", __func__);
+        return -EIO;
+    }


Why do you turn all errors into -EIO instead of returning the real error
code? (More instances of the same thing follow)



Good point, there's no reason to mask the original error code.


+
+vnvram_drv_hdr_be_to_cpu(hdr);
+
+return 0;
+}
+}


Kevin





--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage

2013-05-24 Thread Corey Bryant



On 05/23/2013 03:15 PM, Anthony Liguori wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


On 05/23/2013 02:03 PM, Anthony Liguori wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


One of the difficulties in virtualizing a TPM is that it doesn't support
SR-IOV.  So the existing passthrough vTPM can only be used by one guest.
   We're planning to provide a software emulated vTPM that uses libtpms
and it needs to store blobs somewhere that is persistent.  We can't
store blobs in the host TPM's hardware NVRAM.  So we have to virtualize
it in software.  And we figured we'd provide a persistent storage
mechanism that other parts of QEMU could use rather than limit it to
just the vTPM's use.


I think you are misunderstanding my feedback.

See http://mid.gmane.org/87ehf03dgw@codemonkey.ws



It looks like we'll be able to follow what you said in that thread, 
specifically:


Just make the TPM have a DRIVE property, drop all notion of
NVRAM/blobstore, and used fixed offsets into the BlockDriverState for
each blob.

This will limit the functionality to only the vTPM, but it sounds like 
that's desired.  Also it looks like vTPM 1.2 will only have 4 blobs and 
we'll know their max sizes, so we should be able to use fixed offsets 
for them.  This will simplify the code quite a bit.
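
A rough sketch of what such a fixed layout might look like, purely
illustrative -- the actual blob names, sizes, and offsets are not specified
anywhere in this thread:

/* Hypothetical fixed layout of the TPM's drive image; all values below are
 * placeholders, not taken from the series or from libtpms. */
enum {
    TPM_BLOB_PERMANENT_OFFSET = 0x00000,
    TPM_BLOB_VOLATILE_OFFSET  = 0x10000,
    TPM_BLOB_SAVESTATE_OFFSET = 0x20000,
    TPM_BLOB_MAX_SIZE         = 0x10000,   /* per-blob reservation */
};

/* With a layout like this the backend can bdrv_pread()/bdrv_pwrite() each
 * blob at its fixed offset and drop the NVRAM/blobstore directory entirely. */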


I assume we'll still need to use a bottom-half to send read/write 
requests to the main thread.  And from the sounds of it the reads/writes 
will need to be asynchronous.


Does this sound ok?

--
Regards,
Corey Bryant
















Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage

2013-05-24 Thread Corey Bryant



On 05/24/2013 08:36 AM, Stefan Hajnoczi wrote:

On Fri, May 24, 2013 at 08:13:27AM -0400, Stefan Berger wrote:

On 05/24/2013 05:59 AM, Stefan Hajnoczi wrote:

On Thu, May 23, 2013 at 01:44:40PM -0400, Corey Bryant wrote:

This patch series provides VNVRAM persistent storage support that
QEMU can use internally.  The initial target user will be a software
vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
reboot/migrate and retain the keys.

This support uses QEMU's block driver to provide persistent storage
by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
drive image is provided with the -drive command line option just like
any other drive image and the vnvram_create() API will find it.

The APIs allow for VNVRAM entries to be registered, one at a time,
each with a maximum blob size.  Entry blobs can then be read/written
from/to an entry on the drive.  Here's an example of usage:

VNVRAM *vnvram;
int errcode;
const VNVRAMEntryName entry_name;
const char *blob_w = "blob data";
char *blob_r;
uint32_t blob_r_size;

vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
strcpy((char *)entry_name, "first-entry");

VNVRAMEntryName is very prone to buffer overflow.  I hope real code
doesn't use strcpy().  The cast is ugly, please don't hide the type.
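
One way to avoid that, sketched under the assumption that VNVRAMEntryName is
a fixed-size char-array typedef (which is how the series appears to use it);
the helper name is hypothetical:

/* Bounded copy into an entry name; pstrcpy() is QEMU's always-NUL-terminating
 * replacement for strncpy() from cutils.c. */
static void vnvram_entry_name_set(VNVRAMEntryName *name, const char *str)
{
    pstrcpy((char *)name, sizeof(*name), str);
}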


vnvram_register_entry(vnvram, entry_name, 1024);
vnvram_write_entry(vnvram, entry_name, (char *)blob_w, strlen(blob_w)+1);
vnvram_read_entry(vnvram, entry_name, blob_r, blob_r_size);

These are synchronous functions.  If I/O is involved then this is a
problem: QEMU will be blocked waiting for host I/O to complete and the
big QEMU lock is held.  This can cause poor guest interactivity and poor
scalability because vcpus cannot make progress, neither can the QEMU
monitor respond.


The vTPM is going to run as a thread and will have to write state
blobs into a bdrv. The above functions will typically be called from
this thead. When I originally wrote the code, the vTPM thread could
not write the blobs into bdrv directly, so I had to resort to
sending a message to the main QEMU thread to write the data to the
bdrv. How else could we do this?


How else: use asynchronous APIs like bdrv_aio_writev() or the coroutine
versions (which eliminate the need for callbacks) like bdrv_co_writev().

I'm preparing patches that allow the QEMU block layer to be used safely
outside the QEMU global mutex.  Once this is possible it would be okay
to use synchronous methods.
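
A minimal sketch of the coroutine flavour Stefan mentions; the request fields
(bds, sector_num, num_sectors, qiov) are assumptions modelled on the later
TPM NVRAM patches, not taken from the VNVRAM series itself:

static void coroutine_fn vnvram_co_write_blob(void *opaque)
{
    VNVRAMRWRequest *rwr = opaque;

    /* Runs in coroutine context, so this yields until the I/O completes
     * instead of blocking the main loop. */
    rwr->rc = bdrv_co_writev(rwr->bds, rwr->sector_num,
                             rwr->num_sectors, rwr->qiov);
}

/* The bottom-half would enter it with qemu_coroutine_create() /
 * qemu_coroutine_enter(), so the vTPM thread never touches the block layer
 * directly. */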


Ok thanks.  I'll use aio APIs next time around.  Just to be clear, does 
eliminating the callback mean I don't have to use a bottom-half if I 
use coroutine reads/writes?


--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support

2013-05-24 Thread Corey Bryant



On 05/24/2013 11:37 AM, Kevin Wolf wrote:

Am 24.05.2013 um 17:33 hat Corey Bryant geschrieben:



On 05/24/2013 09:06 AM, Kevin Wolf wrote:

Am 23.05.2013 um 19:44 hat Corey Bryant geschrieben:

Provides low-level VNVRAM functionality that reads and writes data,
such as an entry's binary blob, to a drive image using the block
driver.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com



+/*
+ * Increase the drive size if it's too small to fit the VNVRAM data
+ */
+static int vnvram_drv_adjust_size(VNVRAM *vnvram)
+{
+int rc = 0;
+int64_t needed_size;
+
+needed_size = 0;
+
+if (bdrv_getlength(vnvram-bds)  needed_size) {
+rc = bdrv_truncate(vnvram-bds, needed_size);
+if (rc != 0) {
+DPRINTF(%s: VNVRAM drive too small\n, __func__);
+}
+}
+
+return rc;
+}


This function doesn't make a whole lot of sense. It truncates the file
to size 0 if and only if bdrv_getlength() returns an error.



There's a later patch that adds a get size function and changes
the initialization of needed_size to the actual size needed to store
VNVRAM data.  Anyway I should probably just include that change in
this patch.  I think I'll still need this function or part of it
with the new simplified approach that it looks like we're going to
take.


Okay. But even then, do you really want to truncate on errors?

Kevin





True, it'll need something to account for bdrv_getlength() failures and 
not truncate in that case.


--
Regards,
Corey Bryant




[Qemu-devel] [PATCH 2/7] vnvram: VNVRAM in-memory support

2013-05-23 Thread Corey Bryant
Provides support for in-memory VNVRAM entries.  The in-memory
entries are used for fast access to entry data such as the
current or max size of an entry and the disk offset where an
entry's binary blob data is stored.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 vnvram.c |  196 +-
 1 files changed, 195 insertions(+), 1 deletions(-)

diff --git a/vnvram.c b/vnvram.c
index e467198..37b7070 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -13,6 +13,7 @@
 
 #include vnvram.h
 #include block/block.h
+#include monitor/monitor.h
 
 /*
 #define VNVRAM_DEBUG
@@ -69,6 +70,14 @@ typedef struct VNVRAMDrvEntry {
 
 static int vnvram_drv_entry_create(VNVRAM *, VNVRAMEntry *, uint64_t, 
uint32_t);
 static int vnvram_drv_entry_update(VNVRAM *, VNVRAMEntry *, uint64_t, 
uint32_t);
+static int vnvram_register_entry_internal(VNVRAM *, const VNVRAMEntryName *,
+  uint64_t, uint32_t, uint32_t);
+static VNVRAMEntry *vnvram_find_entry(VNVRAM *, const VNVRAMEntryName *);
+static uint64_t vnvram_get_size_kb(VNVRAM *);
+
+/* Round a value up to the next SIZE */
+#define ROUNDUP(VAL, SIZE) \
+(((VAL)+(SIZE)-1) & ~((SIZE)-1))
 
 /*
  * Macros for finding entries and their drive offsets
@@ -154,7 +163,8 @@ static int vnvram_drv_adjust_size(VNVRAM *vnvram)
 int rc = 0;
 int64_t needed_size;
 
-needed_size = 0;
+/* qcow2 size needs to be multiple of 512 */
+needed_size = vnvram_get_size_kb(vnvram) * 1024;
 
     if (bdrv_getlength(vnvram->bds) < needed_size) {
         rc = bdrv_truncate(vnvram->bds, needed_size);
@@ -485,3 +495,187 @@ static bool vnvram_drv_hdr_is_valid(VNVRAM *vnvram, 
VNVRAMDrvHdr *hdr)
 
 return true;
 }
+
+/ VNVRAM in-memory ***/
+/* High-level VNVRAM functions that work with in-memory entries. */
+/*/
+
+/*
+ * Check if the specified vnvram has been created
+ */
+static bool vnvram_exists(VNVRAM *vnvram_target)
+{
+VNVRAM *vnvram;
+
+    QLIST_FOREACH(vnvram, &vnvrams, list) {
+if (vnvram == vnvram_target) {
+return true;
+}
+}
+
+return false;
+}
+
+/*
+ * Get total size of the VNVRAM
+ */
+static uint64_t vnvram_get_size(VNVRAM *vnvram)
+{
+const VNVRAMEntry *entry;
+uint64_t totsize = sizeof(VNVRAMDrvHdr);
+
+for (entry = VNVRAM_FIRST_ENTRY(vnvram); entry != NULL;
+ entry = VNVRAM_NEXT_ENTRY(entry)) {
+        totsize += sizeof(VNVRAMDrvEntry) + entry->max_size;
+}
+
+return totsize;
+}
+
+/*
+ * Get the total size of the VNVRAM in kilobytes (rounded up to the next kb)
+ */
+static uint64_t vnvram_get_size_kb(VNVRAM *vnvram)
+{
+return ROUNDUP(vnvram_get_size(vnvram), 1024) / 1024;
+}
+
+/*
+ * Check if the VNVRAM entries are valid
+ */
+static bool vnvram_entries_are_valid(VNVRAM *vnvram, uint64_t drv_size)
+{
+    const VNVRAMEntry *i_entry, *j_entry;
+
+    /* Entries must not overlap or point beyond end of drive size */
+    for (i_entry = VNVRAM_FIRST_ENTRY(vnvram); i_entry != NULL;
+         i_entry = VNVRAM_NEXT_ENTRY(i_entry)) {
+
+        uint64_t i_blob_start = i_entry->blob_offset;
+        uint64_t i_blob_end = i_blob_start + i_entry->max_size-1;
+
+        if (i_entry->max_size == 0) {
+            DPRINTF("%s: VNVRAM entry max size shouldn't be 0\n", __func__);
+            return false;
+        }
+
+        if (i_blob_end > drv_size) {
+            DPRINTF("%s: VNVRAM entry blob too large for drive\n", __func__);
+            return false;
+        }
+
+        for (j_entry = VNVRAM_NEXT_ENTRY(i_entry); j_entry != NULL;
+             j_entry = VNVRAM_NEXT_ENTRY(j_entry)) {
+
+            uint64_t j_blob_start = j_entry->blob_offset;
+            uint64_t j_blob_end = j_blob_start + j_entry->max_size-1;
+
+            if (j_entry->max_size == 0) {
+                DPRINTF("%s: VNVRAM entry max size shouldn't be 0\n",
+                        __func__);
+                return false;
+            }
+
+            if (j_blob_end > drv_size) {
+                DPRINTF("%s: VNVRAM entry blob too large for drive\n",
+                        __func__);
+                return false;
+            }
+
+            if ((i_blob_start >= j_blob_start && i_blob_start <= j_blob_end) ||
+                (i_blob_end   >= j_blob_start && i_blob_end   <= j_blob_end)) {
+                DPRINTF("%s: VNVRAM entries overlap\n", __func__);
+                return false;
+            }
+        }
+    }
+
+    return true;
+}
+
+/*
+ * Synchronize the in-memory VNVRAM entries with those found on the drive.
+ */
+static int vnvram_sync_from_drv(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
+{
+int rc = 0, num_entries = 0, i;
+VNVRAMDrvEntry *drv_entries = NULL;
+
+    rc = vnvram_drv_entries_get(vnvram, hdr, &drv_entries, &num_entries);
+    if (rc != 0) {
+        return rc;
+    }
+
+    for (i = 0; i < num_entries; i++) {
+rc

[Qemu-devel] [PATCH 4/7] vnvram: VNVRAM internal APIs

2013-05-23 Thread Corey Bryant
Provides VNVRAM APIs that can be used by other areas of QEMU to
provide persistent storage.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 vnvram.c |  266 ++
 vnvram.h |   14 +++
 2 files changed, 280 insertions(+), 0 deletions(-)

diff --git a/vnvram.c b/vnvram.c
index 4157482..357923d 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -15,6 +15,7 @@
 #include block/block.h
 #include monitor/monitor.h
 #include qemu/thread.h
+#include sysemu/sysemu.h
 
 /*
 #define VNVRAM_DEBUG
@@ -821,3 +822,268 @@ static int vnvram_rwrequest_schedule(VNVRAMRWRequest *rwr)
 
 return rc;
 }
+
+/* VNVRAM APIs ***/
+/* VNVRAM APIs that can be used by QEMU to provide persistent storage*/
+/*/
+
+/*
+ * Initialize VNVRAM
+ *
+ * This must be called before any other APIs.
+ */
+int vnvram_init(void)
+{
+    qemu_mutex_init(&vnvram_rwrequests_mutex);
+    vnvram_bh = qemu_bh_new(vnvram_rwrequest_callback, NULL);
+    DPRINTF("%s: VNVRAM initialized\n", __func__);
+
+return 0;
+}
+
+/*
+ * Create a VNVRAM instance
+ *
+ * The VNVRAM instance will use the drive with the corresponding ID as
+ * its persistent storage device.
+ */
+VNVRAM *vnvram_create(const char *drv_id, bool fail_on_invalid, int *errcode)
+{
+int rc;
+VNVRAM *vnvram = NULL;
+VNVRAMDrvHdr hdr;
+BlockDriverState *bds;
+
+*errcode = 0;
+
+if (runstate_check(RUN_STATE_INMIGRATE)) {
+qerror_report(QERR_MIGRATION_ACTIVE);
+rc = -EAGAIN;
+goto err_exit;
+}
+
+bds = bdrv_find(drv_id);
+if (!bds) {
+qerror_report(QERR_DEVICE_NOT_FOUND, drv_id);
+rc = -ENOENT;
+goto err_exit;
+}
+
+if (bdrv_is_read_only(bds)) {
+qerror_report(QERR_DEVICE_IS_READ_ONLY, drv_id);
+rc = -EPERM;
+goto err_exit;
+}
+
+bdrv_lock_medium(bds, true);
+
+vnvram = vnvram_drv_find_by_id(drv_id);
+if (vnvram) {
+        /* This VNVRAM was already created - success */
+return vnvram;
+}
+
+vnvram = g_new0(VNVRAM, 1);
+    vnvram->drv_id = g_strdup(drv_id);
+    vnvram->bds = bds;
+
+    QLIST_INIT(&vnvram->entries_head);
+
+rc = vnvram_drv_adjust_size(vnvram);
+if (rc != 0) {
+qerror_report(QERR_IO_ERROR);
+goto err_exit;
+}
+
+    rc = vnvram_drv_hdr_read(vnvram, &(hdr));
+if (rc != 0) {
+qerror_report(QERR_IO_ERROR);
+goto err_exit;
+}
+
+    if (vnvram_drv_hdr_is_valid(vnvram, &(hdr))) {
+        rc = vnvram_sync_from_drv(vnvram, &(hdr));
+        if (rc != 0) {
+            qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM drive sync "
+                          "error");
+            goto err_exit;
+        }
+
+        if (vnvram_entries_are_valid(vnvram, bdrv_getlength(vnvram->bds))) {
+/* Sync'd VNVRAM drive looks good - success */
+goto exit;
+}
+}
+
+/* Drive data looks invalid.  Could be encrypted and we didn't get key? */
+    if (bdrv_is_encrypted(vnvram->bds)) {
+        DPRINTF("%s: VNVRAM drive is encrypted\n", __func__);
+}
+
+/* Either fail or reformat the drive. */
+if (fail_on_invalid) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM drive not valid");
+rc = -EIO;
+goto err_exit;
+}
+
+rc = vnvram_drv_hdr_create_empty(vnvram);
+if (rc != 0) {
+qerror_report(QERR_IO_ERROR);
+goto err_exit;
+}
+
+exit:
+    QLIST_INSERT_HEAD(&vnvrams, vnvram, list);
+    DPRINTF("%s: VNVRAM with drive '%s' created\n", __func__, vnvram->drv_id);
+
+return vnvram;
+
+err_exit:
+if (vnvram) {
+        g_free(vnvram->drv_id);
+}
+g_free(vnvram);
+*errcode = rc;
+
+return NULL;
+}
+
+/*
+ * Register a VNVRAM entry
+ *
+ * The entry information will not be flushed to the drive until the next
+ * write.
+ */
+int vnvram_register_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
+  uint32_t max_size)
+{
+if (!vnvram_exists(vnvram)) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM has not been "
+                      "created");
+return -EPERM;
+}
+
+return vnvram_register_entry_internal(vnvram, entry_name, 0, 0, max_size);
+}
+
+/*
+ * Deregister a VNVRAM entry
+ */
+int vnvram_deregister_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name)
+{
+VNVRAMEntry *entry;
+
+if (!vnvram_exists(vnvram)) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM has not been "
+                      "created");
+return -EPERM;
+}
+
+QLIST_FOREACH(entry, &vnvram->entries_head, next) {
+if (!strncmp(entry->name, (char *)entry_name, sizeof(*entry_name))) {
+if (entry->cur_size != 0) {
+qerror_report(ERROR_CLASS_GENERIC_ERROR,
+  "VNVRAM entry already written to disk");
+return -EPERM;
+}
+QLIST_REMOVE(entry, next

[Qemu-devel] [PATCH 0/7] VNVRAM persistent storage

2013-05-23 Thread Corey Bryant
This patch series provides VNVRAM persistent storage support that
QEMU can use internally.  The initial target user will be a software
vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
reboot/migrate and retain the keys.

This support uses QEMU's block driver to provide persistent storage
by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
drive image is provided with the -drive command line option just like
any other drive image and the vnvram_create() API will find it.
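For example (illustrative only -- the image path below is made up, not part of
this series), the backing image could be supplied as:

  -drive file=/path/to/vnvram.img,id=drive-ide0-0-0

and that drive id is what gets passed to vnvram_create().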

The APIs allow for VNVRAM entries to be registered, one at a time,
each with a maximum blob size.  Entry blobs can then be read/written
from/to an entry on the drive.  Here's an example of usage:

VNVRAM *vnvram;
int errcode;
const VNVRAMEntryName entry_name;
const char *blob_w = "blob data";
char *blob_r;
uint32_t blob_r_size;

vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
strcpy((char *)entry_name, "first-entry");
vnvram_register_entry(vnvram, &entry_name, 1024);
vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
vnvram_delete(vnvram);

Thanks,
Corey

Corey Bryant (7):
  vnvram: VNVRAM bdrv support
  vnvram: VNVRAM in-memory support
  vnvram: VNVRAM bottom-half r/w scheduling support
  vnvram: VNVRAM internal APIs
  vnvram: VNVRAM additional debug support
  main: Initialize VNVRAM
  monitor: QMP/HMP support for retrieving VNVRAM details

 Makefile.objs|2 +
 hmp.c|   32 ++
 hmp.h|1 +
 monitor.c|7 +
 qapi-schema.json |   47 ++
 qmp-commands.hx  |   41 ++
 vl.c |6 +
 vnvram.c | 1254 ++
 vnvram.h |   36 ++
 9 files changed, 1426 insertions(+), 0 deletions(-)
 create mode 100644 vnvram.c
 create mode 100644 vnvram.h




[Qemu-devel] [PATCH 3/7] vnvram: VNVRAM bottom-half r/w scheduling support

2013-05-23 Thread Corey Bryant
Provides support that schedules and executes VNVRAM read/write
requests.  A bottom-half is used to perform reads/writes from
the QEMU main thread.
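(Illustration only, not part of the patch: the hand-off used here -- the
requester blocks on a condition variable while the main-loop bottom half
executes the request and signals completion -- can be sketched standalone
with plain pthreads standing in for QEMU's BH and qemu_mutex/qemu_cond
wrappers; all names below are made up.)

/* handoff-sketch.c -- build with: gcc handoff-sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    int rc;                       /* result filled in by the "main loop" */
    int done;                     /* predicate guarding against spurious wakeups */
    pthread_mutex_t lock;
    pthread_cond_t completion;
} Request;

static Request req = { .lock = PTHREAD_MUTEX_INITIALIZER,
                       .completion = PTHREAD_COND_INITIALIZER };

/* Stands in for the bottom-half callback: do the work, then signal. */
static void *main_loop(void *arg)
{
    pthread_mutex_lock(&req.lock);
    req.rc = 0;                   /* pretend the read/write succeeded */
    req.done = 1;
    pthread_cond_signal(&req.completion);
    pthread_mutex_unlock(&req.lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, main_loop, NULL);

    /* Stands in for the scheduling side: block until the request completes. */
    pthread_mutex_lock(&req.lock);
    while (!req.done) {
        pthread_cond_wait(&req.completion, &req.lock);
    }
    pthread_mutex_unlock(&req.lock);

    printf("request finished, rc=%d\n", req.rc);
    pthread_join(tid, NULL);
    return 0;
}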

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 vnvram.c |  142 ++
 1 files changed, 142 insertions(+), 0 deletions(-)

diff --git a/vnvram.c b/vnvram.c
index 37b7070..4157482 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -14,6 +14,7 @@
 #include "vnvram.h"
 #include "block/block.h"
 #include "monitor/monitor.h"
+#include "qemu/thread.h"
 
 /*
 #define VNVRAM_DEBUG
@@ -68,6 +69,30 @@ typedef struct VNVRAMDrvEntry {
 VNVRAM_ENTRY_DATA
 } QEMU_PACKED VNVRAMDrvEntry;
 
+/* Used to pass read/write requests to the bottom-half function */
+typedef struct VNVRAMRWRequest {
+VNVRAM *vnvram;
+VNVRAMEntry *entry;
+bool is_write;
+char **blob_r;
+uint32_t *blob_r_size;
+char *blob_w;
+uint32_t blob_w_size;
+int rc;
+
+QemuMutex completion_mutex;
+QemuCond completion;
+
+QSIMPLEQ_ENTRY(VNVRAMRWRequest) list;
+} VNVRAMRWRequest;
+
+/* A mutex protected queue where read/write requests are stored */
+static QemuMutex vnvram_rwrequests_mutex;
+static QSIMPLEQ_HEAD(, VNVRAMRWRequest) vnvram_rwrequests =
+QSIMPLEQ_HEAD_INITIALIZER(vnvram_rwrequests);
+
+static QEMUBH *vnvram_bh;
+
 static int vnvram_drv_entry_create(VNVRAM *, VNVRAMEntry *, uint64_t, 
uint32_t);
 static int vnvram_drv_entry_update(VNVRAM *, VNVRAMEntry *, uint64_t, 
uint32_t);
 static int vnvram_register_entry_internal(VNVRAM *, const VNVRAMEntryName *,
@@ -679,3 +704,120 @@ static VNVRAMEntry *vnvram_find_entry(VNVRAM *vnvram,
 
 return NULL;
 }
+
+/*************************** VNVRAM rwrequest ***************************/
+/* High-level VNVRAM functions that schedule and kick off read/write    */
+/* requests.                                                             */
+/*************************************************************************/
+
+/*
+ * VNVRAMRWRequest initialization for read requests
+ */
+static VNVRAMRWRequest *vnvram_rwrequest_init_read(VNVRAM *vnvram,
+   VNVRAMEntry *entry,
+   char **blob,
+   uint32_t *blob_size)
+{
+VNVRAMRWRequest *rwr;
+
+rwr = g_new0(VNVRAMRWRequest, 1);
+
+rwr->is_write = false;
+rwr->entry = entry;
+rwr->vnvram = vnvram;
+rwr->blob_r = blob;
+rwr->blob_r_size = blob_size;
+
+return rwr;
+}
+
+/*
+ * VNVRAMRWRequest initialization for write requests
+ */
+static VNVRAMRWRequest *vnvram_rwrequest_init_write(VNVRAM *vnvram,
+VNVRAMEntry *entry,
+char *blob,
+uint32_t blob_size)
+{
+VNVRAMRWRequest *rwr;
+
+rwr = g_new0(VNVRAMRWRequest, 1);
+
+rwr->is_write = true;
+rwr->entry = entry;
+rwr->vnvram = vnvram;
+rwr->blob_w = blob;
+rwr->blob_w_size = blob_size;
+
+return rwr;
+}
+
+/*
+ * Execute a read or write of blob data based on an VNVRAMRWRequest
+ */
+static int vnvram_rwrequest_exec(VNVRAMRWRequest *rwr)
+{
+int rc = 0;
+
+if (rwr->is_write) {
+rc = vnvram_drv_entry_write_blob(rwr->vnvram, rwr->entry,
+ rwr->blob_w, rwr->blob_w_size);
+} else {
+rc = vnvram_drv_entry_read_blob(rwr->vnvram, rwr->entry,
+rwr->blob_r, rwr->blob_r_size);
+}
+
+rwr->rc = rc;
+
+qemu_mutex_lock(&rwr->completion_mutex);
+qemu_cond_signal(&rwr->completion);
+qemu_mutex_unlock(&rwr->completion_mutex);
+
+return rc;
+}
+
+/*
+ * Bottom-half callback that is invoked by QEMU's main thread to
+ * process VNVRAM read/write requests.
+ */
+static void vnvram_rwrequest_callback(void *opaque)
+{
+VNVRAMRWRequest *rwr, *next;
+
+qemu_mutex_lock(&vnvram_rwrequests_mutex);
+
+QSIMPLEQ_FOREACH_SAFE(rwr, &vnvram_rwrequests, list, next) {
+QSIMPLEQ_REMOVE(&vnvram_rwrequests, rwr, VNVRAMRWRequest, list);
+
+qemu_mutex_unlock(&vnvram_rwrequests_mutex);
+
+vnvram_rwrequest_exec(rwr);
+
+qemu_mutex_lock(&vnvram_rwrequests_mutex);
+}
+
+qemu_mutex_unlock(&vnvram_rwrequests_mutex);
+}
+
+/*
+ * Schedules a bottom-half to read or write a blob to the VNVRAM drive.
+ */
+static int vnvram_rwrequest_schedule(VNVRAMRWRequest *rwr)
+{
+int rc = 0;
+
+qemu_mutex_lock(&vnvram_rwrequests_mutex);
+QSIMPLEQ_INSERT_TAIL(&vnvram_rwrequests, rwr, list);
+qemu_mutex_unlock(&vnvram_rwrequests_mutex);
+
+qemu_bh_schedule(vnvram_bh);
+
+/* All reads/writes are synchronous so we wait for completion */
+qemu_mutex_lock(&rwr->completion_mutex);
+qemu_cond_wait(&rwr->completion, &rwr->completion_mutex);
+qemu_mutex_unlock(&rwr->completion_mutex);
+
+rc = rwr-rc;
+
+return rc;
+}
-- 
1.7.1




[Qemu-devel] [PATCH 5/7] vnvram: VNVRAM additional debug support

2013-05-23 Thread Corey Bryant
Provides debug support that dumps the disk and in-memory VNVRAM
contents to stderr.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 vnvram.c |   94 ++
 1 files changed, 94 insertions(+), 0 deletions(-)

diff --git a/vnvram.c b/vnvram.c
index 357923d..9c4f64f 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -19,6 +19,7 @@
 
 /*
 #define VNVRAM_DEBUG
+#define VNVRAM_DEBUG_DUMP
 */
 
 #ifdef VNVRAM_DEBUG
@@ -134,6 +135,47 @@ static uint64_t vnvram_get_size_kb(VNVRAM *);
 /* entries.  */
 /*/
 
+#ifdef VNVRAM_DEBUG_DUMP
+static void vnvram_drv_dump(void)
+{
+int rc, i, num_entries;
+VNVRAM *vnvram;
+VNVRAMDrvHdr hdr;
+VNVRAMDrvEntry *drv_entries = NULL;
+
+QLIST_FOREACH(vnvram, &vnvrams, list) {
+rc = vnvram_drv_hdr_read(vnvram, &hdr);
+if (rc != 0) {
+goto err_exit;
+}
+
+rc = vnvram_drv_entries_get(vnvram, &hdr, &drv_entries, &num_entries);
+if (rc != 0) {
+goto err_exit;
+}
+
+DPRINTF("VNVRAM drv dump:\n");
+DPRINTF("  version = %" PRIu16 "\n", hdr.version);
+DPRINTF("  magic = %" PRIu32 "\n", hdr.magic);
+DPRINTF("  num_entries = %" PRIu32 "\n", hdr.num_entries);
+
+for (i = 0; i < num_entries; i++) {
+DPRINTF("name = %s\n", drv_entries[i].name);
+DPRINTF("blob_offset = %" PRIu64 "\n",
+   drv_entries[i].blob_offset);
+DPRINTF("cur_size = %" PRIu32 "\n", drv_entries[i].cur_size);
+DPRINTF("max_size = %" PRIu32 "\n", drv_entries[i].max_size);
+}
+
+g_free(drv_entries);
+drv_entries = NULL;
+}
+
+err_exit:
+g_free(drv_entries);
+}
+#endif
+
 /*
  * Big-endian conversions
  */
@@ -526,6 +568,28 @@ static bool vnvram_drv_hdr_is_valid(VNVRAM *vnvram, 
VNVRAMDrvHdr *hdr)
 /* High-level VNVRAM functions that work with in-memory entries. */
 /*/
 
+#ifdef VNVRAM_DEBUG_DUMP
+static void vnvram_dump(void)
+{
+VNVRAM *vnvram;
+const VNVRAMEntry *entry;
+
+QLIST_FOREACH(vnvram, &vnvrams, list) {
+DPRINTF("VNVRAM dump:\n");
+DPRINTF("  drv_id = %s\n", vnvram->drv_id);
+DPRINTF("  end_offset = %" PRIu64 "\n", vnvram->end_offset);
+DPRINTF("  bds = %p\n", vnvram->bds);
+
+QLIST_FOREACH(entry, &vnvram->entries_head, next) {
+DPRINTF("name = %s\n", entry->name);
+DPRINTF("blob_offset = %" PRIu64 "\n", entry->blob_offset);
+DPRINTF("cur_size = %" PRIu32 "\n", entry->cur_size);
+DPRINTF("max_size = %" PRIu32 "\n", entry->max_size);
+}
+}
+}
+#endif
+
 /*
  * Check if the specified vnvram has been created
  */
@@ -626,6 +690,11 @@ static int vnvram_sync_from_drv(VNVRAM *vnvram, 
VNVRAMDrvHdr *hdr)
 int rc = 0, num_entries = 0, i;
 VNVRAMDrvEntry *drv_entries = NULL;
 
+#ifdef VNVRAM_DEBUG_DUMP
+vnvram_dump();
+vnvram_drv_dump();
+#endif
+
 rc = vnvram_drv_entries_get(vnvram, hdr, &drv_entries, &num_entries);
 if (rc != 0) {
 return rc;
@@ -644,6 +713,11 @@ static int vnvram_sync_from_drv(VNVRAM *vnvram, 
VNVRAMDrvHdr *hdr)
 
 vnvram->end_offset = vnvram_get_size(vnvram);
 
+#ifdef VNVRAM_DEBUG_DUMP
+vnvram_dump();
+vnvram_drv_dump();
+#endif
+
 err_exit:
 g_free(drv_entries);
 
@@ -1007,6 +1081,11 @@ int vnvram_read_entry(VNVRAM *vnvram, const 
VNVRAMEntryName *entry_name,
 VNVRAMEntry *entry;
 VNVRAMRWRequest *rwr;
 
+#ifdef VNVRAM_DEBUG_DUMP
+vnvram_dump();
+vnvram_drv_dump();
+#endif
+
 *blob = NULL;
 *blob_size = 0;
 
@@ -1027,6 +1106,11 @@ int vnvram_read_entry(VNVRAM *vnvram, const 
VNVRAMEntryName *entry_name,
 
 g_free(rwr);
 
+#ifdef VNVRAM_DEBUG_DUMP
+vnvram_dump();
+vnvram_drv_dump();
+#endif
+
 return rc;
 }
 
@@ -1040,6 +1124,11 @@ int vnvram_write_entry(VNVRAM *vnvram, const 
VNVRAMEntryName *entry_name,
 VNVRAMEntry *entry;
 VNVRAMRWRequest *rwr;
 
+#ifdef VNVRAM_DEBUG_DUMP
+vnvram_dump();
+vnvram_drv_dump();
+#endif
+
 if (!vnvram_exists(vnvram)) {
 qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM has not been created");
 return -EPERM;
@@ -1057,6 +1146,11 @@ int vnvram_write_entry(VNVRAM *vnvram, const 
VNVRAMEntryName *entry_name,
 
 g_free(rwr);
 
+#ifdef VNVRAM_DEBUG_DUMP
+vnvram_dump();
+vnvram_drv_dump();
+#endif
+
 return rc;
 }
 
-- 
1.7.1




[Qemu-devel] [PATCH 6/7] main: Initialize VNVRAM

2013-05-23 Thread Corey Bryant
Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 vl.c |6 ++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/vl.c b/vl.c
index 59dc0b4..5da88e6 100644
--- a/vl.c
+++ b/vl.c
@@ -171,6 +171,8 @@ int main(int argc, char **argv)
 #include "ui/qemu-spice.h"
 #include "qapi/string-input-visitor.h"
 
+#include "vnvram.h"
+
 //#define DEBUG_NET
 //#define DEBUG_SLIRP
 
@@ -4174,6 +4176,10 @@ int main(int argc, char **argv, char **envp)
 exit(1);
 }
 
+if (vnvram_init()) {
+exit(1);
+}
+
 #ifdef CONFIG_TPM
 if (tpm_init() < 0) {
 exit(1);
-- 
1.7.1




[Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details

2013-05-23 Thread Corey Bryant
Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 hmp.c|   32 
 hmp.h|1 +
 monitor.c|7 +
 qapi-schema.json |   47 +++
 qmp-commands.hx  |   41 +++
 vnvram.c |   71 ++
 6 files changed, 199 insertions(+), 0 deletions(-)

diff --git a/hmp.c b/hmp.c
index 4fb76ec..a144f73 100644
--- a/hmp.c
+++ b/hmp.c
@@ -653,6 +653,38 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
 qapi_free_TPMInfoList(info_list);
 }
 
+void hmp_info_vnvram(Monitor *mon, const QDict *dict)
+{
+VNVRAMInfoList *info_list, *info;
+Error *err = NULL;
+unsigned int c = 0;
+
+info_list = qmp_query_vnvram(&err);
+if (err) {
+monitor_printf(mon, "VNVRAM not found\n");
+error_free(err);
+return;
+}
+
+for (info = info_list; info; info = info->next) {
+VNVRAMInfo *ni = info->value;
+VNVRAMEntryInfoList *einfo_list = ni->entries, *einfo;
+unsigned int d = 0;
+monitor_printf(mon, "vnvram%u: drive-id=%s "
+   "virtual-disk-size=%" PRId64 " vnvram-size=%" PRIu64 "\n",
+   c++, ni->drive_id, ni->virtual_disk_size,
+   ni->vnvram_size);
+
+for (einfo = einfo_list; einfo; einfo = einfo->next) {
+VNVRAMEntryInfo *nei = einfo->value;
+monitor_printf(mon, "  entry%u: name=%s cur-size=%" PRIu64 " "
+   "max-size=%" PRIu64 "\n",
+   d++, nei->name, nei->cur_size, nei->max_size);
+}
+}
+qapi_free_VNVRAMInfoList(info_list);
+}
+
 void hmp_quit(Monitor *mon, const QDict *qdict)
 {
 monitor_suspend(mon);
diff --git a/hmp.h b/hmp.h
index 95fe76e..e26daf2 100644
--- a/hmp.h
+++ b/hmp.h
@@ -37,6 +37,7 @@ void hmp_info_balloon(Monitor *mon, const QDict *qdict);
 void hmp_info_pci(Monitor *mon, const QDict *qdict);
 void hmp_info_block_jobs(Monitor *mon, const QDict *qdict);
 void hmp_info_tpm(Monitor *mon, const QDict *qdict);
+void hmp_info_vnvram(Monitor *mon, const QDict *dict);
 void hmp_quit(Monitor *mon, const QDict *qdict);
 void hmp_stop(Monitor *mon, const QDict *qdict);
 void hmp_system_reset(Monitor *mon, const QDict *qdict);
diff --git a/monitor.c b/monitor.c
index 62aaebe..c10fe15 100644
--- a/monitor.c
+++ b/monitor.c
@@ -2764,6 +2764,13 @@ static mon_cmd_t info_cmds[] = {
 .mhandler.cmd = hmp_info_tpm,
 },
 {
+.name   = "vnvram",
+.args_type  = "",
+.params = "",
+.help   = "show VNVRAM information",
+.mhandler.cmd = hmp_info_vnvram,
+},
+{
 .name   = NULL,
 },
 };
diff --git a/qapi-schema.json b/qapi-schema.json
index 9302e7d..73d42d6 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -3619,3 +3619,50 @@
 '*cpuid-input-ecx': 'int',
 'cpuid-register': 'X86CPURegister32',
 'features': 'int' } }
+
+# @VNVRAMEntryInfo:
+#
+# Information about an entry in the VNVRAM.
+#
+# @name: name of the entry
+#
+# @cur-size: current size of the entry's blob in bytes
+#
+# @max-size: max size of the entry's blob in bytes
+#
+# Since: 1.6
+#
+##
+{ 'type': 'VNVRAMEntryInfo',
+  'data': {'name': 'str', 'cur-size': 'int', 'max-size': 'int', } }
+
+##
+# @VNVRAMInfo:
+#
+# Information about the VNVRAM device.
+#
+# @drive-id: ID of the VNVRAM (and associated drive)
+#
+# @virtual-disk-size: Virtual size of the associated disk drive in bytes
+#
+# @vnvram-size: Size of the VNVRAM in bytes
+#
+# @entries: Array of @VNVRAMEntryInfo
+#
+# Since: 1.6
+#
+##
+{ 'type': 'VNVRAMInfo',
+  'data': {'drive-id': 'str', 'virtual-disk-size': 'int',
+   'vnvram-size': 'int', 'entries' : ['VNVRAMEntryInfo']} }
+
+##
+# @query-vnvram:
+#
+# Return information about the VNVRAM devices.
+#
+# Returns: @VNVRAMInfo on success
+#
+# Since: 1.6
+##
+{ 'command': 'query-vnvram', 'returns': ['VNVRAMInfo'] }
diff --git a/qmp-commands.hx b/qmp-commands.hx
index ffd130e..56a57b7 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2932,3 +2932,44 @@ Example:
 - { return: {} }
 
 EQMP
+
+{
+.name   = "query-vnvram",
+.args_type  = "",
+.mhandler.cmd_new = qmp_marshal_input_query_vnvram,
+},
+
+SQMP
+query-vnvram
+------------
+
+Show VNVRAM info.
+
+Return a json-array of json-objects representing VNVRAMs.  Each VNVRAM
+is described by a json-object with the following:
+
+- drive-id: ID of the VNVRAM (json-string)
+- virtual-disk-size: Virtual size of associated disk drive in bytes (json-int)
+- vnvram-size: Size of the VNVRAM in bytes (json-int)
+- entries: json-array of json-objects representing entries
+
+Each entry is described by a json-object with the following:
+
+- name: Name of the entry (json-string)
+- cur-size: Current size of the entry's blob in bytes (json-int)
+- max-size: Max size of the entry's blob in bytes (json

[Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support

2013-05-23 Thread Corey Bryant
Provides low-level VNVRAM functionality that reads and writes data,
such as an entry's binary blob, to a drive image using the block
driver.

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 Makefile.objs |2 +
 vnvram.c  |  487 +
 vnvram.h  |   22 +++
 3 files changed, 511 insertions(+), 0 deletions(-)
 create mode 100644 vnvram.c
 create mode 100644 vnvram.h

diff --git a/Makefile.objs b/Makefile.objs
index 286ce06..4875a94 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -76,6 +76,8 @@ common-obj-$(CONFIG_SECCOMP) += qemu-seccomp.o
 
 common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
 
+common-obj-y += vnvram.o
+
 ##
 # qapi
 
diff --git a/vnvram.c b/vnvram.c
new file mode 100644
index 000..e467198
--- /dev/null
+++ b/vnvram.c
@@ -0,0 +1,487 @@
+/*
+ * VNVRAM -- stores persistent data in image files
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Bergerstef...@us.ibm.com
+ *  Corey Bryant cor...@linux.vnet.ibm.com
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "vnvram.h"
+#include "block/block.h"
+
+/*
+#define VNVRAM_DEBUG
+*/
+
+#ifdef VNVRAM_DEBUG
+#define DPRINTF(fmt, ...) \
+do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+do { } while (0)
+#endif
+
+#define VNVRAM_ENTRY_DATA  \
+VNVRAMEntryName name; /* name of entry */  \
+uint64_t blob_offset; /* start of blob on drive */ \
+uint32_t cur_size;/* current size of blob */   \
+uint32_t max_size;/* max size of blob */
+
+/* The following VNVRAM information is stored in-memory */
+typedef struct VNVRAMEntry {
+VNVRAM_ENTRY_DATA
+QLIST_ENTRY(VNVRAMEntry) next;
+} VNVRAMEntry;
+
+struct VNVRAM {
+char *drv_id;/* corresponds to -drive id= on command line */
+BlockDriverState *bds;   /* bds for the VNVRAM drive */
+uint64_t end_offset; /* offset on drive where next entry will go */
+QLIST_HEAD(entries_head, VNVRAMEntry) entries_head; /* in-memory entries */
+QLIST_ENTRY(VNVRAM) list;
+};
+
+/* There can be multiple VNVRAMS */
+static QLIST_HEAD(, VNVRAM) vnvrams = QLIST_HEAD_INITIALIZER(vnvrams);
+
+#define VNVRAM_VERSION_1        1
+#define VNVRAM_CURRENT_VERSION  VNVRAM_VERSION_1
+#define VNVRAM_MAGIC            0x4E56524D /* NVRM */
+
+/* VNVRAM drive data consists of a header followed by entries and their blobs.
+ * For example:
+ *   | header | entry 1 | entry 1's blob | entry 2 | entry 2's blob | ... |
+ */
+typedef struct VNVRAMDrvHdr {
+uint16_t version;
+uint32_t magic;
+uint32_t num_entries;
+} QEMU_PACKED VNVRAMDrvHdr;
+
+typedef struct VNVRAMDrvEntry {
+VNVRAM_ENTRY_DATA
+} QEMU_PACKED VNVRAMDrvEntry;
+
+static int vnvram_drv_entry_create(VNVRAM *, VNVRAMEntry *, uint64_t, 
uint32_t);
+static int vnvram_drv_entry_update(VNVRAM *, VNVRAMEntry *, uint64_t, 
uint32_t);
+
+/*
+ * Macros for finding entries and their drive offsets
+ */
+#define VNVRAM_FIRST_ENTRY(vnvram) \
+QLIST_FIRST(&(vnvram)->entries_head)
+
+#define VNVRAM_NEXT_ENTRY(cur_entry) \
+QLIST_NEXT(cur_entry, next)
+
+#define VNVRAM_FIRST_ENTRY_OFFSET() \
+sizeof(VNVRAMDrvHdr)
+
+#define VNVRAM_NEXT_ENTRY_OFFSET(entry) \
+((entry)->blob_offset + (entry)->max_size)
+
+#define VNVRAM_NEXT_AVAIL_BLOB_OFFSET(vnvram) \
+((vnvram)->end_offset + sizeof(VNVRAMDrvEntry))
+
+#define VNVRAM_ENTRY_OFFSET_FROM_BLOB(blob_offset) \
+(blob_offset - sizeof(VNVRAMDrvEntry))
+
+#define VNVRAM_BLOB_OFFSET_FROM_ENTRY(entry_offset) \
+(entry_offset + sizeof(VNVRAMDrvEntry))
+
+/***************************** VNVRAM drv *******************************/
+/* Low-level VNVRAM functions that work with the drive header and       */
+/* entries.                                                              */
+/*************************************************************************/
+
+/*
+ * Big-endian conversions
+ */
+static void vnvram_drv_hdr_cpu_to_be(VNVRAMDrvHdr *hdr)
+{
+hdr->version = cpu_to_be16(hdr->version);
+hdr->magic = cpu_to_be32(hdr->magic);
+hdr->num_entries = cpu_to_be32(hdr->num_entries);
+}
+
+static void vnvram_drv_hdr_be_to_cpu(VNVRAMDrvHdr *hdr)
+{
+hdr->version = be16_to_cpu(hdr->version);
+hdr->magic = be32_to_cpu(hdr->magic);
+hdr->num_entries = be32_to_cpu(hdr->num_entries);
+}
+
+static void vnvram_drv_entry_cpu_to_be(VNVRAMDrvEntry *drv_entry)
+{
+drv_entry->blob_offset = cpu_to_be64(drv_entry->blob_offset);
+drv_entry->cur_size = cpu_to_be32(drv_entry->cur_size);
+drv_entry->max_size = cpu_to_be32(drv_entry->max_size);
+}
+
+static void vnvram_drv_entry_be_to_cpu(VNVRAMDrvEntry *drv_entry)
+{
+drv_entry->blob_offset = be64_to_cpu(drv_entry->blob_offset);
+drv_entry->cur_size = be32_to_cpu

Re: [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details

2013-05-23 Thread Corey Bryant



On 05/23/2013 01:59 PM, Eric Blake wrote:

On 05/23/2013 11:44 AM, Corey Bryant wrote:

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---


Might help to list a sample HMP or QMP usage in the commit message.
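(Illustration only, not something posted in the thread: with this patch, the
HMP output of hmp_info_vnvram(), using the numbers from the QMP example
further down, would look roughly like:)

(qemu) info vnvram
vnvram0: drive-id=drive-ide0-0-0 virtual-disk-size=2000896 vnvram-size=2050
  entry0: name=this-entry cur-size=2048 max-size=21504
  entry1: name=that-entry cur-size=1024 max-size=21504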


+++ b/qapi-schema.json
@@ -3619,3 +3619,50 @@
  '*cpuid-input-ecx': 'int',
  'cpuid-register': 'X86CPURegister32',
  'features': 'int' } }
+
+# @VNVRAMEntryInfo:
+#
+# Information about an entry in the VNVRAM.
+#
+# @name: name of the entry
+#
+# @cur-size: current size of the entry's blob in bytes
+#
+# @max-size: max size of the entry's blob in bytes
+#
+# Since: 1.6
+#
+##
+{ 'type': 'VNVRAMEntryInfo',
+  'data': {'name': 'str', 'cur-size': 'int', 'max-size': 'int', } }


No trailing commas in JSON.  :(



I'll fix that.


+
+##
+# @VNVRAMInfo:
+#
+# Information about the VNVRAM device.
+#
+# @drive-id: ID of the VNVRAM (and associated drive)
+#
+# @virtual-disk-size: Virtual size of the associated disk drive in bytes
+#
+# @vnvram-size: Size of the VNVRAM in bytes
+#
+# @entries: Array of @VNVRAMEntryInfo
+#
+# Since: 1.6
+#
+##
+{ 'type': 'VNVRAMInfo',
+  'data': {'drive-id': 'str', 'virtual-disk-size': 'int',
+   'vnvram-size': 'int', 'entries' : ['VNVRAMEntryInfo']} }
+
+##
+# @query-vnvram:
+#
+# Return information about the VNVRAM devices.
+#
+# Returns: @VNVRAMInfo on success
+#
+# Since: 1.6
+##
+{ 'command': 'query-vnvram', 'returns': ['VNVRAMInfo'] }


Other than that, this looks fine from an interface point of view.  I
haven't closely reviewed code, though.


+
+Example:
+
+-> { "execute": "query-vnvram" }
+<- { "return": [
+  { "vnvram-size": 2050, "virtual-disk-size": 2000896,
+"drive-id": "drive-ide0-0-0",
+"entries": [
+ { "name": "this-entry", "cur-size": 2048, "max-size": 21504 },
+ { "name": "that-entry", "cur-size": 1024, "max-size": 21504 },
+ { "name": "other-entry", "cur-size": 4096, "max-size": 41472 } ]
+  } ]
+   }


Looks reasonable.



Thanks for the review!

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage

2013-05-23 Thread Corey Bryant



On 05/23/2013 02:03 PM, Anthony Liguori wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


This patch series provides VNVRAM persistent storage support that
QEMU can use internally.  The initial target user will be a software
vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
reboot/migrate and retain the keys.

This support uses QEMU's block driver to provide persistent storage
by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
drive image is provided with the -drive command line option just like
any other drive image and the vnvram_create() API will find it.

The APIs allow for VNVRAM entries to be registered, one at a time,
each with a maximum blob size.  Entry blobs can then be read/written
from/to an entry on the drive.  Here's an example of usage:


I still don't get why this needs to exist.  This doesn't map to any
hardware concept I know of.

Why can't the vTPM manage on its own how it stores blobs in its flash
memory?  I think we're adding an unneeded layer of abstraction here.

Regards,

Anthony Liguori



One of the difficulties in virtualizing a TPM is that it doesn't support 
SR-IOV.  So the existing passthrough vTPM can only be used by one guest. 
 We're planning to provide a software emulated vTPM that uses libtpms 
and it needs to store blobs somewhere that is persistent.  We can't 
store blobs in the host TPM's hardware NVRAM.  So we have to virtualize 
it in software.  And we figured we'd provide a persistent storage 
mechanism that other parts of QEMU could use rather than limit it to 
just the vTPM's use.


--
Regards,
Corey Bryant



VNVRAM *vnvram;
int errcode;
const VNVRAMEntryName entry_name;
const char *blob_w = "blob data";
char *blob_r;
uint32_t blob_r_size;

vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
strcpy((char *)entry_name, "first-entry");
vnvram_register_entry(vnvram, &entry_name, 1024);
vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
vnvram_delete(vnvram);

Thanks,
Corey

Corey Bryant (7):
   vnvram: VNVRAM bdrv support
   vnvram: VNVRAM in-memory support
   vnvram: VNVRAM bottom-half r/w scheduling support
   vnvram: VNVRAM internal APIs
   vnvram: VNVRAM additional debug support
   main: Initialize VNVRAM
   monitor: QMP/HMP support for retrieving VNVRAM details

  Makefile.objs|2 +
  hmp.c|   32 ++
  hmp.h|1 +
  monitor.c|7 +
  qapi-schema.json |   47 ++
  qmp-commands.hx  |   41 ++
  vl.c |6 +
  vnvram.c | 1254 ++
  vnvram.h |   36 ++
  9 files changed, 1426 insertions(+), 0 deletions(-)
  create mode 100644 vnvram.c
  create mode 100644 vnvram.h








Re: [Qemu-devel] [RFC] Continuous work on sandboxing

2013-05-01 Thread Corey Bryant



On 05/01/2013 10:13 AM, Paul Moore wrote:

On Tuesday, April 30, 2013 04:28:54 PM Corey Bryant wrote:

Just to be clear, I'm thinking you could launch guests in one of two
different seccomp sandboxed environments:

1) Using the existing and more permissive whitelist where every QEMU
feature works:

qemu-kvm -sandbox on,default


In general, I like the comma delimited list of sandbox filters/methods/etc.
but I'm not sure we need to explicitly specify "default"; it seems like "on"
would be sufficient.  It also preserves compatibility with what we have now.



Yes, I agree.  This should definitely remain backward compatible.

--
Regards,
Corey Bryant




Re: [Qemu-devel] [RFC] Continuous work on sandboxing

2013-05-01 Thread Corey Bryant



On 05/01/2013 01:25 PM, Eduardo Otubo wrote:



On 04/30/2013 12:24 PM, Paul Moore wrote:

On Monday, April 29, 2013 05:52:10 PM Corey Bryant wrote:

On 04/26/2013 05:07 PM, Paul Moore wrote:

[snip]


3. Debugging and/or learning mode - third party libraries still
have the
problem of interfering in the Qemu's signal mask. According to some
previous discussions, perhaps patch all external libraries that
mess up
with this mask (spice, for example) is a way to solve it. But not
sure
if it's worth the time spent. Would like to hear you guys.


I think patching all the libraries is a losing battle, I think we
need to
pursue alternate debugging techniques.


I agree.  It would be nice to have some sort of learning mode that
reported all denied syscalls on a single run, but signal handlers
don't seem like the right way. Maybe we could improve on this
approach, since it never gained traction:
https://lkml.org/lkml/2013/1/7/313

At least we can get a single denied syscall at a time today via the
audit log that the kernel issues.  Eduardo, you may want to see if
there's a good place to document that for QEMU so that people know where
to look.


Lately I've been using the fact that the seccomp BPF filter result
generates
an audit log; it either dumps to syslog or the audit log (depending on
your
configuration) and seems to accomplish most of what we wanted with
SECCOMP_RET_INFO.

I'm always open to new/better ideas, but this has been working
reasonably well
for me for the past few months.


I think this feature would fit well in QEMU if we could have normal
signal handling. But external libraries interfere a lot on this matter.

Paolo, am I the first one to complain about signal handling on Qemu
(being interfered by other libraries)? I believe this may cause some
trouble in other parts of the project as well. Wouldn't be this a good
time to, perhaps, just think about a signal handling refactoring?



You don't need signal handling to use what Paul was talking about above. 
 I think that should be enough for -sandbox purposes, but perhaps you 
could document it somewhere for QEMU.


--
Regards,
Corey Bryant




Re: [Qemu-devel] [RFC] Continuous work on sandboxing

2013-04-30 Thread Corey Bryant



On 04/30/2013 02:47 PM, Eduardo Otubo wrote:



On 04/29/2013 07:02 PM, Corey Bryant wrote:



On 04/29/2013 02:39 PM, Eduardo Otubo wrote:



On 04/26/2013 06:07 PM, Paul Moore wrote:

On Friday, April 26, 2013 03:39:33 PM Eduardo Otubo wrote:

Hello folks,

Resuming the sandboxing work, I'd  like to ask for comments on the
ideias I have:

1. Reduce whitelist to the optimal subset: Run various tests on Qemu
with different configurations to reduce to the smallest syscall set
possible; test and send a patch weekly (this is already being
performed
and a patch is on the way)


Is this hooked into a testing framework?  While it is always nice to
have
someone verify the correctness, having a simple tool/testsuite that
can run
through things on a regular basis is even better.


Unfortunately it is currently not. I'm running the tests manually, but I
have in mind some ideas to implement a tool for this purpose.



How about testing in KVM autotest?  I assume it would be as simple as
modifying some existing tests to use -sandbox on.  We definitely should
get some automated regression tests running with seccomp on.



Also, looking a bit further ahead, it might be interesting to look at
removing
some of the arch dependent stuff in qemu-seccomp.c.  The latest
version of
libseccomp should remove the need for many, if not all, of the arch
specific
#ifdefs and the next version of libseccomp will add support for x32
and ARM.


Tell me more about this. You're saying I can remove the #ifdefs and keep
the lines like { SCMP_SYS(getresuid32), 241 },  or address these
syscalls in another way?




2. Introduce a second whitelist - the whitelist should be defined in
libvirt and passed on to qemu or just pre defined in Qemu? Also remove
execve() and avoid open() and socket() and its parameters ...


If I'm understanding you correctly, I think what you'll want is a
second
*blacklist*.  We talked about this previously; we currently have a
single
whitelist, and considering how seccomp works, you can really only
further
restrict things after you install a whitelist into the kernel (hence
the
blacklist).


Yes, that's exactly what I'm planning to do.



Hmm, I thought you were going to introduce a completely new whitelist so
that a guest could optionally be run under:
1) the existing sandbox environment where everything in QEMU works,
*or*
2) a new tighter and more restricted sandbox environment where things
like execve() is denied, open() is denied (once the pre-req's are in
place for fd passing), and potentially other dangerous syscalls are
denied.


I think we're talking about the same thing here. I believe the execution


I think so, but I'm not entirely sure.


flow will happen like this: 1) first whitelist installed, only few
syscalls allowed. 2) qemu starts 3) given the current scenario (the
current list of syscalls allowed) the second *blacklist* is installed,
denying execve and open. 4) start guests.


Yes, you could implement the new whitelist this way.



At the end of step 3, we'll have the same environment we have at step 1,
without execve and open. Is that correct?



If the whitelist for #2 was passed from libvirt to qemu then libvirt
could define the syscalls and syscall parameters that are denied.





Just to be clear, I'm thinking you could launch guests in one of two 
different seccomp sandboxed environments:


1) Using the existing and more permissive whitelist where every QEMU 
feature works:


qemu-kvm -sandbox on,default

2) A more restricted whitelist environment that doesn't allow all QEMU 
features to work.  It would be limited to the whitelist in 1 and it 
would also deny things like execve(), open(), socket(), certain ioctl() 
parameters, and may only allow reads/writes to specific fds, and/or block 
anything else that could be dangerous:


qemu-kvm -sandbox on,restricted

I'm just throwing these command line options and syscalls out there. 
And maybe it makes more sense for libvirt to pass the syscalls and 
parameters to QEMU so that libvirt can determine the parameters to 
restrict, like fd's the guest is allowed to read/write.


Here's another thread where this was discussed:
http://www.redhat.com/archives/libvir-list/2013-April/msg01501.html
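
(Illustration only, not from the thread: a minimal standalone sketch of the
whitelist-then-blacklist stacking discussed above, using the stock libseccomp
calls seccomp_init()/seccomp_rule_add()/seccomp_load().  This is not QEMU
code; in QEMU the second filter would be layered on top of the existing
-sandbox whitelist, which stacked seccomp filters can only further restrict.)

/* blacklist-sketch.c -- build with: gcc blacklist-sketch.c -lseccomp */
#include <seccomp.h>
#include <stdio.h>

int main(void)
{
    /* Second, tighter filter: default-allow, but kill the process on the
     * syscalls we want to deny.  An earlier whitelist stays in effect. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
    if (!ctx) {
        return 1;
    }
    if (seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(execve), 0) < 0 ||
        seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(open), 0) < 0 ||
        seccomp_load(ctx) < 0) {
        seccomp_release(ctx);
        return 1;
    }
    seccomp_release(ctx);

    printf("restricted filter installed; execve()/open() now denied\n");
    return 0;
}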

--
Regards,
Corey Bryant




Re: [Qemu-devel] [RFC] Continuous work on sandboxing

2013-04-29 Thread Corey Bryant



On 04/26/2013 05:07 PM, Paul Moore wrote:

[snip]


3. Debugging and/or learning mode - third party libraries still have the
problem of interfering in the Qemu's signal mask. According to some
previous discussions, perhaps patch all external libraries that mess up
with this mask (spice, for example) is a way to solve it. But not sure
if it's worth the time spent. Would like to hear you guys.

I think patching all the libraries is a losing battle, I think we need to
pursue alternate debugging techniques.

-- paul moore security and virtualization @ redhat



I agree.  It would be nice to have some sort of learning mode that 
reported all denied syscalls on a single run, but signal handlers 
don't seem like the right way.  Maybe we could improve on this 
approach, since it never gained traction: https://lkml.org/lkml/2013/1/7/313


At least we can get a single denied syscall at a time today via the 
audit log that the kernel issues.  Eduardo, you may want to see if 
there's a good place to document that for QEMU so that people know where 
to look.


--
Regards,
Corey Bryant




Re: [Qemu-devel] [RFC] Continuous work on sandboxing

2013-04-29 Thread Corey Bryant



On 04/29/2013 02:39 PM, Eduardo Otubo wrote:



On 04/26/2013 06:07 PM, Paul Moore wrote:

On Friday, April 26, 2013 03:39:33 PM Eduardo Otubo wrote:

Hello folks,

Resuming the sandboxing work, I'd  like to ask for comments on the
ideias I have:

1. Reduce whitelist to the optimal subset: Run various tests on Qemu
with different configurations to reduce to the smallest syscall set
possible; test and send a patch weekly (this is already being performed
and a patch is on the way)


Is this hooked into a testing framework?  While it is always nice to have
someone verify the correctness, having a simple tool/testsuite that
can run
through things on a regular basis is even better.


Unfortunately it is currently not. I'm running the tests manually, but I
have in mind some ideas to implement a tool for this purpose.



How about testing in KVM autotest?  I assume it would be as simple as 
modifying some existing tests to use -sandbox on.  We definitely should 
get some automated regression tests running with seccomp on.




Also, looking a bit further ahead, it might be interesting to look at
removing
some of the arch dependent stuff in qemu-seccomp.c.  The latest
version of
libseccomp should remove the need for many, if not all, of the arch
specific
#ifdefs and the next version of libseccomp will add support for x32
and ARM.


Tell me more about this. You're saying I can remove the #ifdefs and keep
the lines like { SCMP_SYS(getresuid32), 241 },  or address these
syscalls in another way?




2. Introduce a second whitelist - the whitelist should be defined in
libvirt and passed on to qemu or just pre defined in Qemu? Also remove
execve() and avoid open() and socket() and its parameters ...


If I'm understanding you correctly, I think what you'll want is a second
*blacklist*.  We talked about this previously; we currently have a single
whitelist, and considering how seccomp works, you can really only further
restrict things after you install a whitelist into the kernel (hence the
blacklist).


Yes, that's exactly what I'm planning to do.



Hmm, I thought you were going to introduce a completely new whitelist so 
that a guest could optionally be run under:

1) the existing sandbox environment where everything in QEMU works,
*or*
2) a new tighter and more restricted sandbox environment where things 
like execve() is denied, open() is denied (once the pre-req's are in 
place for fd passing), and potentially other dangerous syscalls are 
denied.


If the whitelist for #2 was passed from libvirt to qemu then libvirt 
could define the syscalls and syscall parameters that are denied.


--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH v2] tpm: Move TPM passthrough specific command line options to backend structure

2013-04-16 Thread Corey Bryant



On 04/12/2013 05:59 PM, Stefan Berger wrote:

Move the TPM passthrough specific command line options to the passthrough
backend implementation and attach them to the backend's interface structure.

Add code to tpm.c for validating the TPM command line options.

Signed-off-by: Stefan Berger stef...@linux.vnet.ibm.com

---
  tpm/tpm.c |8 
  tpm/tpm_int.h |8 
  tpm/tpm_passthrough.c |   16 
  vl.c  |   16 +---
  4 files changed, 33 insertions(+), 15 deletions(-)

Index: qemu-git.pt/vl.c
===
--- qemu-git.pt.orig/vl.c
+++ qemu-git.pt/vl.c
@@ -502,21 +502,7 @@ static QemuOptsList qemu_tpmdev_opts = {
 .implied_opt_name = "type",
  .head = QTAILQ_HEAD_INITIALIZER(qemu_tpmdev_opts.head),
  .desc = {
-{
-.name = "type",
-.type = QEMU_OPT_STRING,
-.help = "Type of TPM backend",
-},
-{
-.name = "cancel-path",
-.type = QEMU_OPT_STRING,
-.help = "Sysfs file entry for canceling TPM commands",
-},
-{
-.name = "path",
-.type = QEMU_OPT_STRING,
-.help = "Path to TPM device on the host",
-},
+/* options are defined in the TPM backends */
  { /* end of list */ }
  },
  };
Index: qemu-git.pt/tpm/tpm.c
===
--- qemu-git.pt.orig/tpm/tpm.c
+++ qemu-git.pt/tpm/tpm.c
@@ -174,6 +174,14 @@ static int configure_tpm(QemuOpts *opts)
  return 1;
  }

+/* validate backend specific opts */
+qemu_opts_validate(opts, be->opts, &local_err);
+if (error_is_set(local_err)) {
+qerror_report_err(local_err);
+error_free(local_err);
+return 1;
+}
+
   drv = be->create(opts, id);
  if (!drv) {
  return 1;
Index: qemu-git.pt/tpm/tpm_passthrough.c
===
--- qemu-git.pt.orig/tpm/tpm_passthrough.c
+++ qemu-git.pt/tpm/tpm_passthrough.c
@@ -512,8 +512,24 @@ static void tpm_passthrough_destroy(TPMB
  g_free(tpm_pt-tpm_dev);
  }

+static const QemuOptDesc tpm_passthrough_cmdline_opts[] = {
+TPM_STANDARD_CMDLINE_OPTS,
+{
+.name = "cancel-path",
+.type = QEMU_OPT_STRING,
+.help = "Sysfs file entry for canceling TPM commands",
+},
+{
+.name = "path",
+.type = QEMU_OPT_STRING,
+.help = "Path to TPM device on the host",
+},
+{ /* end of list */ },
+};
+
  const TPMDriverOps tpm_passthrough_driver = {
  .type = TPM_TYPE_PASSTHROUGH,
+.opts = tpm_passthrough_cmdline_opts,
  .desc = tpm_passthrough_create_desc,
  .create   = tpm_passthrough_create,
  .destroy  = tpm_passthrough_destroy,
Index: qemu-git.pt/tpm/tpm_int.h
===
--- qemu-git.pt.orig/tpm/tpm_int.h
+++ qemu-git.pt/tpm/tpm_int.h
@@ -35,6 +35,7 @@ struct TPMState {

  struct TPMDriverOps {
  enum TpmType type;
+const QemuOptDesc *opts;
  /* get a descriptive text of the backend to display to the user */
  const char *(*desc)(void);

@@ -59,6 +60,13 @@ struct TPMDriverOps {
  bool (*get_tpm_established_flag)(TPMBackend *t);
  };

+#define TPM_STANDARD_CMDLINE_OPTS \
+{ \
+.name = "type", \
+.type = QEMU_OPT_STRING, \
+.help = "Type of TPM backend", \
+}
+
  struct tpm_req_hdr {
  uint16_t tag;
  uint32_t len;




Reviewed-by: Corey Bryant cor...@linux.vnet.ibm.com

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH] Move TPM passthrough specific command line options to backend structure

2013-04-10 Thread Corey Bryant



On 04/02/2013 01:30 PM, Stefan Berger wrote:

Move the TPM passthrough specific command line options to the passthrough
backend implementation and attach them to the backend's interface structure.

Add code to tpm.c for validating the TPM command line options.

Signed-off-by: Stefan Berger stef...@linux.vnet.ibm.com

---
  tpm/tpm.c |   10 +-
  tpm/tpm_int.h |8 
  tpm/tpm_passthrough.c |   16 
  vl.c  |   15 ---
  4 files changed, 33 insertions(+), 16 deletions(-)

Index: qemu-git.pt/vl.c
===
--- qemu-git.pt.orig/vl.c
+++ qemu-git.pt/vl.c
@@ -502,21 +502,6 @@ static QemuOptsList qemu_tpmdev_opts = {
 .implied_opt_name = "type",
  .head = QTAILQ_HEAD_INITIALIZER(qemu_tpmdev_opts.head),
  .desc = {
-{
-.name = "type",
-.type = QEMU_OPT_STRING,
-.help = "Type of TPM backend",
-},
-{
-.name = "cancel-path",
-.type = QEMU_OPT_STRING,
-.help = "Sysfs file entry for canceling TPM commands",
-},
-{
-.name = "path",
-.type = QEMU_OPT_STRING,
-.help = "Path to TPM device on the host",
-},
  { /* end of list */ }


Does this need a comment to say where these options are now defined?


  },
  };
Index: qemu-git.pt/tpm/tpm.c
===
--- qemu-git.pt.orig/tpm/tpm.c
+++ qemu-git.pt/tpm/tpm.c
@@ -146,7 +146,7 @@ static int configure_tpm(QemuOpts *opts)
  const char *id;
  const TPMDriverOps *be;
  TPMBackend *drv;
-Error *local_err = NULL;
+Error *local_err = NULL, *errp = NULL;


Can't you use local_err in the new code below instead of creating a new 
variable?




   if (!QLIST_EMPTY(&tpm_backends)) {
   error_report("Only one TPM is allowed.\n");
@@ -174,6 +174,14 @@ static int configure_tpm(QemuOpts *opts)
  return 1;
  }

+/* validate backend specific opts */
+qemu_opts_validate(opts, be->opts, &errp);
+if (error_is_set(errp)) {
+qerror_report_err(errp);
+error_free(errp);
+return 1;
+}
+


This looks fine to me but I see this is the first call to 
qemu_opts_validate() in QEMU.  I wonder why.



   drv = be->create(opts, id);
  if (!drv) {
  return 1;
Index: qemu-git.pt/tpm/tpm_passthrough.c
===
--- qemu-git.pt.orig/tpm/tpm_passthrough.c
+++ qemu-git.pt/tpm/tpm_passthrough.c
@@ -512,8 +512,24 @@ static void tpm_passthrough_destroy(TPMB
  g_free(tpm_pt-tpm_dev);
  }

+static const QemuOptDesc tpm_passthrough_cmdline_opts[] = {
+TPM_STANDARD_CMDLINE_OPTS,
+{
+.name = "cancel-path",
+.type = QEMU_OPT_STRING,
+.help = "Sysfs file entry for canceling TPM commands",
+},
+{
+.name = "path",
+.type = QEMU_OPT_STRING,
+.help = "Path to TPM device on the host",
+},
+{ /* end of list */ },
+};
+
  const TPMDriverOps tpm_passthrough_driver = {
  .type = TPM_TYPE_PASSTHROUGH,
+.opts = tpm_passthrough_cmdline_opts,
  .desc = tpm_passthrough_create_desc,
  .create   = tpm_passthrough_create,
  .destroy  = tpm_passthrough_destroy,
Index: qemu-git.pt/tpm/tpm_int.h
===
--- qemu-git.pt.orig/tpm/tpm_int.h
+++ qemu-git.pt/tpm/tpm_int.h
@@ -40,6 +40,7 @@ typedef void (TPMRecvDataCB)(TPMState *,

  struct TPMDriverOps {
  enum TpmType type;
+const QemuOptDesc *opts;
  /* get a descriptive text of the backend to display to the user */
  const char *(*desc)(void);

@@ -64,6 +65,13 @@ struct TPMDriverOps {
  bool (*get_tpm_established_flag)(TPMBackend *t);
  };

+#define TPM_STANDARD_CMDLINE_OPTS \
+{ \
+.name = "type", \
+.type = QEMU_OPT_STRING, \
+.help = "Type of TPM backend", \
+}
+
  struct tpm_req_hdr {
  uint16_t tag;
  uint32_t len;




--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH 2/2] Provide ACPI SSDT table for TPM device

2013-04-03 Thread Corey Bryant



On 04/03/2013 06:23 AM, Michael S. Tsirkin wrote:

On Wed, Apr 03, 2013 at 11:54:57AM +0200, Laszlo Ersek wrote:

On 04/02/13 16:34, Corey Bryant wrote:



On 04/01/2013 08:11 PM, Kevin O'Connor wrote:

On Mon, Apr 01, 2013 at 03:05:55PM -0400, Corey Bryant wrote:

On 03/28/2013 05:03 AM, Paolo Bonzini wrote:

There is work on moving ACPI tables to QEMU.  Please work with the
other
developers (Kevin of course, and Michael and Laszlo who I have CCed) on
this.


Kevin, Do you have a preference whether we add these ACPI tables to
SeaBIOS vs QEMU?  It seems there are still a lot of ACPI tables in
SeaBIOS and this adds probably 200 bytes of code and data.
Basically it creates the TPM's SSDT and TCPA (for logging) and
connects them to the RSDT.


The goal is to move all of the ACPI tables from SeaBIOS to QEMU.  So -
yes - my preference would be to add these to QEMU once the transition
is complete.

-Kevin



Ok I'll hold off until the transition is complete.  Thanks.


If that translates to until Laszlo submits patches that are accepted,
then don't wait. I have no idea when I'll manage that and I'd hate to
block your work.

Laszlo


I'm working on that too btw :) Hard to promise a schedule but
it's high on my agenda.



Great, thanks.  I'll watch for patches.  I'd prefer to follow your lead 
on this.


--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH 2/2] Provide ACPI SSDT table for TPM device

2013-04-02 Thread Corey Bryant



On 04/01/2013 08:11 PM, Kevin O'Connor wrote:

On Mon, Apr 01, 2013 at 03:05:55PM -0400, Corey Bryant wrote:

On 03/28/2013 05:03 AM, Paolo Bonzini wrote:

There is work on moving ACPI tables to QEMU.  Please work with the other
developers (Kevin of course, and Michael and Laszlo who I have CCed) on
this.


Kevin, Do you have a preference whether we add these ACPI tables to
SeaBIOS vs QEMU?  It seems there are still a lot of ACPI tables in
SeaBIOS and this adds probably 200 bytes of code and data.
Basically it creates the TPM's SSDT and TCPA (for logging) and
connects them to the RSDT.


The goal is to move all of the ACPI tables from SeaBIOS to QEMU.  So -
yes - my preference would be to add these to QEMU once the transition
is complete.

-Kevin



Ok I'll hold off until the transition is complete.  Thanks.

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH 2/2] Provide ACPI SSDT table for TPM device

2013-04-01 Thread Corey Bryant



On 03/28/2013 05:03 AM, Paolo Bonzini wrote:

Il 26/03/2013 15:14, Corey Bryant ha scritto:

This patch provides ACPI support for the TPM device. It probes for the TPM
device and only if a TPM device is found then the TPM's SSDT and TCPA table
are created. This patch also connects them to the RSDT.

Since the logging area in the TCPA table requires 64kb, the memory reserved
for ACPI tables (config.h) is increased to 96kb.

The IRQ description in the TPM's SSDT is commented since it will be
'safer' to run the TPM in polling mode - the Linux TPM TIS driver for example
has too many issues when run in interrupt mode.

The description of the TCPA (client) table can be found here:

http://www.trustedcomputinggroup.org/resources/server_work_group_acpi_general_specification_version_10

The compiled SSDT description is also part of this patch.


There is work on moving ACPI tables to QEMU.  Please work with the other
developers (Kevin of course, and Michael and Laszlo who I have CCed) on
this.

Paolo



Kevin, Do you have a preference whether we add these ACPI tables to 
SeaBIOS vs QEMU?  It seems there are still a lot of ACPI tables in 
SeaBIOS and this adds probably 200 bytes of code and data.  Basically it 
creates the TPM's SSDT and TCPA (for logging) and connects them to the RSDT.


--
Regards,
Corey Bryant


Signed-off-by: Stefan Berger stef...@linux.vnet.ibm.com
Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
Version history from prior patch submission:

v6:
   - following Andreas Niederl's suggestion: enclosing Device(TPM) in
 Scope(\_SB) { ... }  to have Linux 2.6.33 recognize the device properly;
 seems to work fine with 3.0.0

v2:
   - Increasing the CONFIG_MAX_HIGHTABLE to 96kb
   - Adding cut-down tcgbios.c|h to keep SeaBIOS compiling
   - Build tpm_drivers.c and tcgbios.c
 - TPM's SSDT and TCPA tables are now visible in Linux
---
  Makefile  |9 +-
  src/acpi-tpm-ssdt.dsl |   24 +
  src/acpi-tpm-ssdt.hex |   27 +++
  src/acpi.c|   41 
  src/acpi.h|   20 ++
  src/config.h  |2 +-
  src/tcgbios.c |   70 +
  src/tcgbios.h |   57 +++
  8 files changed, 248 insertions(+), 2 deletions(-)
  create mode 100644 src/acpi-tpm-ssdt.dsl
  create mode 100644 src/acpi-tpm-ssdt.hex
  create mode 100644 src/tcgbios.c
  create mode 100644 src/tcgbios.h

diff --git a/Makefile b/Makefile
index 759..2807010 100644
--- a/Makefile
+++ b/Makefile
@@ -18,7 +18,7 @@ SRC16=$(SRCBOTH) system.c disk.c font.c
  SRC32FLAT=$(SRCBOTH) post.c shadow.c memmap.c pmm.c coreboot.c boot.c \
  acpi.c smm.c mptable.c pirtable.c smbios.c pciinit.c optionroms.c mtrr.c \
  lzmadecode.c bootsplash.c jpeg.c usb-hub.c paravirt.c \
-biostables.c xen.c bmp.c romfile.c csm.c
+biostables.c xen.c bmp.c romfile.c csm.c tcgbios.c tpm_drivers.c
  SRC32SEG=util.c output.c pci.c pcibios.c apm.c stacks.c

  # Default compiler flags
@@ -215,6 +215,13 @@ $(OUT)vgabios.bin: $(OUT)vgabios.bin.raw tools/buildrom.py
  iasl-option=$(shell if test -z `$(1) $(2) 2>&1 > /dev/null` \
  ; then echo $(2); else echo $(3); fi ;)

+src/acpi-tpm-ssdt.hex: src/acpi-tpm-ssdt.dsl
+   @echo Compiling TPM SSDT
+   $(Q)cpp -P $< > $(OUT)$*.dsl.i
+   $(Q)iasl -tc -p $(OUT)$* $(OUT)$*.dsl.i
+   $(Q)cp $(OUT)$*.hex $@
+   $(Q)sed -i 's/AmlCode/AmlCode_TPM/' $@
+
  $(OUT)%.hex: src/%.dsl ./tools/acpi_extract_preprocess.py 
./tools/acpi_extract.py
@echo   Compiling IASL $@
 	$(Q)$(CPP) $(CPPFLAGS) $< -o $(OUT)$*.dsl.i.orig
diff --git a/src/acpi-tpm-ssdt.dsl b/src/acpi-tpm-ssdt.dsl
new file mode 100644
index 000..080bae4
--- /dev/null
+++ b/src/acpi-tpm-ssdt.dsl
@@ -0,0 +1,24 @@
+DefinitionBlock (
+"acpi-tpm-ssdt.aml",// Output Filename
+"SSDT", // Signature
+0x01,   // SSDT Compliance Revision
+"BXPC", // OEMID
+"BXSSDT",   // TABLE ID
+0x1 // OEM Revision
+)
+{
+Scope(\_SB) {
+/* TPM with emulated TPM TIS interface */
+Device (TPM) {
+Name (_HID, EisaID ("PNP0C31"))
+Name (_CRS, ResourceTemplate ()
+{
+Memory32Fixed (ReadWrite, 0xFED40000, 0x5000)
+//IRQNoFlags () {5}
+})
+Method (_STA, 0, NotSerialized) {
+Return (0x0F)
+}
+}
+}
+}
diff --git a/src/acpi-tpm-ssdt.hex b/src/acpi-tpm-ssdt.hex
new file mode 100644
index 000..acbb78f
--- /dev/null
+++ b/src/acpi-tpm-ssdt.hex
@@ -0,0 +1,27 @@
+/*
+ *
+ * Intel ACPI Component Architecture
+ * ASL Optimizing Compiler version 20101013-64 [Nov 21 2010]
+ * Copyright (c) 2000 - 2010 Intel Corporation
+ *
+ * Compilation of out/.dsl.i - Mon Jan 30 16:03:18 2012
+ *
+ * C source code output
+ * AML code block

Re: [Qemu-devel] vNVRAM / blobstore design

2013-03-27 Thread Corey Bryant



On 03/25/2013 06:20 PM, Stefan Berger wrote:

On 03/25/2013 06:05 PM, Anthony Liguori wrote:

Stefan Berger stef...@linux.vnet.ibm.com writes:


[argh, just posted this to qemu-trivial -- it's not trivial]


Hello!

I am posting this message to revive the previous discussions about the
design of vNVRAM / blobstore cc'ing (at least) those that participated
in this discussion 'back then'.

The first goal of the implementation is to provide an vNVRAM storage for
a software implementation of a TPM to store its different blobs into.
Some of the data that the TPM writes into persistent memory needs to
survive a power down / power up cycle of a virtual machine, therefore
this type of persistent storage is needed. For the vNVRAM not to become
a road-block for VM migration, we would make use of block device
migration and layer the vNVRAM on top of the block device, therefore
using virtual machine images for storing the vNVRAM data.

Besides the TPM blobs the vNVRAM should of course also be able to
accommodate other use cases where persistent data is stored into
NVRAM,

Well let's focus more on the blob store.  What are the semantics of
this?  Is there a max number of blobs?  Are the sizes fixed or variable?
How often are new blobs added/removed?


The max number of blobs and frequency of usage depends on the usage 
scenario and NVRAM size.  But that's probably obvious.


I think we should focus on worst case scenarios where NVRAM is filled up 
and used frequently.


One example is that an application can use TSS APIs to define, undefine, 
read, and write to the TPM's NVRAM storage.  (The TPM owner password is 
required to define NVRAM data.)  An application could potentially fill 
up NVRAM and frequently store, change, retrieve data in various places 
within NVRAM.  And the data could have various sizes.


For an example of total NVRAM size, Infineon's TPM has 16K of NVRAM.

--
Regards,
Corey Bryant



In case of TPM 1.2 there are 3 blobs that can be written at different
times for different reasons.

Examples: As with a real-world TPM users loading an owner-evict key into
the TPM will cause the TPM to write that owner-evict key into is own
NVRAM. This key survives a power-off of the machine. Further, the TPM
models its own NVRAM slots. Someone writing into this type of memory
will cause data to be written into the NVRAM. There are other commands
that the TPM offers that will cause data to be written into NVRAM which
users can invoke at any time.

The sizes of the NVRAM blobs of the TPM at least vary in size but I
handle this in the TPM emulation to pad them to fixed size. Depending on
how many such owner-evict keys are loaded into the TPM its permanent
state blob size may vary. Other devices may act differently.

We have a-priori knowledge about  the 3 different types of blobs the TPM
device produces. They are 'registered' once at the beginning (see API)
and are not 'removed' as such.

Regards,
 Stefan






Re: [Qemu-devel] vNVRAM / blobstore design

2013-03-27 Thread Corey Bryant



On 03/27/2013 11:17 AM, Corey Bryant wrote:



On 03/25/2013 06:20 PM, Stefan Berger wrote:

On 03/25/2013 06:05 PM, Anthony Liguori wrote:

Stefan Berger stef...@linux.vnet.ibm.com writes:


[argh, just posted this to qemu-trivial -- it's not trivial]


Hello!

I am posting this message to revive the previous discussions about the
design of vNVRAM / blobstore cc'ing (at least) those that participated
in this discussion 'back then'.

The first goal of the implementation is to provide an vNVRAM storage
for
a software implementation of a TPM to store its different blobs into.
Some of the data that the TPM writes into persistent memory needs to
survive a power down / power up cycle of a virtual machine, therefore
this type of persistent storage is needed. For the vNVRAM not to become
a road-block for VM migration, we would make use of block device
migration and layer the vNVRAM on top of the block device, therefore
using virtual machine images for storing the vNVRAM data.

Besides the TPM blobs the vNVRAM should of course also be able to
accommodate other use cases where persistent data is stored into
NVRAM,

Well let's focus more on the blob store.  What are the semantics of
this?  Is there a max number of blobs?  Are the sizes fixed or variable?
How often are new blobs added/removed?


The max number of blobs and frequency of usage depends on the usage
scenario and NVRAM size.  But that's probably obvious.

I think we should focus on worst case scenarios where NVRAM is filled up
and used frequently.

One example is that an application can use TSS APIs to define, undefine,
read, and write to the TPM's NVRAM storage.  (The TPM owner password is
required to define NVRAM data.)  An application could potentially fill
up NVRAM and frequently store, change, retrieve data in various places
within NVRAM.  And the data could have various sizes.

For an example of total NVRAM size, Infineon's TPM has 16K of NVRAM.

--
Regards,
Corey Bryant



I just wanted to add that we could really use some direction on which 
way the community would prefer we go with this.  The 2 options that are 
on the table at the moment for encoding/decoding the vNVRAM byte stream 
are BER or JSON visitors.
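
(Purely to make that trade-off concrete -- the layout below is hypothetical,
not something proposed in the thread: with the JSON visitor a single
directory entry might serialize as

  { "name": "tpm-permanent-state", "cur-size": 2048, "max-size": 21504,
    "blob": "<base64 data>" }

while BER would carry the same fields as tagged binary TLVs, trading human
readability for compactness and avoiding base64 overhead for the blobs.)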


--
Regards,
Corey Bryant



In case of TPM 1.2 there are 3 blobs that can be written at different
times for different reasons.

Examples: As with a real-world TPM users loading an owner-evict key into
the TPM will cause the TPM to write that owner-evict key into is own
NVRAM. This key survives a power-off of the machine. Further, the TPM
models its own NVRAM slots. Someone writing into this type of memory
will cause data to be written into the NVRAM. There are other commands
that the TPM offers that will cause data to be written into NVRAM which
users can invoke at any time.

The sizes of the NVRAM blobs of the TPM at least vary in size but I
handle this in the TPM emulation to pad them to fixed size. Depending on
how many such owner-evict keys are loaded into the TPM its permanent
state blob size may vary. Other devices may act differently.

We have a-priori knowledge about  the 3 different types of blobs the TPM
device produces. They are 'registered' once at the beginning (see API)
and are not 'removed' as such.

Regards,
 Stefan








[Qemu-devel] [PATCH 2/2] Provide ACPI SSDT table for TPM device

2013-03-26 Thread Corey Bryant
This patch provides ACPI support for the TPM device. It probes for the TPM
device and only if a TPM device is found then the TPM's SSDT and TCPA table
are created. This patch also connects them to the RSDT.

Since the logging area in the TCPA table requires 64kb, the memory reserved
for ACPI tables (config.h) is increased to 96kb.

The IRQ description in the TPM's SSDT is commented since it will be
'safer' to run the TPM in polling mode - the Linux TPM TIS driver for example
has too many issues when run in interrupt mode.

The description of the TCPA (client) table can be found here:

http://www.trustedcomputinggroup.org/resources/server_work_group_acpi_general_specification_version_10

The compiled SSDT description is also part of this patch.

Signed-off-by: Stefan Berger stef...@linux.vnet.ibm.com
Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
Version history from prior patch submission:

v6:
  - following Andreas Niederl's suggestion: enclosing Device(TPM) in
Scope(\_SB) { ... }  to have Linux 2.6.33 recognize the device properly;
seems to work fine with 3.0.0

v2:
  - Increasing the CONFIG_MAX_HIGHTABLE to 96kb
  - Adding cut-down tcgbios.c|h to keep SeaBIOS compiling
  - Build tpm_drivers.c and tcgbios.c
- TPM's SSDT and TCPA tables are now visible in Linux
---
 Makefile  |9 +-
 src/acpi-tpm-ssdt.dsl |   24 +
 src/acpi-tpm-ssdt.hex |   27 +++
 src/acpi.c|   41 
 src/acpi.h|   20 ++
 src/config.h  |2 +-
 src/tcgbios.c |   70 +
 src/tcgbios.h |   57 +++
 8 files changed, 248 insertions(+), 2 deletions(-)
 create mode 100644 src/acpi-tpm-ssdt.dsl
 create mode 100644 src/acpi-tpm-ssdt.hex
 create mode 100644 src/tcgbios.c
 create mode 100644 src/tcgbios.h

diff --git a/Makefile b/Makefile
index 759..2807010 100644
--- a/Makefile
+++ b/Makefile
@@ -18,7 +18,7 @@ SRC16=$(SRCBOTH) system.c disk.c font.c
 SRC32FLAT=$(SRCBOTH) post.c shadow.c memmap.c pmm.c coreboot.c boot.c \
 acpi.c smm.c mptable.c pirtable.c smbios.c pciinit.c optionroms.c mtrr.c \
 lzmadecode.c bootsplash.c jpeg.c usb-hub.c paravirt.c \
-biostables.c xen.c bmp.c romfile.c csm.c
+biostables.c xen.c bmp.c romfile.c csm.c tcgbios.c tpm_drivers.c
 SRC32SEG=util.c output.c pci.c pcibios.c apm.c stacks.c
 
 # Default compiler flags
@@ -215,6 +215,13 @@ $(OUT)vgabios.bin: $(OUT)vgabios.bin.raw tools/buildrom.py
iasl-option=$(shell if test -z "`$(1) $(2) 2>&1 > /dev/null`" \
; then echo "$(2)"; else echo "$(3)"; fi ;)

+src/acpi-tpm-ssdt.hex: src/acpi-tpm-ssdt.dsl
+   @echo "  Compiling TPM SSDT"
+   $(Q)cpp -P $< > $(OUT)$*.dsl.i
+   $(Q)iasl -tc -p $(OUT)$* $(OUT)$*.dsl.i
+   $(Q)cp $(OUT)$*.hex $@
+   $(Q)sed -i 's/AmlCode/AmlCode_TPM/' $@
+
$(OUT)%.hex: src/%.dsl ./tools/acpi_extract_preprocess.py \
./tools/acpi_extract.py
@echo "  Compiling IASL $@"
$(Q)$(CPP) $(CPPFLAGS) $< -o $(OUT)$*.dsl.i.orig
diff --git a/src/acpi-tpm-ssdt.dsl b/src/acpi-tpm-ssdt.dsl
new file mode 100644
index 000..080bae4
--- /dev/null
+++ b/src/acpi-tpm-ssdt.dsl
@@ -0,0 +1,24 @@
+DefinitionBlock (
+    "acpi-tpm-ssdt.aml",    // Output Filename
+    "SSDT",                 // Signature
+    0x01,                   // SSDT Compliance Revision
+    "BXPC",                 // OEMID
+    "BXSSDT",               // TABLE ID
+    0x1                     // OEM Revision
+)
+{
+    Scope(\_SB) {
+        /* TPM with emulated TPM TIS interface */
+        Device (TPM) {
+            Name (_HID, EisaID ("PNP0C31"))
+            Name (_CRS, ResourceTemplate ()
+            {
+                Memory32Fixed (ReadWrite, 0xFED40000, 0x5000)
+                //IRQNoFlags () {5}
+            })
+            Method (_STA, 0, NotSerialized) {
+                Return (0x0F)
+            }
+        }
+    }
+}
diff --git a/src/acpi-tpm-ssdt.hex b/src/acpi-tpm-ssdt.hex
new file mode 100644
index 000..acbb78f
--- /dev/null
+++ b/src/acpi-tpm-ssdt.hex
@@ -0,0 +1,27 @@
+/*
+ *
+ * Intel ACPI Component Architecture
+ * ASL Optimizing Compiler version 20101013-64 [Nov 21 2010]
+ * Copyright (c) 2000 - 2010 Intel Corporation
+ *
+ * Compilation of out/.dsl.i - Mon Jan 30 16:03:18 2012
+ *
+ * C source code output
+ * AML code block contains 0x5D bytes
+ *
+ */
+unsigned char AmlCode_TPM[] =
+{
+0x53,0x53,0x44,0x54,0x5D,0x00,0x00,0x00,  /* SSDT]... */
+0x01,0x15,0x42,0x58,0x50,0x43,0x00,0x00,  /* 0008..BXPC.. */
+0x42,0x58,0x53,0x53,0x44,0x54,0x00,0x00,  /* 0010BXSSDT.. */
+0x01,0x00,0x00,0x00,0x49,0x4E,0x54,0x4C,  /* 0018INTL */
+0x13,0x10,0x10,0x20,0x10,0x38,0x5C,0x5F,  /* 0020... .8\_ */
+0x53,0x42,0x5F,0x5B,0x82,0x30,0x54,0x50,  /* 0028SB_[.0TP */
+0x4D,0x5F,0x08,0x5F,0x48,0x49,0x44,0x0C,  /* 0030M_._HID. */
+0x41,0xD0,0x0C

[Qemu-devel] [PATCH 1/2] Add an implementation of a TPM TIS driver

2013-03-26 Thread Corey Bryant
This patch adds an implementation of a TPM TIS driver for the TPM TIS
emulation supported by QEMU (patches for passthrough vTPM are upstream in QEMU).
Usage of the driver is broken up into several functions. The driver is cleanly
separated from the rest of the code through an interface holding pointers to the
driver's functions. A client using this driver first probes whether the TPM TIS
interface is available (probe function) and then invokes the interface
functions to initialize the interface and send requests and receive responses.

Possible future extensions *could* include a virtio interface for the TPM
with a corresponding driver here.

Signed-off-by: Stefan Berger stef...@linux.vnet.ibm.com
Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
Version history from prior patch submission:

v7:
  - moving declaration of tpm_drivers[] into tpm_drivers.h

v6:
  - reworked timeouts; not hardcoded anymore

v5:
  - introducing a configurable threshold as part of the driver interface
structure below which the TPM is used for calculating the sha1

v2:
  - adapted tpm_drivers.c to be under LGPLv3
---
 src/tpm_drivers.c |  258 +
 src/tpm_drivers.h |   90 +++
 2 files changed, 348 insertions(+), 0 deletions(-)
 create mode 100644 src/tpm_drivers.c
 create mode 100644 src/tpm_drivers.h

diff --git a/src/tpm_drivers.c b/src/tpm_drivers.c
new file mode 100644
index 000..edf192d
--- /dev/null
+++ b/src/tpm_drivers.c
@@ -0,0 +1,258 @@
+// Implementation of a TPM driver for the TPM TIS interface
+//
+// Copyright (C) 2006-2013 IBM Corporation
+//
+// Authors:
+// Stefan Berger stef...@linux.vnet.ibm.com
+//
+// This file may be distributed under the terms of the GNU LGPLv3 license.
+
+#include "config.h"
+#include "util.h"
+#include "tpm_drivers.h"
+#include "tcgbios.h"
+
+static u32 tis_default_timeouts[4] = {
+TIS_DEFAULT_TIMEOUT_A,
+TIS_DEFAULT_TIMEOUT_B,
+TIS_DEFAULT_TIMEOUT_C,
+TIS_DEFAULT_TIMEOUT_D,
+};
+
+static u32 tpm_default_durations[3] = {
+TPM_DEFAULT_DURATION_SHORT,
+TPM_DEFAULT_DURATION_MEDIUM,
+TPM_DEFAULT_DURATION_LONG,
+};
+
+
+/* if device is not there, return '0', '1' otherwise */
+static u32 tis_probe(void)
+{
+u32 rc = 0;
+u32 didvid = readl(TIS_REG(0, TIS_REG_DID_VID));
+
+if ((didvid != 0) && (didvid != 0xffffffff))
+rc = 1;
+
+return rc;
+}
+
+static u32 tis_init(void)
+{
+writeb(TIS_REG(0, TIS_REG_INT_ENABLE), 0);
+
+if (tpm_drivers[TIS_DRIVER_IDX].durations == NULL) {
+u32 *durations = malloc_low(sizeof(tpm_default_durations));
+if (durations)
+memcpy(durations, tpm_default_durations,
+   sizeof(tpm_default_durations));
+else
+durations = tpm_default_durations;
+tpm_drivers[TIS_DRIVER_IDX].durations = durations;
+}
+
+if (tpm_drivers[TIS_DRIVER_IDX].timeouts == NULL) {
+u32 *timeouts = malloc_low(sizeof(tis_default_timeouts));
+if (timeouts)
+memcpy(timeouts, tis_default_timeouts,
+   sizeof(tis_default_timeouts));
+else
+timeouts = tis_default_timeouts;
+tpm_drivers[TIS_DRIVER_IDX].timeouts = timeouts;
+}
+
+return 1;
+}
+
+
+static void set_timeouts(u32 timeouts[4], u32 durations[3])
+{
+u32 *tos = tpm_drivers[TIS_DRIVER_IDX].timeouts;
+u32 *dus = tpm_drivers[TIS_DRIVER_IDX].durations;
+
+if (tos && tos != tis_default_timeouts && timeouts)
+memcpy(tos, timeouts, 4 * sizeof(u32));
+if (dus && dus != tpm_default_durations && durations)
+memcpy(dus, durations, 3 * sizeof(u32));
+}
+
+
+static u32 tis_wait_sts(u8 locty, u32 time, u8 mask, u8 expect)
+{
+u32 rc = 1;
+
+while (time > 0) {
+u8 sts = readb(TIS_REG(locty, TIS_REG_STS));
+if ((sts & mask) == expect) {
+rc = 0;
+break;
+}
+msleep(1);
+time--;
+}
+return rc;
+}
+
+static u32 tis_activate(u8 locty)
+{
+u32 rc = 0;
+u8 acc;
+int l;
+u32 timeout_a = tpm_drivers[TIS_DRIVER_IDX].timeouts[TIS_TIMEOUT_TYPE_A];
+
+if (!(readb(TIS_REG(locty, TIS_REG_ACCESS)) &
+  TIS_ACCESS_ACTIVE_LOCALITY)) {
+/* release locality in use top-downwards */
+for (l = 4; l >= 0; l--)
+writeb(TIS_REG(l, TIS_REG_ACCESS),
+   TIS_ACCESS_ACTIVE_LOCALITY);
+}
+
+/* request access to locality */
+writeb(TIS_REG(locty, TIS_REG_ACCESS), TIS_ACCESS_REQUEST_USE);
+
+acc = readb(TIS_REG(locty, TIS_REG_ACCESS));
+if ((acc & TIS_ACCESS_ACTIVE_LOCALITY)) {
+writeb(TIS_REG(locty, TIS_REG_STS), TIS_STS_COMMAND_READY);
+rc = tis_wait_sts(locty, timeout_a,
+  TIS_STS_COMMAND_READY, TIS_STS_COMMAND_READY);
+}
+
+return rc;
+}
+
+static u32 tis_find_active_locality(void)
+{
+u8 locty;
+
+for (locty = 0; locty <= 4; locty++) {
+if ((readb

[Qemu-devel] [PATCH 0/2] Add TPM driver and ACPI support to SeaBIOS

2013-03-26 Thread Corey Bryant
The following set of patches is being resubmitted to add TPM support to
SeaBIOS.  This series only includes a subset of the total seabios TPM
support that is planned.  The patches included in this series provide
initial foundational support that make sense to include at this point
now that a passthrough vTPM is available in QEMU.

In particular, these patches add:
- a TPM driver for QEMU's TPM TIS emulation
- ACPI support for the TPM device (SSDT table)
- ACPI support for measurement logging (TCPA table)

Corey Bryant (2):
  Add an implementation of a TPM TIS driver
  Provide ACPI SSDT table for TPM device

 Makefile  |9 ++-
 src/acpi-tpm-ssdt.dsl |   24 +
 src/acpi-tpm-ssdt.hex |   27 +
 src/acpi.c|   41 
 src/acpi.h|   20 
 src/config.h  |2 +-
 src/tcgbios.c |   70 +
 src/tcgbios.h |   57 +++
 src/tpm_drivers.c |  258 +
 src/tpm_drivers.h |   90 +
 10 files changed, 596 insertions(+), 2 deletions(-)
 create mode 100644 src/acpi-tpm-ssdt.dsl
 create mode 100644 src/acpi-tpm-ssdt.hex
 create mode 100644 src/tcgbios.c
 create mode 100644 src/tcgbios.h
 create mode 100644 src/tpm_drivers.c
 create mode 100644 src/tpm_drivers.h




Re: [Qemu-devel] [PATCH] qemu-bridge-helper: force usage of a very high MAC address for the bridge

2013-03-25 Thread Corey Bryant



On 03/22/2013 06:09 PM, Paolo Bonzini wrote:

Il 22/03/2013 22:37, Corey Bryant ha scritto:

Is it desirable to change a mac address under the covers?


This is the TAP MAC address.  It is unrelated to the guest's MAC
address.  It is a random link-local address; all this patch does is make
it less random.



Ah, okay then my concern isn't a concern any longer.

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH] qemu-bridge-helper: force usage of a very high MAC address for the bridge

2013-03-25 Thread Corey Bryant



On 03/22/2013 12:57 PM, Paolo Bonzini wrote:

Linux uses the lowest enslaved MAC address as the MAC address of
the bridge.  Set MAC address to a high value so that it does not
affect the MAC address of the bridge.

Changing the MAC address of the bridge could cause a few seconds
of network downtime.

Cc: qemu-sta...@nongnu.org
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
  qemu-bridge-helper.c | 18 ++
  1 file changed, 18 insertions(+)

diff --git a/qemu-bridge-helper.c b/qemu-bridge-helper.c
index 287bfd5..6a0974e 100644
--- a/qemu-bridge-helper.c
+++ b/qemu-bridge-helper.c
@@ -367,6 +367,24 @@ int main(int argc, char **argv)
  goto cleanup;
  }

+/* Linux uses the lowest enslaved MAC address as the MAC address of
+ * the bridge.  Set MAC address to a high value so that it doesn't
+ * affect the MAC address of the bridge.
+ */
+if (ioctl(ctlfd, SIOCGIFHWADDR, &ifr) < 0) {
+fprintf(stderr, "failed to get MAC address of device `%s': %s\n",
+iface, strerror(errno));
+ret = EXIT_FAILURE;
+goto cleanup;
+}
+ifr.ifr_hwaddr.sa_data[0] = 0xFE;
+if (ioctl(ctlfd, SIOCSIFHWADDR, &ifr) < 0) {
+fprintf(stderr, "failed to set MAC address of device `%s': %s\n",
+iface, strerror(errno));
+ret = EXIT_FAILURE;
+goto cleanup;
+}
+
  /* add the interface to the bridge */
  prep_ifreq(ifr, bridge);
  ifindex = if_nametoindex(iface);



Reviewed-by: Corey Bryant cor...@linux.vnet.ibm.com

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH] qemu-bridge-helper: force usage of a very high MAC address for the bridge

2013-03-22 Thread Corey Bryant



On 03/22/2013 12:57 PM, Paolo Bonzini wrote:

Linux uses the lowest enslaved MAC address as the MAC address of
the bridge.  Set MAC address to a high value so that it does not
affect the MAC address of the bridge.

Changing the MAC address of the bridge could cause a few seconds
of network downtime.

Cc: qemu-sta...@nongnu.org
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
  qemu-bridge-helper.c | 18 ++
  1 file changed, 18 insertions(+)

diff --git a/qemu-bridge-helper.c b/qemu-bridge-helper.c
index 287bfd5..6a0974e 100644
--- a/qemu-bridge-helper.c
+++ b/qemu-bridge-helper.c
@@ -367,6 +367,24 @@ int main(int argc, char **argv)
  goto cleanup;
  }

+/* Linux uses the lowest enslaved MAC address as the MAC address of
+ * the bridge.  Set MAC address to a high value so that it doesn't
+ * affect the MAC address of the bridge.
+ */
+if (ioctl(ctlfd, SIOCGIFHWADDR, &ifr) < 0) {
+fprintf(stderr, "failed to get MAC address of device `%s': %s\n",
+iface, strerror(errno));
+ret = EXIT_FAILURE;
+goto cleanup;
+}
+ifr.ifr_hwaddr.sa_data[0] = 0xFE;
+if (ioctl(ctlfd, SIOCSIFHWADDR, &ifr) < 0) {
+fprintf(stderr, "failed to set MAC address of device `%s': %s\n",
+iface, strerror(errno));
+ret = EXIT_FAILURE;
+goto cleanup;
+}
+
  /* add the interface to the bridge */
  prep_ifreq(ifr, bridge);
  ifindex = if_nametoindex(iface);



Is it desirable to change a mac address under the covers?  I know you 
mentioned that libvirt does something like this already.


Also, it seems like this might be better if it were optional.
qemu-bridge-helper can take command line options like this:


-net tap,helper=/usr/local/libexec/qemu-bridge-helper --br=br0

Perhaps adding a --macaddr option is a better approach?
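For illustration, the option could be parsed along these lines (a
hypothetical sketch only; the option name and the way it would hook into
qemu-bridge-helper's existing argument handling are assumptions, not a
proposed patch):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse "52:54:00:12:34:56" into 6 bytes; returns 0 on success. */
static int parse_macaddr(const char *str, unsigned char mac[6])
{
    unsigned int b[6];
    int i;

    if (sscanf(str, "%x:%x:%x:%x:%x:%x",
               &b[0], &b[1], &b[2], &b[3], &b[4], &b[5]) != 6) {
        return -1;
    }
    for (i = 0; i < 6; i++) {
        if (b[i] > 0xff) {
            return -1;
        }
        mac[i] = (unsigned char)b[i];
    }
    return 0;
}

int main(int argc, char **argv)
{
    unsigned char mac[6];
    int i;

    for (i = 1; i < argc; i++) {
        if (strncmp(argv[i], "--macaddr=", 10) == 0) {
            if (parse_macaddr(argv[i] + 10, mac) < 0) {
                fprintf(stderr, "invalid MAC address '%s'\n", argv[i] + 10);
                return EXIT_FAILURE;
            }
            /* The real helper would copy this into ifr.ifr_hwaddr.sa_data
             * and apply it with SIOCSIFHWADDR instead of printing it. */
            printf("would set MAC %02x:%02x:%02x:%02x:%02x:%02x\n",
                   mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        }
    }
    return 0;
}

That would keep the current behaviour as the default and only override the
TAP MAC address when the caller asks for it.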

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH] QMP: TPM QMP and man page documentation updates

2013-03-20 Thread Corey Bryant



On 03/20/2013 08:32 AM, Markus Armbruster wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


On 03/19/2013 03:26 AM, Markus Armbruster wrote:

[Note cc: Anthony for QAPI schema expertise]

Stefan Berger stef...@linux.vnet.ibm.com writes:


On 03/18/2013 12:16 PM, Markus Armbruster wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
qemu-options.hx |  3 ++-
qmp-commands.hx | 59 
+
2 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index 30fb85d..3b3cd0f 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2237,7 +2237,8 @@ Backend type must be:
@option{passthrough}.
  The specific backend type will determine the applicable
options.
-The @code{-tpmdev} option requires a @code{-device} option.
+The @code{-tpmdev} option creates the TPM backend and requires a
+@code{-device} option that specifies the TPM frontend interface model.
  Options to each backend are described below.
diff --git a/qmp-commands.hx b/qmp-commands.hx
index b370060..4eda5ea 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2721,18 +2721,77 @@ EQMP
.mhandler.cmd_new = qmp_marshal_input_query_tpm,
},
+SQMP
+query-tpm
+-
+
+Return information about the TPM device.
+
+Arguments: None
+
+Example:
+
+- { execute: query-tpm }
+- { return:
+ [
+   { model: tpm-tis,
+ tpm-options:
+   { type: tpm-passthrough-options,
+ data:
+   { cancel-path: /sys/class/misc/tpm0/device/cancel,
+ path: /dev/tpm0
+   }
+   },
+ type: passthrough,
+ id: tpm0
+   }
+ ]
+   }
+
+EQMP
+

tpm-options is a discriminated union.  How is its discriminator type
(here: tpm-passthrough-options) related to the outer type (here:
passthrough)?


It gives you similar information twice. So there is a direct
relationship between the two types.


Awkward and undocumented.

Relevant parts of qapi-schema.json:

  { 'enum': 'TpmType', 'data': [ 'passthrough' ] }

  { 'union': 'TpmTypeOptions',
 'data': { 'tpm-passthrough-options' : 'TPMPassthroughOptions' } }

  { 'type': 'TPMInfo',
'data': {'id': 'str',
 'model': 'TpmModel',
 'type': 'TpmType',
 'tpm-options': 'TpmTypeOptions' } }

Type Netdev solves the same problem more elegantly:

  { 'union': 'NetClientOptions',
'data': {
  'none': 'NetdevNoneOptions',
  'nic':  'NetLegacyNicOptions',
  'user': 'NetdevUserOptions',
  'tap':  'NetdevTapOptions',
  'socket':   'NetdevSocketOptions',
  'vde':  'NetdevVdeOptions',
  'dump': 'NetdevDumpOptions',
  'bridge':   'NetdevBridgeOptions',
  'hubport':  'NetdevHubPortOptions' } }

  { 'type': 'Netdev',
'data': {
  'id':   'str',
  'opts': 'NetClientOptions' } }

Uses the union's discriminator.  Straightforward.

Following Netdev precedence, we get:

  { 'union': 'TpmTypeOptions',
'data': { 'passthrough' : 'TPMPassthroughOptions' } }

  { 'type': 'TPMInfo',
'data': {'id': 'str',
 'model': 'TpmModel',
 'opts': 'TpmTypeOptions' } }



I can send a patch for this update if you'd like.


Yes, please!



Will do.


Duplication of type is gone.  No need for documentation.

Since enum TpmType is used elsewhere, it still gets duplicated in the
union's discriminator.  Anthony, is there a way to name the implicit
discriminator enum type for reference elsewhere in the schema?



Are you saying it still gets duplicated with this fix?  I'm not sure
what you mean.


A union in the schema implicitly defines a C enumeration type to be
used for its discriminator.  For instance, union TpmTypeOptions
implicitly defines this one:

 typedef enum TpmTypeOptionsKind
 {
 TPM_TYPE_OPTIONS_KIND_TPM_PASSTHROUGH_OPTIONS = 0,
 TPM_TYPE_OPTIONS_KIND_MAX = 1,
 } TpmTypeOptionsKind;

QAPI's discriminated union becomes a C struct containing the
discriminator and the union of the members:

 struct TpmTypeOptions
 {
 TpmTypeOptionsKind kind;
 union {
 void *data;
 TPMPassthroughOptions * tpm_passthrough_options;
 };
 };

Enum TpmType and type TpmInfo become:

 typedef enum TpmType
 {
 TPM_TYPE_PASSTHROUGH = 0,
 TPM_TYPE_MAX = 1,
 } TpmType;

 struct TPMInfo
 {
 char * id;
 TpmModel model;
 TpmType type;
 TpmTypeOptions * tpm_options;
 };

With the change I propose, TPM_TYPE_OPTIONS_KIND_TPM_PASSTHROUGH_OPTIONS
becomes TPM_TYPE_OPTIONS_KIND_TPM_PASSTHROUGH, and TPMInfo member type
goes away.
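Spelled out, the generated code after that change would look roughly like
this (a sketch for comparison with the excerpts above, not actual generator
output):

 typedef enum TpmTypeOptionsKind
 {
 TPM_TYPE_OPTIONS_KIND_PASSTHROUGH = 0,
 TPM_TYPE_OPTIONS_KIND_MAX = 1,
 } TpmTypeOptionsKind;

 struct TpmTypeOptions
 {
 TpmTypeOptionsKind kind;
 union {
 void *data;
 TPMPassthroughOptions * passthrough;
 };
 };

 struct TPMInfo
 {
 char * id;
 TpmModel model;
 TpmTypeOptions * opts;
 };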

Because TpmType is still used elsewhere, it doesn't go away, thus still

Re: [Qemu-devel] [PATCH 04/35] tpm: reorganize headers and split hardware part

2013-03-20 Thread Corey Bryant



On 03/18/2013 01:34 PM, Paolo Bonzini wrote:

The TPM subsystem does not have a good front-end/back-end separation.


I think it has very good front-end/back-end separation, but perhaps you 
mean the code could be moved around and better organized.



However, we can at least try to split the user interface (tpm.c) from
the implementation (hw/tpm).


Here's a general break-down of front-end/back-end/general code

tpm_tis.c = TIS front-end (the only front-end at this time)
tpm_passthrough.c = Passthrough backend (the only back-end at this time)
tpm.c = general code
tpm_backend.c = backend thread pool functions (could use a better file 
name?)




The patches makes tpm.c not include tpm_int.h; instead it moves more
stuff to tpm_backend.h.

Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
  Makefile.objs  |  2 +-
  default-configs/i386-softmmu.mak   |  2 +-
  default-configs/x86_64-softmmu.mak |  2 +-
  hw/Makefile.objs   |  1 +
  {tpm => hw/tpm}/Makefile.objs  |  3 +-
  {tpm => hw/tpm}/tpm_backend.c  | 14 ++
  {tpm => hw/tpm}/tpm_int.h  | 55 ++--
  {tpm => hw/tpm}/tpm_passthrough.c  |  0
  {tpm => hw/tpm}/tpm_tis.c  |  0
  {tpm => hw/tpm}/tpm_tis.h  |  5 
  {tpm => include/tpm}/tpm_backend.h | 57 ++
  tpm/tpm.c => tpm.c | 18 ++--
  12 files changed, 80 insertions(+), 79 deletions(-)
  rename {tpm => hw/tpm}/Makefile.objs (61%)
  rename {tpm => hw/tpm}/tpm_backend.c (82%)
  rename {tpm => hw/tpm}/tpm_int.h (49%)
  rename {tpm => hw/tpm}/tpm_passthrough.c (100%)
  rename {tpm => hw/tpm}/tpm_tis.c (100%)
  rename {tpm => hw/tpm}/tpm_tis.h (94%)
  rename {tpm => include/tpm}/tpm_backend.h (50%)
  rename tpm/tpm.c => tpm.c (93%)

diff --git a/Makefile.objs b/Makefile.objs
index f99841c..ff3a6b3 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -74,7 +74,7 @@ common-obj-y += bt-host.o bt-vhci.o

  common-obj-y += dma-helpers.o
  common-obj-y += vl.o
-common-obj-y += tpm/
+common-obj-y += tpm.o

  common-obj-$(CONFIG_SLIRP) += slirp/

diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak
index 7d8908f..f70594d 100644
--- a/default-configs/i386-softmmu.mak
+++ b/default-configs/i386-softmmu.mak
@@ -26,4 +26,4 @@ CONFIG_HPET=y
  CONFIG_APPLESMC=y
  CONFIG_I8259=y
  CONFIG_PFLASH_CFI01=y
-CONFIG_TPM_TIS=$(CONFIG_TPM)
+CONFIG_TPM_TIS=y
diff --git a/default-configs/x86_64-softmmu.mak 
b/default-configs/x86_64-softmmu.mak
index e87e644..66c4855 100644
--- a/default-configs/x86_64-softmmu.mak
+++ b/default-configs/x86_64-softmmu.mak
@@ -26,4 +26,4 @@ CONFIG_HPET=y
  CONFIG_APPLESMC=y
  CONFIG_I8259=y
  CONFIG_PFLASH_CFI01=y
-CONFIG_TPM_TIS=$(CONFIG_TPM)
+CONFIG_TPM_TIS=y
diff --git a/hw/Makefile.objs b/hw/Makefile.objs
index 09fea2c..5626292 100644
--- a/hw/Makefile.objs
+++ b/hw/Makefile.objs
@@ -25,6 +25,7 @@ devices-dirs-$(CONFIG_SOFTMMU) += scsi/
  devices-dirs-$(CONFIG_SOFTMMU) += sd/
  devices-dirs-$(CONFIG_SOFTMMU) += ssi/
  devices-dirs-$(CONFIG_SOFTMMU) += timer/
+devices-dirs-$(CONFIG_TPM) += tpm/
  devices-dirs-$(CONFIG_SOFTMMU) += usb/
  devices-dirs-$(CONFIG_SOFTMMU) += virtio/
  devices-dirs-$(CONFIG_SOFTMMU) += watchdog/
diff --git a/tpm/Makefile.objs b/hw/tpm/Makefile.objs
similarity index 61%
rename from tpm/Makefile.objs
rename to hw/tpm/Makefile.objs
index 366e4a7..8bbed7a 100644
--- a/tpm/Makefile.objs
+++ b/hw/tpm/Makefile.objs
@@ -1,4 +1,3 @@
-common-obj-y = tpm.o
-common-obj-$(CONFIG_TPM) += tpm_backend.o
+common-obj-y += tpm_backend.o
  common-obj-$(CONFIG_TPM_TIS) += tpm_tis.o
  common-obj-$(CONFIG_TPM_PASSTHROUGH) += tpm_passthrough.o
diff --git a/tpm/tpm_backend.c b/hw/tpm/tpm_backend.c
similarity index 82%
rename from tpm/tpm_backend.c
rename to hw/tpm/tpm_backend.c
index 4144ef7..31d833c 100644
--- a/tpm/tpm_backend.c
+++ b/hw/tpm/tpm_backend.c
@@ -56,3 +56,17 @@ void tpm_backend_thread_tpm_reset(TPMBackendThread *tbt,
 NULL);
  }
  }
+
+/*
+ * Write an error message in the given output buffer.
+ */
+void tpm_write_fatal_error_response(uint8_t *out, uint32_t out_len)
+{
+if (out_len >= sizeof(struct tpm_resp_hdr)) {
+struct tpm_resp_hdr *resp = (struct tpm_resp_hdr *)out;
+
+resp->tag = cpu_to_be16(TPM_TAG_RSP_COMMAND);
+resp->len = cpu_to_be32(sizeof(struct tpm_resp_hdr));
+resp->errcode = cpu_to_be32(TPM_FAIL);
+}
+}


I don't think moving this from tpm.c to tpm_backend.c helps anything. 
Maybe just renaming some of the files mentioned above might make the 
front-end vs back-end vs general code more intuitive.


--
Regards,
Corey Bryant


diff --git a/tpm/tpm_int.h b/hw/tpm/tpm_int.h
similarity index 49%
rename from tpm/tpm_int.h
rename to hw/tpm/tpm_int.h
index f705643..d5f7bb8 100644
--- a/tpm/tpm_int.h
+++ b/hw/tpm/tpm_int.h
@@ -15,27 +15,8 @@
  #include "exec/memory.h"
  #include "tpm/tpm_tis.h"

-struct TPMDriverOps;
-typedef struct

[Qemu-devel] [PATCH 2/2] QMP: TPM QMP and man page documentation updates

2013-03-20 Thread Corey Bryant

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 qemu-options.hx |3 +-
 qmp-commands.hx |   58 +++
 2 files changed, 60 insertions(+), 1 deletions(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index 06dd565..b4e3a2d 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2241,7 +2241,8 @@ Backend type must be:
 @option{passthrough}.
 
 The specific backend type will determine the applicable options.
-The @code{-tpmdev} option requires a @code{-device} option.
+The @code{-tpmdev} option creates the TPM backend and requires a
+@code{-device} option that specifies the TPM frontend interface model.
 
 Options to each backend are described below.
 
diff --git a/qmp-commands.hx b/qmp-commands.hx
index b370060..3aa6bd1 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2721,18 +2721,76 @@ EQMP
 .mhandler.cmd_new = qmp_marshal_input_query_tpm,
 },
 
+SQMP
+query-tpm
+-
+
+Return information about the TPM device.
+
+Arguments: None
+
+Example:
+
+-> { "execute": "query-tpm" }
+<- { "return":
+ [
+   { "model": "tpm-tis",
+ "options":
+   { "type": "passthrough",
+ "data":
+   { "cancel-path": "/sys/class/misc/tpm0/device/cancel",
+ "path": "/dev/tpm0"
+   }
+   },
+ "id": "tpm0"
+   }
+ ]
+   }
+
+EQMP
+
 {
 .name   = query-tpm-models,
 .args_type  = ,
 .mhandler.cmd_new = qmp_marshal_input_query_tpm_models,
 },
 
+SQMP
+query-tpm-models
+
+
+Return a list of supported TPM models.
+
+Arguments: None
+
+Example:
+
+-> { "execute": "query-tpm-models" }
+<- { "return": [ "tpm-tis" ] }
+
+EQMP
+
 {
 .name   = query-tpm-types,
 .args_type  = ,
 .mhandler.cmd_new = qmp_marshal_input_query_tpm_types,
 },
 
+SQMP
+query-tpm-types
+---
+
+Return a list of supported TPM types.
+
+Arguments: None
+
+Example:
+
+-> { "execute": "query-tpm-types" }
+<- { "return": [ "passthrough" ] }
+
+EQMP
+
 {
 .name   = chardev-add,
 .args_type  = id:s,backend:q,
-- 
1.7.1




[Qemu-devel] [PATCH 1/2] QMP: Remove duplicate TPM type from query-tpm

2013-03-20 Thread Corey Bryant

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 hmp.c|8 
 qapi-schema.json |   12 
 tpm/tpm.c|9 -
 3 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/hmp.c b/hmp.c
index b0a861c..d319897 100644
--- a/hmp.c
+++ b/hmp.c
@@ -631,11 +631,11 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
c, TpmModel_lookup[ti->model]);

monitor_printf(mon, "  \\ %s: type=%s",
-   ti->id, TpmType_lookup[ti->type]);
+   ti->id, TpmTypeOptionsKind_lookup[ti->options->kind]);

-switch (ti->tpm_options->kind) {
-case TPM_TYPE_OPTIONS_KIND_TPM_PASSTHROUGH_OPTIONS:
-tpo = ti->tpm_options->tpm_passthrough_options;
+switch (ti->options->kind) {
+case TPM_TYPE_OPTIONS_KIND_PASSTHROUGH:
+tpo = ti->options->passthrough;
monitor_printf(mon, "%s%s%s%s",
tpo->has_path ? ",path=" : "",
tpo->has_path ? tpo->path : "",
diff --git a/qapi-schema.json b/qapi-schema.json
index fdaa9da..5505bbf 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -3403,13 +3403,12 @@
 #
 # A union referencing different TPM backend types' configuration options
 #
-# @tpm-passthough-options: TPMPassthroughOptions describing the TPM
-#  passthrough configuration options
+# @passthough: The configuration options for the TPM passthrough type
 #
 # Since: 1.5
 ##
 { 'union': 'TpmTypeOptions',
-   'data': { 'tpm-passthrough-options' : 'TPMPassthroughOptions' } }
+   'data': { 'passthrough' : 'TPMPassthroughOptions' } }
 
 ##
 # @TpmInfo:
@@ -3420,17 +3419,14 @@
 #
 # @model: The TPM frontend model
 #
-# @type: The TPM (backend) type being used
-#
-# @tpm-options: The TPM (backend) type configuration options
+# @options: The TPM (backend) type configuration options
 #
 # Since: 1.5
 ##
  { 'type': 'TPMInfo',
    'data': {'id': 'str',
'model': 'TpmModel',
-   'type': 'TpmType',
-   'tpm-options': 'TpmTypeOptions' } }
+   'options': 'TpmTypeOptions' } }
 
 ##
 # @query-tpm:
diff --git a/tpm/tpm.c b/tpm/tpm.c
index ffd2495..ae00eae 100644
--- a/tpm/tpm.c
+++ b/tpm/tpm.c
@@ -257,14 +257,13 @@ static TPMInfo *qmp_query_tpm_inst(TPMBackend *drv)
 
 res->id = g_strdup(drv->id);
 res->model = drv->fe_model;
-res->type = drv->ops->type;
-res->tpm_options = g_new0(TpmTypeOptions, 1);
+res->options = g_new0(TpmTypeOptions, 1);
 
-switch (res->type) {
+switch (drv->ops->type) {
 case TPM_TYPE_PASSTHROUGH:
-res->tpm_options->kind = TPM_TYPE_OPTIONS_KIND_TPM_PASSTHROUGH_OPTIONS;
+res->options->kind = TPM_TYPE_OPTIONS_KIND_PASSTHROUGH;
 tpo = g_new0(TPMPassthroughOptions, 1);
-res->tpm_options->tpm_passthrough_options = tpo;
+res->options->passthrough = tpo;
 if (drv->path) {
 tpo->path = g_strdup(drv->path);
 tpo->has_path = true;
-- 
1.7.1




Re: [Qemu-devel] [PATCH] QMP: TPM QMP and man page documentation updates

2013-03-20 Thread Corey Bryant
I just resubmitted this patch as part of another series, so please 
ignore this one.


--
Regards,
Corey Bryant

On 03/15/2013 01:17 PM, Corey Bryant wrote:

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
  qemu-options.hx |  3 ++-
  qmp-commands.hx | 59 +
  2 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index 30fb85d..3b3cd0f 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2237,7 +2237,8 @@ Backend type must be:
  @option{passthrough}.

  The specific backend type will determine the applicable options.
-The @code{-tpmdev} option requires a @code{-device} option.
+The @code{-tpmdev} option creates the TPM backend and requires a
+@code{-device} option that specifies the TPM frontend interface model.

  Options to each backend are described below.

diff --git a/qmp-commands.hx b/qmp-commands.hx
index b370060..4eda5ea 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2721,18 +2721,77 @@ EQMP
  .mhandler.cmd_new = qmp_marshal_input_query_tpm,
  },

+SQMP
+query-tpm
+-
+
+Return information about the TPM device.
+
+Arguments: None
+
+Example:
+
+- { execute: query-tpm }
+- { return:
+ [
+   { model: tpm-tis,
+ tpm-options:
+   { type: tpm-passthrough-options,
+ data:
+   { cancel-path: /sys/class/misc/tpm0/device/cancel,
+ path: /dev/tpm0
+   }
+   },
+ type: passthrough,
+ id: tpm0
+   }
+ ]
+   }
+
+EQMP
+
  {
  .name   = query-tpm-models,
  .args_type  = ,
  .mhandler.cmd_new = qmp_marshal_input_query_tpm_models,
  },

+SQMP
+query-tpm-models
+
+
+Return a list of supported TPM models.
+
+Arguments: None
+
+Example:
+
+- { execute: query-tpm-models }
+- { return: [ tpm-tis ] }
+
+EQMP
+
  {
  .name   = query-tpm-types,
  .args_type  = ,
  .mhandler.cmd_new = qmp_marshal_input_query_tpm_types,
  },

+SQMP
+query-tpm-types
+---
+
+Return a list of supported TPM types.
+
+Arguments: None
+
+Example:
+
+- { execute: query-tpm-types }
+- { return: [ passthrough ] }
+
+EQMP
+
  {
  .name   = chardev-add,
  .args_type  = id:s,backend:q,






Re: [Qemu-devel] [PATCH] QMP: TPM QMP and man page documentation updates

2013-03-19 Thread Corey Bryant



On 03/19/2013 03:26 AM, Markus Armbruster wrote:

[Note cc: Anthony for QAPI schema expertise]

Stefan Berger stef...@linux.vnet.ibm.com writes:


On 03/18/2013 12:16 PM, Markus Armbruster wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
   qemu-options.hx |  3 ++-
   qmp-commands.hx | 59 
+
   2 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index 30fb85d..3b3cd0f 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2237,7 +2237,8 @@ Backend type must be:
   @option{passthrough}.
 The specific backend type will determine the applicable
options.
-The @code{-tpmdev} option requires a @code{-device} option.
+The @code{-tpmdev} option creates the TPM backend and requires a
+@code{-device} option that specifies the TPM frontend interface model.
 Options to each backend are described below.
   diff --git a/qmp-commands.hx b/qmp-commands.hx
index b370060..4eda5ea 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2721,18 +2721,77 @@ EQMP
   .mhandler.cmd_new = qmp_marshal_input_query_tpm,
   },
   +SQMP
+query-tpm
+-
+
+Return information about the TPM device.
+
+Arguments: None
+
+Example:
+
+- { execute: query-tpm }
+- { return:
+ [
+   { model: tpm-tis,
+ tpm-options:
+   { type: tpm-passthrough-options,
+ data:
+   { cancel-path: /sys/class/misc/tpm0/device/cancel,
+ path: /dev/tpm0
+   }
+   },
+ type: passthrough,
+ id: tpm0
+   }
+ ]
+   }
+
+EQMP
+

tpm-options is a discriminated union.  How is its discriminator type
(here: tpm-passthrough-options) related to the outer type (here:
passthrough)?


It gives you similar information twice. So there is a direct
relationship between the two types.


Awkward and undocumented.

Relevant parts of qapi-schema.json:

 { 'enum': 'TpmType', 'data': [ 'passthrough' ] }

 { 'union': 'TpmTypeOptions',
'data': { 'tpm-passthrough-options' : 'TPMPassthroughOptions' } }

 { 'type': 'TPMInfo',
   'data': {'id': 'str',
'model': 'TpmModel',
'type': 'TpmType',
'tpm-options': 'TpmTypeOptions' } }

Type Netdev solves the same problem more elegantly:

 { 'union': 'NetClientOptions',
   'data': {
 'none': 'NetdevNoneOptions',
 'nic':  'NetLegacyNicOptions',
 'user': 'NetdevUserOptions',
 'tap':  'NetdevTapOptions',
 'socket':   'NetdevSocketOptions',
 'vde':  'NetdevVdeOptions',
 'dump': 'NetdevDumpOptions',
 'bridge':   'NetdevBridgeOptions',
 'hubport':  'NetdevHubPortOptions' } }

 { 'type': 'Netdev',
   'data': {
 'id':   'str',
 'opts': 'NetClientOptions' } }

Uses the union's discriminator.  Straightforward.

Following Netdev precedence, we get:

 { 'union': 'TpmTypeOptions',
   'data': { 'passthrough' : 'TPMPassthroughOptions' } }

 { 'type': 'TPMInfo',
   'data': {'id': 'str',
'model': 'TpmModel',
'opts': 'TpmTypeOptions' } }



I can send a patch for this update if you'd like.


Duplication of type is gone.  No need for documentation.

Since enum TpmType is used elsewhere, it still gets duplicated in the
union's discriminator.  Anthony, is there a way to name the implicit
discriminator enum type for reference elsewhere in the schema?



Are you saying it still gets duplicated with this fix?  I'm not sure 
what you mean.


--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH] QMP: TPM QMP and man page documentation updates

2013-03-18 Thread Corey Bryant



On 03/18/2013 01:46 PM, Stefan Berger wrote:

On 03/18/2013 12:16 PM, Markus Armbruster wrote:

Corey Bryant cor...@linux.vnet.ibm.com writes:


Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
  qemu-options.hx |  3 ++-
  qmp-commands.hx | 59
+
  2 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index 30fb85d..3b3cd0f 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2237,7 +2237,8 @@ Backend type must be:
  @option{passthrough}.
  The specific backend type will determine the applicable options.
-The @code{-tpmdev} option requires a @code{-device} option.
+The @code{-tpmdev} option creates the TPM backend and requires a
+@code{-device} option that specifies the TPM frontend interface model.
  Options to each backend are described below.
diff --git a/qmp-commands.hx b/qmp-commands.hx
index b370060..4eda5ea 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2721,18 +2721,77 @@ EQMP
  .mhandler.cmd_new = qmp_marshal_input_query_tpm,
  },
+SQMP
+query-tpm
+-
+
+Return information about the TPM device.
+
+Arguments: None
+
+Example:
+
+- { execute: query-tpm }
+- { return:
+ [
+   { model: tpm-tis,
+ tpm-options:
+   { type: tpm-passthrough-options,
+ data:
+   { cancel-path: /sys/class/misc/tpm0/device/cancel,
+ path: /dev/tpm0
+   }
+   },
+ type: passthrough,
+ id: tpm0
+   }
+ ]
+   }
+
+EQMP
+

tpm-options is a discriminated union.  How is its discriminator type
(here: tpm-passthrough-options) related to the outer type (here:
passthrough)?


It gives you similar information twice. So there is a direct
relationship between the two types.



The sample above could be the result when the following command line 
options are in effect:


qemu-system-x86_64
  -tpmdev 
passthrough,id=tpm0,path=/dev/tpm0,cancel-path=/sys/class/misc/tpm0/device/cancel

  -device tpm-tis,tpmdev=tpm0

--
Regards,
Corey Bryant




Re: [Qemu-devel] [PATCH V27 1/7] Support for TPM command line options

2013-03-15 Thread Corey Bryant



On 03/15/2013 03:36 AM, Markus Armbruster wrote:

I missed this one, because it wasn't cc'ed to QMP maintainers, the
subject mentions only command line, not QMP, and even the body talks
only about the human monitor command, not QMP.  Noticed it only when
git-pull touched qapi-schema.json.  Please try harder to help Luiz and
me keep track of QMP changes.

I gave the QMP interface and its documentation a look-over now.  It's
just a look-over, because passthrough requires a box with TPM enabled,
which I don't have handy, so I can't test anything.

A few comments inline.

Stefan Berger stef...@linux.vnet.ibm.com writes:


This patch adds support for TPM command line options.
The command line options supported here are

./qemu-... -tpmdev passthrough,path=<path to TPM device>,id=<id>
-device tpm-tis,tpmdev=<id>,id=<other id>

and

./qemu-... -tpmdev help

where the latter works similar to -soundhw help and shows a list of
available TPM backends (for example 'passthrough').

Using the type parameter, the backend is chosen, i.e., 'passthrough' for the
passthrough driver. The interpretation of the other parameters along
with determining whether enough parameters were provided is pushed into
the backend driver, which needs to implement the interface function
'create' and return a TPMDriverOpts structure if the VM can be started or
'NULL' if not enough or bad parameters were provided.
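For readers following along, the backend interface being described has
roughly this shape (an illustrative sketch; see tpm_int.h in the patch for
the real definition, which differs in detail):

/* Illustrative only -- field names and signatures are approximations. */
typedef struct QemuOpts QemuOpts;
typedef struct TPMBackend TPMBackend;

typedef struct TPMDriverOps {
    const char *type;                            /* e.g. "passthrough" */
    /* Interpret the backend-specific -tpmdev parameters;
     * returns NULL if they are missing or invalid. */
    TPMBackend *(*create)(QemuOpts *opts, const char *id);
    void (*destroy)(TPMBackend *tb);
} TPMDriverOps;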

Monitor support for 'info tpm' has been added. It for example prints the
following:

(qemu) info tpm
TPM devices:
  tpm0: model=tpm-tis
   \ tpm0: 
type=passthrough,path=/dev/tpm0,cancel-path=/sys/devices/pnp0/00:09/cancel

Signed-off-by: Stefan Berger stef...@linux.vnet.ibm.com
Reviewed-by: Corey Bryant cor...@linux.vnet.ibm.com
---
  Makefile.objs |   1 +
  hmp-commands.hx   |   2 +
  hmp.c |  44 +++
  hmp.h |   1 +
  include/tpm/tpm.h |  21 
  monitor.c |   8 ++
  qapi-schema.json  | 104 +
  qemu-options.hx   |  33 ++
  qmp-commands.hx   |  18 +++
  tpm/Makefile.objs |   1 +
  tpm/tpm.c | 343 ++
  tpm/tpm_int.h |  83 +
  tpm/tpm_tis.h |  80 +
  vl.c  |  37 ++
  14 files changed, 776 insertions(+)
  create mode 100644 include/tpm/tpm.h
  create mode 100644 tpm/Makefile.objs
  create mode 100644 tpm/tpm.c
  create mode 100644 tpm/tpm_int.h
  create mode 100644 tpm/tpm_tis.h

diff --git a/Makefile.objs b/Makefile.objs
index a68cdac..047f9c1 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -73,6 +73,7 @@ common-obj-y += bt-host.o bt-vhci.o

  common-obj-y += dma-helpers.o
  common-obj-y += vl.o
+common-obj-y += tpm/

  common-obj-$(CONFIG_SLIRP) += slirp/

diff --git a/hmp-commands.hx b/hmp-commands.hx
index cef7708..9f2c093 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1642,6 +1642,8 @@ show device tree
  show qdev device model list
  @item info roms
  show roms
+@item info tpm
+show the TPM device
  @end table
  ETEXI

diff --git a/hmp.c b/hmp.c
index 2f47a8a..b0a861c 100644
--- a/hmp.c
+++ b/hmp.c
@@ -607,6 +607,50 @@ void hmp_info_block_jobs(Monitor *mon, const QDict *qdict)
  }
  }

+void hmp_info_tpm(Monitor *mon, const QDict *qdict)
+{
+TPMInfoList *info_list, *info;
+Error *err = NULL;
+unsigned int c = 0;
+TPMPassthroughOptions *tpo;
+
+info_list = qmp_query_tpm(err);
+if (err) {
+monitor_printf(mon, "TPM device not supported\n");
+error_free(err);
+return;
+}
+
+if (info_list) {
+monitor_printf(mon, "TPM device:\n");
+}
+
+for (info = info_list; info; info = info->next) {
+TPMInfo *ti = info->value;
+monitor_printf(mon, " tpm%d: model=%s\n",
+   c, TpmModel_lookup[ti->model]);
+
+monitor_printf(mon, "  \\ %s: type=%s",
+   ti->id, TpmType_lookup[ti->type]);
+
+switch (ti->tpm_options->kind) {
+case TPM_TYPE_OPTIONS_KIND_TPM_PASSTHROUGH_OPTIONS:
+tpo = ti->tpm_options->tpm_passthrough_options;
+monitor_printf(mon, "%s%s%s%s",
+   tpo->has_path ? ",path=" : "",
+   tpo->has_path ? tpo->path : "",
+   tpo->has_cancel_path ? ",cancel-path=" : "",
+   tpo->has_cancel_path ? tpo->cancel_path : "");
+break;
+case TPM_TYPE_OPTIONS_KIND_MAX:
+break;
+}
+monitor_printf(mon, "\n");
+c++;
+}
+qapi_free_TPMInfoList(info_list);
+}
+
  void hmp_quit(Monitor *mon, const QDict *qdict)
  {
  monitor_suspend(mon);
diff --git a/hmp.h b/hmp.h
index 30b3c20..95fe76e 100644
--- a/hmp.h
+++ b/hmp.h
@@ -36,6 +36,7 @@ void hmp_info_spice(Monitor *mon, const QDict *qdict);
  void hmp_info_balloon(Monitor *mon, const QDict *qdict);
  void hmp_info_pci(Monitor *mon, const QDict *qdict);
  void hmp_info_block_jobs(Monitor *mon, const QDict *qdict);
+void hmp_info_tpm

[Qemu-devel] [PATCH] QMP: TPM QMP and man page documentation updates

2013-03-15 Thread Corey Bryant

Signed-off-by: Corey Bryant cor...@linux.vnet.ibm.com
---
 qemu-options.hx |  3 ++-
 qmp-commands.hx | 59 +
 2 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index 30fb85d..3b3cd0f 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2237,7 +2237,8 @@ Backend type must be:
 @option{passthrough}.
 
 The specific backend type will determine the applicable options.
-The @code{-tpmdev} option requires a @code{-device} option.
+The @code{-tpmdev} option creates the TPM backend and requires a
+@code{-device} option that specifies the TPM frontend interface model.
 
 Options to each backend are described below.
 
diff --git a/qmp-commands.hx b/qmp-commands.hx
index b370060..4eda5ea 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2721,18 +2721,77 @@ EQMP
 .mhandler.cmd_new = qmp_marshal_input_query_tpm,
 },
 
+SQMP
+query-tpm
+-
+
+Return information about the TPM device.
+
+Arguments: None
+
+Example:
+
+- { execute: query-tpm }
+- { return:
+ [
+   { model: tpm-tis,
+ tpm-options:
+   { type: tpm-passthrough-options,
+ data:
+   { cancel-path: /sys/class/misc/tpm0/device/cancel,
+ path: /dev/tpm0
+   }
+   },
+ type: passthrough,
+ id: tpm0
+   }
+ ]
+   }
+
+EQMP
+
 {
 .name   = query-tpm-models,
 .args_type  = ,
 .mhandler.cmd_new = qmp_marshal_input_query_tpm_models,
 },
 
+SQMP
+query-tpm-models
+
+
+Return a list of supported TPM models.
+
+Arguments: None
+
+Example:
+
+- { execute: query-tpm-models }
+- { return: [ tpm-tis ] }
+
+EQMP
+
 {
 .name   = query-tpm-types,
 .args_type  = ,
 .mhandler.cmd_new = qmp_marshal_input_query_tpm_types,
 },
 
+SQMP
+query-tpm-types
+---
+
+Return a list of supported TPM types.
+
+Arguments: None
+
+Example:
+
+- { execute: query-tpm-types }
+- { return: [ passthrough ] }
+
+EQMP
+
 {
 .name   = chardev-add,
 .args_type  = id:s,backend:q,
-- 
1.8.1.4



