Re: [PATCH-v2 1/2] virtio-scsi: create VirtIOSCSICommon

2013-04-09 Thread Paolo Bonzini
Il 08/04/2013 23:59, Anthony Liguori ha scritto:
  This patch refactors existing virtio-scsi code into VirtIOSCSICommon
  in order to allow virtio_scsi_init_common() to be used by both internal
  virtio_scsi_init() and external vhost-scsi-pci code.
 
  Changes in Patch-v2:
 - Move ->get_features() assignment to virtio_scsi_init() instead of
   virtio_scsi_init_common()
 
 Any reason we're not doing this as a QOM base class?
 
 Similar to how the in-kernel PIT/PIC work using a common base class...

Because when the patch was written virtio-scsi was not a QOM class.

Paolo

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: Bug in SeaBIOS virtio-ring handling bug with vhost-scsi-pci

2013-04-09 Thread Paolo Bonzini
Il 09/04/2013 06:33, Nicholas A. Bellinger ha scritto:
  Nicholas, where is the latest v3 code? Can you push it to your tree?
  
 Sure.  Just pushed to:
 
 http://git.kernel.org/cgit/virt/kvm/nab/qemu-kvm.git/log/?h=vhost-scsi-for-1.4
 
 and should be appearing momentarily.

I'm going to test my rebase today.

Paolo


[PATCH v5 0/3] tcm_vhost hotplug

2013-04-09 Thread Asias He
Asias He (3):
  tcm_vhost: Introduce tcm_vhost_check_feature()
  tcm_vhost: Add helper to check if endpoint is setup
  tcm_vhost: Add hotplug/hotunplug support

 drivers/vhost/tcm_vhost.c | 236 +-
 drivers/vhost/tcm_vhost.h |  10 ++
 2 files changed, 242 insertions(+), 4 deletions(-)

-- 
1.8.1.4



[PATCH v5 1/3] tcm_vhost: Introduce tcm_vhost_check_feature()

2013-04-09 Thread Asias He
This helper is useful to check if a feature is supported.

Signed-off-by: Asias He as...@redhat.com
Reviewed-by: Stefan Hajnoczi stefa...@redhat.com
---
 drivers/vhost/tcm_vhost.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index c127731..f0189bc 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -99,6 +99,18 @@ static int iov_num_pages(struct iovec *iov)
   ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
 }
 
+static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
+{
+   bool ret = false;
+
+   mutex_lock(&vs->dev.mutex);
+   if (vhost_has_feature(&vs->dev, feature))
+   ret = true;
+   mutex_unlock(&vs->dev.mutex);
+
+   return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
return 1;
-- 
1.8.1.4



[PATCH v5 2/3] tcm_vhost: Add helper to check if endpoint is setup

2013-04-09 Thread Asias He
Signed-off-by: Asias He as...@redhat.com
---
 drivers/vhost/tcm_vhost.c | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index f0189bc..7069881 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
return ret;
 }
 
+static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq)
+{
+   bool ret = false;
+
+   /*
+    * We can handle the vq only after the endpoint is setup by calling the
+    * VHOST_SCSI_SET_ENDPOINT ioctl.
+    *
+    * TODO: Check that we are running from vhost_worker which acts
+    * as read-side critical section for vhost kind of RCU.
+    * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h
+    */
+   if (rcu_dereference_check(vq->private_data, 1))
+   ret = true;
+
+   return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
return 1;
-- 
1.8.1.4



[PATCH v5 3/3] tcm_vhost: Add hotplug/hotunplug support

2013-04-09 Thread Asias He
In commit 365a7150094 ("[SCSI] virtio-scsi: hotplug support for
virtio-scsi"), hotplug support is added to virtio-scsi.

This patch adds hotplug and hotunplug support to tcm_vhost.

You can create or delete a LUN in targetcli to hotplug or hotunplug a
LUN in guest.

Changes in v5:
- Switch vs_events_nr from u64 to int
- Set vs->vs_events_dropped flag in tcm_vhost_allocate_evt
- Do not nest dev mutex within vq mutex
- Use vs_events_lock to protect vs_events_dropped and vs_events_nr
- Rebase to target/master

Changes in v4:
- Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt
- Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick

Changes in v3:
- Separate the bug fix to another thread

Changes in v2:
- Remove code duplication in tcm_vhost_{hotplug,hotunplug}
- Fix racing of vs_events_nr
- Add flush fix patch to this series

Signed-off-by: Asias He as...@redhat.com
Reviewed-by: Stefan Hajnoczi stefa...@redhat.com
---
 drivers/vhost/tcm_vhost.c | 206 +-
 drivers/vhost/tcm_vhost.h |  10 +++
 2 files changed, 212 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 7069881..3351ed3 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -66,11 +66,13 @@ enum {
  * TODO: debug and remove the workaround.
  */
 enum {
-   VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)
+   VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) |
+ (1ULL << VIRTIO_SCSI_F_HOTPLUG)
 };
 
 #define VHOST_SCSI_MAX_TARGET  256
 #define VHOST_SCSI_MAX_VQ  128
+#define VHOST_SCSI_MAX_EVENT   128
 
 struct vhost_scsi {
/* Protected by vhost_scsi->dev.mutex */
@@ -82,6 +84,13 @@ struct vhost_scsi {
 
struct vhost_work vs_completion_work; /* cmd completion work item */
struct llist_head vs_completion_list; /* cmd completion queue */
+
+   struct vhost_work vs_event_work; /* evt injection work item */
+   struct llist_head vs_event_list; /* evt injection queue */
+
+   struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */
+   bool vs_events_dropped; /* any missed events */
+   int vs_events_nr; /* num of pending events */
 };
 
 /* Local pointer to allocated TCM configfs fabric module */
@@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq)
return ret;
 }
 
+static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs)
+{
+   bool ret;
+
+   mutex_lock(&vs->vs_events_lock);
+   ret = vs->vs_events_dropped;
+   mutex_unlock(&vs->vs_events_lock);
+
+   return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
return 1;
@@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd)
return 0;
 }
 
+static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt)
+{
+   mutex_lock(&vs->vs_events_lock);
+   vs->vs_events_nr--;
+   kfree(evt);
+   mutex_unlock(&vs->vs_events_lock);
+}
+
+static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
+   u32 event, u32 reason)
+{
+   struct tcm_vhost_evt *evt;
+
+   mutex_lock(&vs->vs_events_lock);
+   if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) {
+   vs->vs_events_dropped = true;
+   mutex_unlock(&vs->vs_events_lock);
+   return NULL;
+   }
+
+   evt = kzalloc(sizeof(*evt), GFP_KERNEL);
+   if (evt) {
+   evt->event.event = event;
+   evt->event.reason = reason;
+   vs->vs_events_nr++;
+   }
+   mutex_unlock(&vs->vs_events_lock);
+
+   return evt;
+}
+
 static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd)
 {
struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd;
@@ -397,6 +448,74 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd)
kfree(tv_cmd);
 }
 
+static void tcm_vhost_do_evt_work(struct vhost_scsi *vs,
+   struct virtio_scsi_event *event)
+{
+   struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT];
+   struct virtio_scsi_event __user *eventp;
+   unsigned out, in;
+   int head, ret;
+
+   if (!tcm_vhost_check_endpoint(vq))
+   return;
+
+   mutex_lock(&vs->vs_events_lock);
+   mutex_lock(&vq->mutex);
+again:
+   vhost_disable_notify(&vs->dev, vq);
+   head = vhost_get_vq_desc(&vs->dev, vq, vq->iov,
+   ARRAY_SIZE(vq->iov), &out, &in,
+   NULL, NULL);
+   if (head < 0) {
+   vs->vs_events_dropped = true;
+   goto out;
+   }
+   if (head == vq->num) {
+   if (vhost_enable_notify(&vs->dev, vq))
+   goto again;
+   vs->vs_events_dropped = true;
+   goto out;
+   }
+
+   if (vq->iov[out].iov_len != sizeof(struct virtio_scsi_event)) {
+   vq_err(vq, "Expecting virtio_scsi_event, got %zu 

[PATCH] tcm_vhost: Fix tv_cmd leak in vhost_scsi_handle_vq

2013-04-09 Thread Asias He
If we fail to submit the allocated tv_cmd to tcm_vhost_submission_work,
we will leak the tv_cmd. Free tv_cmd on the failure path.

Signed-off-by: Asias He as...@redhat.com
---
 drivers/vhost/tcm_vhost.c | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 3351ed3..1f9116c 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -860,7 +860,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
vq_err(vq, "Expecting virtio_scsi_cmd_resp, got %zu"
 " bytes, out: %d, in: %d\n",
vq->iov[out].iov_len, out, in);
-   break;
+   goto err;
}
 
tv_cmd->tvc_resp = vq->iov[out].iov_base;
@@ -882,7 +882,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
 " exceeds SCSI_MAX_VARLEN_CDB_SIZE: %d\n",
scsi_command_size(tv_cmd->tvc_cdb),
TCM_VHOST_MAX_CDB_SIZE);
-   break; /* TODO */
+   goto err;
}
tv_cmd->tvc_lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
 
@@ -895,7 +895,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
data_direction == DMA_TO_DEVICE);
if (unlikely(ret)) {
vq_err(vq, "Failed to map iov to sgl\n");
-   break; /* TODO */
+   goto err;
}
}
 
@@ -916,6 +916,11 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
}
 
mutex_unlock(&vq->mutex);
+   return;
+
+err:
+   vhost_scsi_free_cmd(tv_cmd);
+   mutex_unlock(&vq->mutex);
 }
 
 static void vhost_scsi_ctl_handle_kick(struct vhost_work *work)
-- 
1.8.1.4



Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread Eric W. Biederman
H. Peter Anvin h...@zytor.com writes:

 On 04/08/2013 03:43 PM, Kees Cook wrote:
 This makes the IDT unconditionally read-only. This primarily removes
 the IDT from being a target for arbitrary memory write attacks. It has
 an added benefit of also not leaking (via the sidt instruction) the
 kernel base offset, if it has been relocated.
 
 Signed-off-by: Kees Cook keesc...@chromium.org
 Cc: Eric Northup digitale...@google.com

 Also, tglx: does this interfere with your per-cpu IDT efforts?

Given that we don't change any IDT entries why would anyone want a
per-cpu IDT?  The cache lines should easily be shared across all
processors.

Or are there some giant NUMA machines that trigger cache misses when
accessing the IDT and the penalty for pulling the cache line across
the NUMA fabric is prohibitive?

Eric


Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread Thomas Gleixner
On Mon, 8 Apr 2013, H. Peter Anvin wrote:

 On 04/08/2013 03:43 PM, Kees Cook wrote:
  This makes the IDT unconditionally read-only. This primarily removes
  the IDT from being a target for arbitrary memory write attacks. It has
  an added benefit of also not leaking (via the sidt instruction) the
  kernel base offset, if it has been relocated.
  
  Signed-off-by: Kees Cook keesc...@chromium.org
  Cc: Eric Northup digitale...@google.com
 
 Also, tglx: does this interfere with your per-cpu IDT efforts?

I don't think so. And it's on the backburner at the moment.

Thanks,

tglx


[PATCH v2] x86: use fixed read-only IDT

2013-04-09 Thread Kees Cook
Make a copy of the IDT (as seen via the sidt instruction) read-only.
This primarily removes the IDT from being a target for arbitrary memory
write attacks, and has the added benefit of also not leaking the kernel
base offset, if it has been relocated.

Signed-off-by: Kees Cook keesc...@chromium.org
Cc: Eric Northup digitale...@google.com
---
v2:
 - clarify commit and comments
---
 arch/x86/include/asm/fixmap.h |4 +---
 arch/x86/kernel/cpu/intel.c   |   18 +-
 arch/x86/kernel/traps.c   |8 
 arch/x86/xen/mmu.c|4 +---
 4 files changed, 11 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index a09c285..51b9e32 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -104,9 +104,7 @@ enum fixed_addresses {
FIX_LI_PCIA,/* Lithium PCI Bridge A */
FIX_LI_PCIB,/* Lithium PCI Bridge B */
 #endif
-#ifdef CONFIG_X86_F00F_BUG
-   FIX_F00F_IDT,   /* Virtual mapping for IDT */
-#endif
+   FIX_RO_IDT, /* Virtual mapping for read-only IDT */
 #ifdef CONFIG_X86_CYCLONE_TIMER
FIX_CYCLONE_TIMER, /*cyclone timer register*/
 #endif
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 1905ce9..7170024 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -164,20 +164,6 @@ int __cpuinit ppro_with_ram_bug(void)
return 0;
 }
 
-#ifdef CONFIG_X86_F00F_BUG
-static void __cpuinit trap_init_f00f_bug(void)
-{
-   __set_fixmap(FIX_F00F_IDT, __pa_symbol(idt_table), PAGE_KERNEL_RO);
-
-   /*
-* Update the IDT descriptor and reload the IDT so that
-* it uses the read-only mapped virtual address.
-*/
-   idt_descr.address = fix_to_virt(FIX_F00F_IDT);
-   load_idt(&idt_descr);
-}
-#endif
-
 static void __cpuinit intel_smp_check(struct cpuinfo_x86 *c)
 {
/* calling is from identify_secondary_cpu() ? */
@@ -206,8 +192,7 @@ static void __cpuinit intel_workarounds(struct cpuinfo_x86 
*c)
/*
 * All current models of Pentium and Pentium with MMX technology CPUs
 * have the F0 0F bug, which lets nonprivileged users lock up the
-* system.
-* Note that the workaround only should be initialized once...
+* system. Announce that the fault handler will be checking for it.
 */
c->f00f_bug = 0;
if (!paravirt_enabled() && c->x86 == 5) {
@@ -215,7 +200,6 @@ static void __cpuinit intel_workarounds(struct cpuinfo_x86 
*c)
 
c->f00f_bug = 1;
if (!f00f_workaround_enabled) {
-   trap_init_f00f_bug();
printk(KERN_NOTICE "Intel Pentium with F0 0F bug - "
"workaround enabled.\n");
f00f_workaround_enabled = 1;
}
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 68bda7a..a2a9b78 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -753,6 +753,14 @@ void __init trap_init(void)
 #endif
 
/*
+* Set the IDT descriptor to a fixed read-only location, so that the
+* sidt instruction will not leak the location of the kernel, and
+* to defend the IDT against arbitrary memory write vulnerabilities.
+* It will be reloaded in cpu_init() */
+   __set_fixmap(FIX_RO_IDT, __pa_symbol(idt_table), PAGE_KERNEL_RO);
+   idt_descr.address = fix_to_virt(FIX_RO_IDT);
+
+   /*
 * Should be a barrier for any external CPU state:
 */
cpu_init();
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 6afbb2c..8bc4dec 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2039,9 +2039,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
switch (idx) {
case FIX_BTMAP_END ... FIX_BTMAP_BEGIN:
-#ifdef CONFIG_X86_F00F_BUG
-   case FIX_F00F_IDT:
-#endif
+   case FIX_RO_IDT:
 #ifdef CONFIG_X86_32
case FIX_WP_TEST:
case FIX_VDSO:
-- 
1.7.9.5


-- 
Kees Cook
Chrome OS Security


Re: [kernel-hardening] Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread Kees Cook
On Tue, Apr 9, 2013 at 2:23 AM, Thomas Gleixner t...@linutronix.de wrote:
 On Mon, 8 Apr 2013, H. Peter Anvin wrote:

 On 04/08/2013 03:43 PM, Kees Cook wrote:
  This makes the IDT unconditionally read-only. This primarily removes
  the IDT from being a target for arbitrary memory write attacks. It has
  an added benefit of also not leaking (via the sidt instruction) the
  kernel base offset, if it has been relocated.
 
  Signed-off-by: Kees Cook keesc...@chromium.org
  Cc: Eric Northup digitale...@google.com

 Also, tglx: does this interfere with your per-cpu IDT efforts?

 I don't think so. And it's on the backburner at the moment.

What would be a good way to do something similar for the GDT? sgdt
leaks GDT location as well, and even though it's percpu, it should be
trivial to figure out a kernel base address, IIUC.

$ ./sgdt
88001fc04000
# cat /sys/kernel/debug/kernel_page_tables
...
---[ Low Kernel Mapping ]---
...
0x880001e0-0x88001fe0 480M RW PSE GLB NX pmd

With the IDT patch, things look good for sidt:

$ ./sidt
ff579000
# cat /sys/kernel/debug/kernel_page_tables
...
---[ End Modules ]---
0xff579000-0xff57a000   4K ro GLB NX pte

Can we create a RO fixed per-cpu area?

-Kees

--
Kees Cook
Chrome OS Security


Re: [kernel-hardening] Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread H. Peter Anvin
On 04/09/2013 11:22 AM, Kees Cook wrote:
 
 $ ./sgdt
 88001fc04000
 # cat /sys/kernel/debug/kernel_page_tables
 ...
 ---[ Low Kernel Mapping ]---
 ...
 0x880001e0-0x88001fe0 480M RW PSE GLB NX 
 pmd
 

That is the 1:1 memory map area...

-hpa
-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.



Re: [kernel-hardening] Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread Kees Cook
On Tue, Apr 9, 2013 at 11:26 AM, H. Peter Anvin h...@zytor.com wrote:
 On 04/09/2013 11:22 AM, Kees Cook wrote:

 $ ./sgdt
 88001fc04000
 # cat /sys/kernel/debug/kernel_page_tables
 ...
 ---[ Low Kernel Mapping ]---
 ...
 0x880001e0-0x88001fe0 480M RW PSE GLB NX 
 pmd


 That is the 1:1 memory map area...

Meaning what?

-Kees

--
Kees Cook
Chrome OS Security


Re: [kernel-hardening] Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread H. Peter Anvin
On 04/09/2013 11:31 AM, Kees Cook wrote:
 ...
 0x880001e0-0x88001fe0 480M RW PSE GLB 
 NX pmd


 That is the 1:1 memory map area...
 
 Meaning what?
 
 -Kees
 

That's the area in which we just map 1:1 to memory.  Anything allocated
with e.g. kmalloc() ends up with those addresses.

-hpa


-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.



Re: [kernel-hardening] Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread Kees Cook
On Tue, Apr 9, 2013 at 11:39 AM, H. Peter Anvin h...@zytor.com wrote:
 On 04/09/2013 11:31 AM, Kees Cook wrote:
 ...
 0x880001e0-0x88001fe0 480M RW PSE GLB 
 NX pmd


 That is the 1:1 memory map area...

 Meaning what?

 -Kees


 That's the area in which we just map 1:1 to memory.  Anything allocated
 with e.g. kmalloc() ends up with those addresses.

Ah-ha! Yes, I see now when comparing the debug/kernel_page_tables
reports. It's just the High Kernel Mapping that we care about.
Addresses outside that range are less of a leak. Excellent, then GDT
may not be a problem. Whew.

Does the v2 IDT patch look okay, BTW?

-Kees

--
Kees Cook
Chrome OS Security


Re: [kernel-hardening] Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread Kees Cook
On Tue, Apr 9, 2013 at 11:50 AM, H. Peter Anvin h...@zytor.com wrote:
 On 04/09/2013 11:46 AM, Kees Cook wrote:

 Ah-ha! Yes, I see now when comparing the debug/kernel_page_tables
 reports. It's just the High Kernel Mapping that we care about.
 Addresses outside that range are less of a leak. Excellent, then GDT
 may not be a problem. Whew.


 It does beg the question if we need to randomize kmalloc... which could
 have issues by itself.

Agreed, but this should be a separate issue. As is the fact that GDT
is writable and a discoverable target.

-Kees

--
Kees Cook
Chrome OS Security


Re: [kernel-hardening] Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread H. Peter Anvin
On 04/09/2013 11:46 AM, Kees Cook wrote:
 
 Ah-ha! Yes, I see now when comparing the debug/kernel_page_tables
 reports. It's just the High Kernel Mapping that we care about.
 Addresses outside that range are less of a leak. Excellent, then GDT
 may not be a problem. Whew.
 

It does beg the question if we need to randomize kmalloc... which could
have issues by itself.

-hpa



-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.



Re: [kernel-hardening] Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread H. Peter Anvin
On 04/09/2013 11:54 AM, Eric Northup wrote:
 
 The GDT is a problem if the address returned by 'sgdt' is
 kernel-writable - it doesn't necessarily reveal the random offset, but
 I'm pretty sure that writing to the GDT could cause privilege
 escalation.
 

That is a pretty safe assumption...

-hpa




[PATCHv2 virtio-next] remoteproc: Add support for host virtio rings (vringh)

2013-04-09 Thread Sjur Brændeland
Implement the vringh callback functions in order
to manage host virtio rings and handle kicks.
This allows a virtio device to request host virtio rings.

Signed-off-by: Sjur Brændeland sjur.brandel...@stericsson.com
---

Hi Ohad and Rusty,

This v2 version is simpler, more readable and verbose (+50 lines)
compared to the previous patch.

This patch probably should go via Rusty's tree due to the
vringh dependencies. Ohad, could you please review this and let
us know what you think?

Thanks,
Sjur

 drivers/remoteproc/remoteproc_virtio.c |  208 +++-
 include/linux/remoteproc.h |   22 
 2 files changed, 225 insertions(+), 5 deletions(-)

diff --git a/drivers/remoteproc/remoteproc_virtio.c 
b/drivers/remoteproc/remoteproc_virtio.c
index afed9b7..d01bec4 100644
--- a/drivers/remoteproc/remoteproc_virtio.c
+++ b/drivers/remoteproc/remoteproc_virtio.c
@@ -41,6 +41,18 @@ static void rproc_virtio_notify(struct virtqueue *vq)
rproc->ops->kick(rproc, notifyid);
 }
 
+/* kick the remote processor, and let it know which vring to poke at */
+static void rproc_virtio_vringh_notify(struct vringh *vrh)
+{
+   struct rproc_vring *rvring = vringh_to_rvring(vrh);
+   struct rproc *rproc = rvring->rvdev->rproc;
+   int notifyid = rvring->notifyid;
+
+   dev_dbg(&rproc->dev, "kicking vq index: %d\n", notifyid);
+
+   rproc->ops->kick(rproc, notifyid);
+}
+
 /**
  * rproc_vq_interrupt() - tell remoteproc that a virtqueue is interrupted
  * @rproc: handle to the remote processor
@@ -60,10 +72,18 @@ irqreturn_t rproc_vq_interrupt(struct rproc *rproc, int notifyid)
dev_dbg(&rproc->dev, "vq index %d is interrupted\n", notifyid);
 
rvring = idr_find(&rproc->notifyids, notifyid);
-   if (!rvring || !rvring-vq)
+   if (!rvring)
return IRQ_NONE;
 
-   return vring_interrupt(0, rvring-vq);
+   if (rvring-rvringh  rvring-rvringh-vringh_cb) {
+   rvring-rvringh-vringh_cb(rvring-rvdev-vdev,
+   rvring-rvringh-vrh);
+   return IRQ_HANDLED;
+   } else if (rvring-vq) {
+   return vring_interrupt(0, rvring-vq);
+   } else {
+   return IRQ_NONE;
+   }
 }
 EXPORT_SYMBOL(rproc_vq_interrupt);
 
@@ -78,7 +98,7 @@ static struct virtqueue *rp_find_vq(struct virtio_device *vdev,
struct rproc_vring *rvring;
struct virtqueue *vq;
void *addr;
-   int len, size, ret;
+   int len, size, ret, i;
 
/* we're temporarily limited to two virtqueues per rvdev */
if (id >= ARRAY_SIZE(rvdev->vring))
@@ -87,11 +107,26 @@ static struct virtqueue *rp_find_vq(struct virtio_device *vdev,
if (!name)
return NULL;
 
-   ret = rproc_alloc_vring(rvdev, id);
+   /* Find available vring for a new vq */
+   for (i = id; i < ARRAY_SIZE(rvdev->vring); i++) {
+   rvring = &rvdev->vring[i];
+
+   /* Calling find_vqs twice is bad */
+   if (rvring->vq)
+   return ERR_PTR(-EINVAL);
+
+   /* Use vring not already in use */
+   if (!rvring->rvringh)
+   break;
+   }
+
+   if (i == ARRAY_SIZE(rvdev->vring))
+   return ERR_PTR(-ENODEV);
+
+   ret = rproc_alloc_vring(rvdev, i);
if (ret)
return ERR_PTR(ret);
 
-   rvring = &rvdev->vring[id];
addr = rvring->va;
len = rvring->len;
 
@@ -222,6 +257,168 @@ static void rproc_virtio_finalize_features(struct virtio_device *vdev)
rvdev->gfeatures = vdev->features[0];
 }
 
+/* Helper function that creates and initializes the host virtio ring */
+static struct vringh *rproc_create_new_vringh(struct rproc_vring *rvring,
+   unsigned int index,
+   vrh_callback_t callback)
+{
+   struct rproc_vringh *rvrh = NULL;
+   struct rproc_vdev *rvdev = rvring->rvdev;
+   int err;
+
+   rvrh = kzalloc(sizeof(*rvrh), GFP_KERNEL);
+   err = -ENOMEM;
+   if (!rvrh)
+   goto err;
+
+   /* initialize the host virtio ring */
+   rvrh->vringh_cb = callback;
+   rvrh->vrh.notify = rproc_virtio_vringh_notify;
+   memset(rvring->va, 0, vring_size(rvring->len, rvring->align));
+   vring_init(&rvrh->vrh.vring, rvring->len, rvring->va, rvring->align);
+
+   /*
+* Create the new vring host, and tell we're not interested in
+* the 'weak' smp barriers, since we're talking with a real device.
+*/
+   err = vringh_init_kern(&rvrh->vrh,
+   rproc_virtio_get_features(&rvdev->vdev),
+   rvring->len,
+   false,
+   rvrh->vrh.vring.desc,
+   rvrh->vrh.vring.avail,
+   rvrh->vrh.vring.used);
+   if (err) {
+   

Re: [kernel-hardening] Re: [PATCH] x86: make IDT read-only

2013-04-09 Thread H. Peter Anvin
On 04/09/2013 11:22 AM, Kees Cook wrote:
 
 Can we create a RO fixed per-cpu area?
 

Fixed and percpu are mutually exclusive...

-hpa




Re: [PATCH v2] x86: use fixed read-only IDT

2013-04-09 Thread H. Peter Anvin
On 04/09/2013 09:39 AM, Kees Cook wrote:
 -
  static void __cpuinit intel_smp_check(struct cpuinfo_x86 *c)
  {
   /* calling is from identify_secondary_cpu() ? */
 @@ -206,8 +192,7 @@ static void __cpuinit intel_workarounds(struct 
 cpuinfo_x86 *c)
   /*
* All current models of Pentium and Pentium with MMX technology CPUs
* have the F0 0F bug, which lets nonprivileged users lock up the
 -  * system.
 -  * Note that the workaround only should be initialized once...
 +  * system. Announce that the fault handler will be checking for it.
*/
   c->f00f_bug = 0;
   if (!paravirt_enabled() && c->x86 == 5) {
 @@ -215,7 +200,6 @@ static void __cpuinit intel_workarounds(struct 
 cpuinfo_x86 *c)
  
   c->f00f_bug = 1;
   if (!f00f_workaround_enabled) {
  - trap_init_f00f_bug();
   printk(KERN_NOTICE "Intel Pentium with F0 0F bug - "
  "workaround enabled.\n");
   f00f_workaround_enabled = 1;
   }

Why do we care about this message anymore?  It provides no relevant user
information, the flag itself is already in /proc/cpuinfo, and the
message is likely to be wrong since all it does is look for an Intel CPU
with family == 5.

-hpa




Re: [PATCH v2] x86: use fixed read-only IDT

2013-04-09 Thread Kees Cook
On Tue, Apr 9, 2013 at 5:14 PM, H. Peter Anvin h...@zytor.com wrote:
 On 04/09/2013 09:39 AM, Kees Cook wrote:
 -
  static void __cpuinit intel_smp_check(struct cpuinfo_x86 *c)
  {
   /* calling is from identify_secondary_cpu() ? */
 @@ -206,8 +192,7 @@ static void __cpuinit intel_workarounds(struct 
 cpuinfo_x86 *c)
   /*
* All current models of Pentium and Pentium with MMX technology CPUs
* have the F0 0F bug, which lets nonprivileged users lock up the
 -  * system.
 -  * Note that the workaround only should be initialized once...
 +  * system. Announce that the fault handler will be checking for it.
*/
   c->f00f_bug = 0;
   if (!paravirt_enabled() && c->x86 == 5) {
 @@ -215,7 +200,6 @@ static void __cpuinit intel_workarounds(struct 
 cpuinfo_x86 *c)

   c->f00f_bug = 1;
   if (!f00f_workaround_enabled) {
  - trap_init_f00f_bug();
   printk(KERN_NOTICE "Intel Pentium with F0 0F bug - "
  "workaround enabled.\n");
   f00f_workaround_enabled = 1;
   }

 Why do we care about this message anymore?  It provides no relevant user
 information, the flag itself is already in /proc/cpuinfo, and the
 message is likely to be wrong since all it does is look for an Intel CPU
 with family == 5.

I have no objection to removing it, but with CONFIG_F00F_BUG, the trap
handler does still do some checking, and I figured this message was
there to notify people about it.

-Kees

--
Kees Cook
Chrome OS Security


Readonly GDT

2013-04-09 Thread H. Peter Anvin
OK, thinking about the GDT here.

The GDT is quite small -- 256 bytes on i386, 128 bytes on x86-64.  As
such, we probably don't want to allocate a full page to it for only
that.  This means that in order to create a readonly mapping we have to
pack GDTs from different CPUs together in the same pages, *or* we
tolerate that other things on the same page gets reflected in the same
mapping.

However, the packing solution has the advantage of reducing address
space consumption which matters on 32 bits: even on i386 we can easily
burn a megabyte of address space for 4096 processors, but burning 16
megabytes starts to hurt.

It would be important to measure the performance impact on task switch,
though.

-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.



Re: Readonly GDT

2013-04-09 Thread Steven Rostedt
On Tue, 2013-04-09 at 17:43 -0700, H. Peter Anvin wrote:
 OK, thinking about the GDT here.
 
 The GDT is quite small -- 256 bytes on i386, 128 bytes on x86-64.  As
 such, we probably don't want to allocate a full page to it for only
 that.  This means that in order to create a readonly mapping we have to
 pack GDTs from different CPUs together in the same pages, *or* we
 tolerate that other things on the same page gets reflected in the same
 mapping.

What about grouping via nodes?

 
 However, the packing solution has the advantage of reducing address
 space consumption which matters on 32 bits: even on i386 we can easily
 burn a megabyte of address space for 4096 processors, but burning 16
 megabytes starts to hurt.

Having 4096 32 bit processors, you deserve what you get. ;-)

-- Steve

 
 It would be important to measure the performance impact on task switch,
 though.




Re: Readonly GDT

2013-04-09 Thread H. Peter Anvin
On 04/09/2013 05:53 PM, Steven Rostedt wrote:
 On Tue, 2013-04-09 at 17:43 -0700, H. Peter Anvin wrote:
 OK, thinking about the GDT here.

 The GDT is quite small -- 256 bytes on i386, 128 bytes on x86-64.  As
 such, we probably don't want to allocate a full page to it for only
 that.  This means that in order to create a readonly mapping we have to
 pack GDTs from different CPUs together in the same pages, *or* we
 tolerate that other things on the same page gets reflected in the same
 mapping.
 
 What about grouping via nodes?
 

Would be nicer for locality, although probably adds [even] more complexity.

We don't really care about 32-bit NUMA anymore -- it keeps getting
suggested for deletion, even.  For 64-bit it might make sense to just
reflect out of the percpu area even though it munches address space.


 However, the packing solution has the advantage of reducing address
 space consumption which matters on 32 bits: even on i386 we can easily
 burn a megabyte of address space for 4096 processors, but burning 16
 megabytes starts to hurt.
 
 Having 4096 32 bit processors, you deserve what you get. ;-)
 

Well, the main problem is that it might get difficult to make this a
runtime thing; it more likely ends up being a compile-time bit.

-hpa




[PATCH 0/3] tcm_vhost fix cmd leak and bad target

2013-04-09 Thread Asias He
Asias He (3):
  tcm_vhost: Fix tv_cmd leak in vhost_scsi_handle_vq
  tcm_vhost: Add vhost_scsi_send_bad_target() helper
  tcm_vhost: Send bad target to guest when cmd fails

 drivers/vhost/tcm_vhost.c | 44 ++++++++++++++++++++++++++++----------------
 1 file changed, 28 insertions(+), 16 deletions(-)

-- 
1.8.1.4



[PATCH 2/3] tcm_vhost: Add vhost_scsi_send_bad_target() helper

2013-04-09 Thread Asias He
Factor out the send-bad-target code so it can be shared with other use cases.

Signed-off-by: Asias He as...@redhat.com
---
 drivers/vhost/tcm_vhost.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index e8d1a1f..1c719ed 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -569,6 +569,23 @@ static void tcm_vhost_submission_work(struct work_struct *work)
}
 }
 
+static void vhost_scsi_send_bad_target(struct vhost_scsi *vs,
+   struct vhost_virtqueue *vq, int head, unsigned out)
+{
+   struct virtio_scsi_cmd_resp __user *resp;
+   struct virtio_scsi_cmd_resp rsp;
+   int ret;
+
+   memset(&rsp, 0, sizeof(rsp));
+   rsp.response = VIRTIO_SCSI_S_BAD_TARGET;
+   resp = vq->iov[out].iov_base;
+   ret = __copy_to_user(resp, &rsp, sizeof(rsp));
+   if (!ret)
+   vhost_add_used_and_signal(&vs->dev, vq, head, 0);
+   else
+   pr_err("Faulted on virtio_scsi_cmd_resp\n");
+}
+
 static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
struct vhost_virtqueue *vq)
 {
@@ -664,19 +681,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
 
/* Target does not exist, fail the request */
if (unlikely(!tv_tpg)) {
-   struct virtio_scsi_cmd_resp __user *resp;
-   struct virtio_scsi_cmd_resp rsp;
-
-   memset(&rsp, 0, sizeof(rsp));
-   rsp.response = VIRTIO_SCSI_S_BAD_TARGET;
-   resp = vq->iov[out].iov_base;
-   ret = __copy_to_user(resp, &rsp, sizeof(rsp));
-   if (!ret)
-   vhost_add_used_and_signal(&vs->dev,
- vq, head, 0);
-   else
-   pr_err("Faulted on virtio_scsi_cmd_resp\n");
-
+   vhost_scsi_send_bad_target(vs, vq, head, out);
continue;
}
 
-- 
1.8.1.4



[PATCH 3/3] tcm_vhost: Send bad target to guest when cmd fails

2013-04-09 Thread Asias He
Send a bad target response to the guest in case:
1) we cannot allocate the cmd
2) we fail to submit the cmd

Signed-off-by: Asias He as...@redhat.com
---
 drivers/vhost/tcm_vhost.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 1c719ed..4dc6f2d 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -694,7 +694,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
if (IS_ERR(tv_cmd)) {
vq_err(vq, "vhost_scsi_allocate_cmd failed %ld\n",
PTR_ERR(tv_cmd));
-   break;
+   goto err_cmd;
}
pr_debug("Allocated tv_cmd: %p exp_data_len: %d, data_direction
: %d\n", tv_cmd, exp_data_len, data_direction);
@@ -720,7 +720,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
 " exceeds SCSI_MAX_VARLEN_CDB_SIZE: %d\n",
scsi_command_size(tv_cmd->tvc_cdb),
TCM_VHOST_MAX_CDB_SIZE);
-   goto err;
+   goto err_free;
}
tv_cmd->tvc_lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
 
@@ -733,7 +733,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
data_direction == DMA_TO_DEVICE);
if (unlikely(ret)) {
vq_err(vq, "Failed to map iov to sgl\n");
-   goto err;
+   goto err_free;
}
}
 
@@ -756,8 +756,10 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
mutex_unlock(&vq->mutex);
return;
 
-err:
+err_free:
vhost_scsi_free_cmd(tv_cmd);
+err_cmd:
+   vhost_scsi_send_bad_target(vs, vq, head, out);
mutex_unlock(&vq->mutex);
 }
 
-- 
1.8.1.4



Re: [PATCH] tcm_vhost: Fix tv_cmd leak in vhost_scsi_handle_vq

2013-04-09 Thread Asias He
On Tue, Apr 09, 2013 at 08:46:42AM -0700, Nicholas A. Bellinger wrote:
 On Tue, 2013-04-09 at 17:16 +0800, Asias He wrote:
  If we fail to submit the allocated tv_cmd to tcm_vhost_submission_work,
  we will leak the tv_cmd. Free the tv_cmd on the failure path.
  
  Signed-off-by: Asias He as...@redhat.com
  ---
   drivers/vhost/tcm_vhost.c | 11 ++++++++---
   1 file changed, 8 insertions(+), 3 deletions(-)
  
  diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
  index 3351ed3..1f9116c 100644
  --- a/drivers/vhost/tcm_vhost.c
  +++ b/drivers/vhost/tcm_vhost.c
  @@ -860,7 +860,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
   vq_err(vq, "Expecting virtio_scsi_cmd_resp, got %zu"
   " bytes, out: %d, in: %d\n",
   vq->iov[out].iov_len, out, in);
  -   break;
  +   goto err;
  }
   
   tv_cmd->tvc_resp = vq->iov[out].iov_base;
  @@ -882,7 +882,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
    " exceeds SCSI_MAX_VARLEN_CDB_SIZE: %d\n",
   scsi_command_size(tv_cmd->tvc_cdb),
   TCM_VHOST_MAX_CDB_SIZE);
  -   break; /* TODO */
  +   goto err;
  }
   tv_cmd->tvc_lun = ((v_req.lun[2] << 8) | v_req.lun[3]) & 0x3FFF;
   
  @@ -895,7 +895,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
  data_direction == DMA_TO_DEVICE);
  if (unlikely(ret)) {
   vq_err(vq, "Failed to map iov to sgl\n");
  -   break; /* TODO */
  +   goto err;
  }
  }
   
 
 Mmmm, I think these cases also require a VIRTIO_SCSI_S_BAD_TARGET +
 __copy_to_user + vhost_add_used_and_signal similar to how !tv_tpg is
 handled..  Otherwise virtio-scsi will end up in scsi timeout - abort,
 no..?
 
 Ditto for the vhost_scsi_allocate_cmd failure case..

Sent out new patches.

 vhost-net uses vhost_discard_vq_desc for some failure cases,  is that
 needed here for the failure cases before __copy_from_user is called..?

I don't think it is useful. vhost_discard_vq_desc reverses the effect of
vhost_get_vq_desc. If we put the descriptor back in the queue, we will
still fail on it the next time around.

  @@ -916,6 +916,11 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
  }
   
   mutex_unlock(&vq->mutex);
  +   return;
  +
  +err:
  +   vhost_scsi_free_cmd(tv_cmd);
   +   mutex_unlock(&vq->mutex);
   }
   
   static void vhost_scsi_ctl_handle_kick(struct vhost_work *work)
 
 

-- 
Asias