[Xen-devel] [PATCH v4 1/2] AMD IOMMU: Removing currently non-functioning guest iommu feature
From: Suravee Suthikulpanit

The guest IOMMU feature is currently not functioning. However, the current
guest_iommu_init() causes an issue when it tries to register an MMIO
handler, because it is called via the following code path:

arch/x86/domain.c: arch_domain_create()
  |- drivers/passthrough/iommu.c: iommu_domain_init()
    |- drivers/passthrough/amd/pci_amd_iommu.c: amd_iommu_domain_init()
      |- drivers/passthrough/amd/iommu_guest.c: guest_iommu_init()

At this point, hvm_domain_initialise() has not been called, so
register_mmio_handler() in guest_iommu_init() silently fails.

This patch removes the guest IOMMU feature for now, until we can properly
support it.

Signed-off-by: Suravee Suthikulpanit
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 70b7475..fce9827 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -272,9 +272,6 @@ static int amd_iommu_domain_init(struct domain *d)
     hd->arch.paging_mode = is_hvm_domain(d) ?
         IOMMU_PAGING_MODE_LEVEL_2 : get_paging_mode(max_page);
-
-    guest_iommu_init(d);
-
     return 0;
 }
@@ -474,7 +471,6 @@ static void deallocate_iommu_page_tables(struct domain *d)
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    guest_iommu_destroy(d);
     deallocate_iommu_page_tables(d);
     amd_iommu_flush_all_pages(d);
 }
--
1.9.1

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
[Xen-devel] [PATCH v4 2/2] x86/hvm: Add check when registering io handler
From: Suravee Suthikulpanit

At the time of registering an HVM I/O handler, the HVM domain might not
have been initialized yet, which means hvm_domain.io_handler would be
NULL. In hvm_next_io_handler(), this should be asserted.

Reviewed-by: Paul Durrant
Signed-off-by: Suravee Suthikulpanit
---
 xen/arch/x86/hvm/intercept.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index fc757d0..bf141c9 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -258,6 +258,8 @@ struct hvm_io_handler *hvm_next_io_handler(struct domain *d)
 {
     unsigned int i = d->arch.hvm_domain.io_handler_count++;
 
+    ASSERT(d->arch.hvm_domain.io_handler);
+
     if ( i == NR_IO_HANDLERS )
     {
         domain_crash(d);
--
1.9.1
[Xen-devel] [PATCH v4 0/2] Fix xen crash when starting HVM guest due to missing io handler
From: Suravee Suthikulpanit

Hi All,

Changes from V3:
* Remove the calls to guest_iommu_init()/destroy() for now, since the
  guest IOMMU feature is not functioning and causes breakage.
* Do not change the ordering of iommu_domain_init() and hvm_domain_init()
  for now, until we agree on the proper ordering.

OVERVIEW:
On systems with IOMMU v2 enabled, the hypervisor crashes when trying to
start an HVM guest. Investigation shows that guest_iommu_init() is called
before the HVM domain is initialized. It then calls
register_mmio_handler(), causing hvm_next_io_handler() to increment
io_handler_count. However, the registration fails silently and leaves the
I/O handler uninitialized. Later, hvm_find_io_handler() iterates through
the registered handlers and ends up dereferencing NULL pointers.

This patch series proposes a workaround for this issue.

Thanks,
Suravee

Suravee Suthikulpanit (2):
  AMD IOMMU: Removing currently non-functioning guest iommu feature
  x86/hvm: Add check when registering io handler

 xen/arch/x86/hvm/intercept.c                | 2 ++
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 4 ----
 2 files changed, 2 insertions(+), 4 deletions(-)
--
1.9.1
[Xen-devel] [PATCH v4] AMD IOMMU: Introduce support for IVHD block type 11h
From: Suravee Suthikulpanit

Along with the IVHD block type 10h, newer AMD platforms also come with
type 11h, which is a superset of the older one. Having multiple IVHD
block types in the same platform allows newer systems to remain backward
compatible with existing drivers. The driver should only parse the
highest-level (newest) type of IVHD block that it can support. However,
the current driver returns an error when it encounters an unknown IVHD
block type, which causes it to unnecessarily fail IOMMU initialization on
new systems.

This patch introduces new logic that scans through the IVRS table looking
for the highest-level supported IVHD block type. It also adds support for
the new IVHD block type 11h.

More information about the IVHD type 11h can be found in the AMD I/O
Virtualization Technology (IOMMU) Specification rev 2.62:
http://support.amd.com/TechDocs/48882_IOMMU.pdf

Reviewed-by: Jan Beulich
Reviewed-by: Andrew Cooper
Signed-off-by: Suravee Suthikulpanit
---
Changes from V3:
* Minor stray space changes.
* Get rid of the unnecessary default case in the switch statement.
* Add Reviewed-by tags from Jan and Andrew.

Thanks all for the review comments.

Suravee

 xen/drivers/passthrough/amd/iommu_acpi.c      | 134 +-
 xen/drivers/passthrough/amd/iommu_init.c      |  10 +-
 xen/include/acpi/actbl2.h                     |   5 +-
 xen/include/asm-x86/amd-iommu.h               |   1 +
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   1 +
 5 files changed, 106 insertions(+), 45 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c
index 603c02e..200db2d 100644
--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -821,13 +821,27 @@ static u16 __init parse_ivhd_device_special(
     return dev_length;
 }
 
+static inline size_t
+get_ivhd_header_size(const struct acpi_ivrs_hardware *ivhd_block)
+{
+    switch ( ivhd_block->header.type )
+    {
+    case ACPI_IVRS_TYPE_HARDWARE:
+        return offsetof(struct acpi_ivrs_hardware, efr_image);
+    case ACPI_IVRS_TYPE_HARDWARE_11H:
+        return sizeof(struct acpi_ivrs_hardware);
+    }
+    return 0;
+}
+
 static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
 {
     const union acpi_ivhd_device *ivhd_device;
     u16 block_length, dev_length;
+    size_t hdr_size = get_ivhd_header_size(ivhd_block);
     struct amd_iommu *iommu;
 
-    if ( ivhd_block->header.length < sizeof(*ivhd_block) )
+    if ( ivhd_block->header.length < hdr_size )
     {
         AMD_IOMMU_DEBUG("IVHD Error: Invalid Block Length!\n");
         return -ENODEV;
@@ -845,7 +859,7 @@ static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
     }
 
     /* parse Device Entries */
-    block_length = sizeof(*ivhd_block);
+    block_length = hdr_size;
     while ( ivhd_block->header.length >=
             (block_length + sizeof(struct acpi_ivrs_de_header)) )
     {
@@ -914,34 +928,6 @@ static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
     return 0;
 }
 
-static int __init parse_ivrs_block(const struct acpi_ivrs_header *ivrs_block)
-{
-    const struct acpi_ivrs_hardware *ivhd_block;
-    const struct acpi_ivrs_memory *ivmd_block;
-
-    switch ( ivrs_block->type )
-    {
-    case ACPI_IVRS_TYPE_HARDWARE:
-        ivhd_block = container_of(ivrs_block, const struct acpi_ivrs_hardware,
-                                  header);
-        return parse_ivhd_block(ivhd_block);
-
-    case ACPI_IVRS_TYPE_MEMORY_ALL:
-    case ACPI_IVRS_TYPE_MEMORY_ONE:
-    case ACPI_IVRS_TYPE_MEMORY_RANGE:
-    case ACPI_IVRS_TYPE_MEMORY_IOMMU:
-        ivmd_block = container_of(ivrs_block, const struct acpi_ivrs_memory,
-                                  header);
-        return parse_ivmd_block(ivmd_block);
-
-    default:
-        AMD_IOMMU_DEBUG("IVRS Error: Invalid Block Type!\n");
-        return -ENODEV;
-    }
-
-    return 0;
-}
-
 static void __init dump_acpi_table_header(struct acpi_table_header *table)
 {
     int i;
@@ -978,6 +964,25 @@ static void __init dump_acpi_table_header(struct acpi_table_header *table)
 }
 
+#define to_ivhd_block(hdr) \
+    container_of(hdr, const struct acpi_ivrs_hardware, header)
+#define to_ivmd_block(hdr) \
+    container_of(hdr, const struct acpi_ivrs_memory, header)
+
+static inline bool_t is_ivhd_block(u8 type)
+{
+    return (type == ACPI_IVRS_TYPE_HARDWARE ||
+            type == ACPI_IVRS_TYPE_HARDWARE_11H);
+}
+
+static inline bool_t is_ivmd_block(u8 type)
+{
+    return (type == ACPI_IVRS_TYPE_MEMORY_ALL ||
+            type == ACPI_IVRS_TYPE_MEMORY_ONE ||
+            type == ACPI_IVRS_TYPE_MEMORY_RANGE ||
+            type == ACPI_IVRS_TYPE_MEMORY_IOMMU);
+}
+
 static int __init parse_ivrs_table(struct acpi_table_header *table)
[Xen-devel] [PATCH v3] AMD IOMMU: Introduce support for IVHD block type 11h
From: Suravee Suthikulpanit

Along with the IVHD block type 10h, newer AMD platforms also come with
type 11h, which is a superset of the older one. Having multiple IVHD
block types in the same platform allows newer systems to remain backward
compatible with existing drivers. The driver should only parse the
highest-level (newest) type of IVHD block that it can support. However,
the current driver returns an error when it encounters an unknown IVHD
block type, which causes it to unnecessarily fail IOMMU initialization on
new systems.

This patch introduces new logic that scans through the IVRS table looking
for the highest-level supported IVHD block type. It also adds support for
the new IVHD block type 11h.

More information about the IVHD type 11h can be found in the AMD I/O
Virtualization Technology (IOMMU) Specification rev 2.62:
http://support.amd.com/TechDocs/48882_IOMMU.pdf

Signed-off-by: Suravee Suthikulpanit
---
Changes from V2:
* Remove redundant IVHD_HIGHEST_SUPPORT_TYPE.
* Misc clean-up suggestions from Andrew.

 xen/drivers/passthrough/amd/iommu_acpi.c      | 136 ++-
 xen/drivers/passthrough/amd/iommu_init.c      |  10 +-
 xen/include/acpi/actbl2.h                     |   5 +-
 xen/include/asm-x86/amd-iommu.h               |   1 +
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   1 +
 5 files changed, 108 insertions(+), 45 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c
index 79c1f8c..8bad2bc 100644
--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -821,13 +821,29 @@ static u16 __init parse_ivhd_device_special(
     return dev_length;
 }
 
+static inline size_t
+get_ivhd_header_size(const struct acpi_ivrs_hardware *ivhd_block)
+{
+    switch ( ivhd_block->header.type )
+    {
+    case ACPI_IVRS_TYPE_HARDWARE:
+        return offsetof(struct acpi_ivrs_hardware, efr_image);
+    case ACPI_IVRS_TYPE_HARDWARE_11H:
+        return sizeof(struct acpi_ivrs_hardware);
+    default:
+        break;
+    }
+    return 0;
+}
+
 static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
 {
     const union acpi_ivhd_device *ivhd_device;
     u16 block_length, dev_length;
+    size_t hdr_size = get_ivhd_header_size(ivhd_block);
     struct amd_iommu *iommu;
 
-    if ( ivhd_block->header.length < sizeof(*ivhd_block) )
+    if ( ivhd_block->header.length < hdr_size )
     {
         AMD_IOMMU_DEBUG("IVHD Error: Invalid Block Length!\n");
         return -ENODEV;
@@ -845,7 +861,7 @@ static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
     }
 
     /* parse Device Entries */
-    block_length = sizeof(*ivhd_block);
+    block_length = hdr_size;
     while ( ivhd_block->header.length >=
             (block_length + sizeof(struct acpi_ivrs_de_header)) )
     {
@@ -914,34 +930,6 @@ static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
     return 0;
 }
 
-static int __init parse_ivrs_block(const struct acpi_ivrs_header *ivrs_block)
-{
-    const struct acpi_ivrs_hardware *ivhd_block;
-    const struct acpi_ivrs_memory *ivmd_block;
-
-    switch ( ivrs_block->type )
-    {
-    case ACPI_IVRS_TYPE_HARDWARE:
-        ivhd_block = container_of(ivrs_block, const struct acpi_ivrs_hardware,
-                                  header);
-        return parse_ivhd_block(ivhd_block);
-
-    case ACPI_IVRS_TYPE_MEMORY_ALL:
-    case ACPI_IVRS_TYPE_MEMORY_ONE:
-    case ACPI_IVRS_TYPE_MEMORY_RANGE:
-    case ACPI_IVRS_TYPE_MEMORY_IOMMU:
-        ivmd_block = container_of(ivrs_block, const struct acpi_ivrs_memory,
-                                  header);
-        return parse_ivmd_block(ivmd_block);
-
-    default:
-        AMD_IOMMU_DEBUG("IVRS Error: Invalid Block Type!\n");
-        return -ENODEV;
-    }
-
-    return 0;
-}
-
 static void __init dump_acpi_table_header(struct acpi_table_header *table)
 {
     int i;
@@ -978,6 +966,25 @@ static void __init dump_acpi_table_header(struct acpi_table_header *table)
 }
 
+#define to_ivhd_block(hdr) \
+    container_of(hdr, const struct acpi_ivrs_hardware, header)
+#define to_ivmd_block(hdr) \
+    container_of(hdr, const struct acpi_ivrs_memory, header)
+
+static inline bool_t is_ivhd_block(u8 type)
+{
+    return ( type == ACPI_IVRS_TYPE_HARDWARE ||
+             type == ACPI_IVRS_TYPE_HARDWARE_11H );
+}
+
+static inline bool_t is_ivmd_block(u8 type)
+{
+    return ( type == ACPI_IVRS_TYPE_MEMORY_ALL ||
+             type == ACPI_IVRS_TYPE_MEMORY_ONE ||
+             type == ACPI_IVRS_TYPE_MEMORY_RANGE ||
+             type == ACPI_IVRS_TYPE_MEMORY_IOMMU );
+}
+
 static int __init parse_ivrs_table(struct acpi_table_header *table)
 {
     const struct acpi_ivrs_header *ivrs_block;
@@ -1010,7 +1017,10 @@ static int __init parse_ivrs_table(struct acpi_table_header *table)
[Xen-devel] [PATCH v2] AMD IOMMU: Introduce support for IVHD block type 11h
From: Suravee Suthikulpanit

Along with the IVHD block type 10h, newer AMD platforms also come with
type 11h, which is a superset of the older one. Having multiple IVHD
block types in the same platform allows newer systems to remain backward
compatible with existing drivers. The driver should only parse the
highest-level (newest) type of IVHD block that it can support. However,
the current driver returns an error when it encounters an unknown IVHD
block type, which causes it to unnecessarily fail IOMMU initialization on
new systems.

This patch introduces new logic that scans through the IVRS table looking
for the highest-level supported IVHD block type. It also adds support for
the new IVHD block type 11h.

More information about the IVHD type 11h can be found in the AMD I/O
Virtualization Technology (IOMMU) Specification rev 2.62:
http://support.amd.com/TechDocs/48882_IOMMU.pdf

Signed-off-by: Suravee Suthikulpanit
---
Changes from V1:
* Update get_ivhd_header_size() to use a switch statement.
* Update logic in get_supported_ivhd_type().
* Move the ACPI_FADT_NO_MSI flag check to amd_iommu_init(), since it
  should be checked before any IVRS parsing.
* Clean up various styling issues.

 xen/drivers/passthrough/amd/iommu_acpi.c      | 137 ++-
 xen/drivers/passthrough/amd/iommu_init.c      |  10 +-
 xen/include/acpi/actbl2.h                     |   7 +-
 xen/include/asm-x86/amd-iommu.h               |   1 +
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   1 +
 5 files changed, 111 insertions(+), 45 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c
index 79c1f8c..c4eec50 100644
--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -821,13 +821,33 @@ static u16 __init parse_ivhd_device_special(
     return dev_length;
 }
 
+static inline unsigned int
+get_ivhd_header_size(const struct acpi_ivrs_hardware *ivhd_block)
+{
+    int ret = 0;
+
+    switch ( ivhd_block->header.type )
+    {
+    case ACPI_IVRS_TYPE_HARDWARE:
+        ret = offsetof(struct acpi_ivrs_hardware, efr_image);
+        break;
+    case ACPI_IVRS_TYPE_HARDWARE_11H:
+        ret = sizeof(struct acpi_ivrs_hardware);
+        break;
+    default:
+        break;
+    }
+    return ret;
+}
+
 static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
 {
     const union acpi_ivhd_device *ivhd_device;
     u16 block_length, dev_length;
+    unsigned int hdr_size = get_ivhd_header_size(ivhd_block);
     struct amd_iommu *iommu;
 
-    if ( ivhd_block->header.length < sizeof(*ivhd_block) )
+    if ( ivhd_block->header.length < hdr_size )
     {
         AMD_IOMMU_DEBUG("IVHD Error: Invalid Block Length!\n");
         return -ENODEV;
@@ -845,7 +865,7 @@ static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
     }
 
     /* parse Device Entries */
-    block_length = sizeof(*ivhd_block);
+    block_length = hdr_size;
     while ( ivhd_block->header.length >=
             (block_length + sizeof(struct acpi_ivrs_de_header)) )
     {
@@ -914,34 +934,6 @@ static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
     return 0;
 }
 
-static int __init parse_ivrs_block(const struct acpi_ivrs_header *ivrs_block)
-{
-    const struct acpi_ivrs_hardware *ivhd_block;
-    const struct acpi_ivrs_memory *ivmd_block;
-
-    switch ( ivrs_block->type )
-    {
-    case ACPI_IVRS_TYPE_HARDWARE:
-        ivhd_block = container_of(ivrs_block, const struct acpi_ivrs_hardware,
-                                  header);
-        return parse_ivhd_block(ivhd_block);
-
-    case ACPI_IVRS_TYPE_MEMORY_ALL:
-    case ACPI_IVRS_TYPE_MEMORY_ONE:
-    case ACPI_IVRS_TYPE_MEMORY_RANGE:
-    case ACPI_IVRS_TYPE_MEMORY_IOMMU:
-        ivmd_block = container_of(ivrs_block, const struct acpi_ivrs_memory,
-                                  header);
-        return parse_ivmd_block(ivmd_block);
-
-    default:
-        AMD_IOMMU_DEBUG("IVRS Error: Invalid Block Type!\n");
-        return -ENODEV;
-    }
-
-    return 0;
-}
-
 static void __init dump_acpi_table_header(struct acpi_table_header *table)
 {
     int i;
@@ -978,6 +970,22 @@ static void __init dump_acpi_table_header(struct acpi_table_header *table)
 }
 
+#define to_ivhd_block(hdr) \
+    container_of(hdr, const struct acpi_ivrs_hardware, header)
+#define to_ivmd_block(hdr) \
+    container_of(hdr, const struct acpi_ivrs_memory, header)
+
+#define is_ivhd_block(x) \
+    (( x <= IVHD_HIGHEST_SUPPORT_TYPE ) && \
+     (( x == ACPI_IVRS_TYPE_HARDWARE ) || \
+      ( x == ACPI_IVRS_TYPE_HARDWARE_11H )))
+
+#define is_ivmd_block(x) \
+    (( x == ACPI_IVRS_TYPE_MEMORY_ALL ) || \
+     ( x == ACPI_IVRS_TYPE_MEMORY_ONE ) || \
+     ( x == ACPI_IVRS_TYPE_MEMORY_RANGE ) || \
+     ( x == ACPI_IVRS_TYPE_MEMORY_IOMMU ))
+
 static int __init
[Xen-devel] [PATCH v3 3/3] AMD IOMMU: Check io_handler before registering mmio handler
From: Suravee Suthikulpanit

guest_iommu_init() tries to register an MMIO handler before the HVM
domain is initialized, which causes the registration to fail silently.
This patch adds a sanity check and prints an error message.

Signed-off-by: Suravee Suthikulpanit
---
 xen/drivers/passthrough/amd/iommu_guest.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index f96fbf4..49f00de 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -890,6 +890,12 @@ int guest_iommu_init(struct domain* d)
          !has_viommu(d) )
         return 0;
 
+    if ( d->arch.hvm_domain.io_handler == NULL )
+    {
+        AMD_IOMMU_DEBUG("Error: uninitialized hvm io handler\n");
+        return 1;
+    }
+
     iommu = xzalloc(struct guest_iommu);
     if ( !iommu )
     {
--
1.9.1
[Xen-devel] [PATCH v3 0/3] Fix xen crash when starting HVM guest due to missing io handler
From: Suravee Suthikulpanit

Hi All,

Changes from V2:
* Use an assert instead of a sanity check before the count increment in
  hvm_next_io_handler().
* Postpone iommu_domain_init() and add proper error handling code to
  destroy the hvm domain in case of failure.
* Split the sanity check in guest_iommu_init() out into a separate patch.

OVERVIEW:
On systems with IOMMU v2 enabled, the hypervisor crashes when trying to
start an HVM guest. Investigation shows that guest_iommu_init() is called
before the HVM domain is initialized. It then calls
register_mmio_handler(), causing hvm_next_io_handler() to increment
io_handler_count. However, the registration fails silently and leaves the
I/O handler uninitialized. Later, hvm_find_io_handler() iterates through
the registered handlers and ends up dereferencing NULL pointers.

This patch series proposes a fix for this issue.

Thanks,
Suravee

Suravee Suthikulpanit (3):
  x86/hvm: Add check when registering io handler
  svm: iommu: Only call guest_iommu_init() after the HVM domain is initialized
  AMD IOMMU: Check io_handler before registering mmio handler

 xen/arch/x86/domain.c                     | 9 ++++++---
 xen/arch/x86/hvm/intercept.c              | 2 ++
 xen/drivers/passthrough/amd/iommu_guest.c | 6 ++++++
 3 files changed, 14 insertions(+), 3 deletions(-)
--
1.9.1
[Xen-devel] [PATCH v3 1/3] x86/hvm: Add check when registering io handler
From: Suravee Suthikulpanit

At the time of registering an HVM I/O handler, the HVM domain might not
have been initialized yet, which means hvm_domain.io_handler would be
NULL. In hvm_next_io_handler(), this should be asserted.

Signed-off-by: Suravee Suthikulpanit
---
 xen/arch/x86/hvm/intercept.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index fc757d0..2f8d57f 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -258,6 +258,8 @@ struct hvm_io_handler *hvm_next_io_handler(struct domain *d)
 {
     unsigned int i = d->arch.hvm_domain.io_handler_count++;
 
+    ASSERT( d->arch.hvm_domain.io_handler );
+
     if ( i == NR_IO_HANDLERS )
     {
         domain_crash(d);
--
1.9.1
[Xen-devel] [PATCH v3 2/3] svm: iommu: Only call guest_iommu_init() after the HVM domain is initialized
From: Suravee Suthikulpanit

guest_iommu_init() is currently called via the following code path:

arch/x86/domain.c: arch_domain_create()
  |- drivers/passthrough/iommu.c: iommu_domain_init()
    |- drivers/passthrough/amd/pci_amd_iommu.c: amd_iommu_domain_init()
      |- drivers/passthrough/amd/iommu_guest.c: guest_iommu_init()

At this point, hvm_domain_initialise() has not been called, so
register_mmio_handler() in guest_iommu_init() silently fails.

This patch moves iommu_domain_init() to a later point, after
hvm_domain_initialise().

Signed-off-by: Suravee Suthikulpanit
---
 xen/arch/x86/domain.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 5af2cc5..0260e01 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -642,9 +642,6 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
         if ( (rc = init_domain_irq_mapping(d)) != 0 )
             goto fail;
-
-        if ( (rc = iommu_domain_init(d)) != 0 )
-            goto fail;
     }
 
     spin_lock_init(&d->arch.e820_lock);
@@ -660,6 +657,9 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
         /* 64-bit PV guest by default. */
         d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
 
+    if ( !is_idle_domain(d) && (rc = iommu_domain_init(d)) != 0 )
+        goto fail_1;
+
     /* initialize default tsc behavior in case tools don't */
     tsc_set_info(d, TSC_MODE_DEFAULT, 0UL, 0, 0);
     spin_lock_init(&d->arch.vtsc_lock);
@@ -675,6 +675,9 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
 
     return 0;
 
+ fail_1:
+    if ( has_hvm_container_domain(d) )
+        hvm_domain_destroy(d);
  fail:
     d->is_dying = DOMDYING_dead;
     psr_domain_free(d);
--
1.9.1
[Xen-devel] [PATCH] AMD IOMMU: Introduce support for IVHD block type 11h
From: Suravee Suthikulpanit

Along with the IVHD block type 10h, newer AMD platforms also come with
type 11h, which is a superset of the older one. Having multiple IVHD
block types in the same platform allows newer systems to remain backward
compatible with existing drivers. The driver should only parse the
highest-level (newest) type of IVHD block that it can support. However,
the current driver returns an error when it encounters an unknown IVHD
block type, which causes it to unnecessarily fail IOMMU initialization on
new systems.

This patch introduces new logic that scans through the IVRS table looking
for the highest-level supported IVHD block type. It also adds support for
the new IVHD block type 11h.

More information about the IVHD type 11h can be found in the AMD I/O
Virtualization Technology (IOMMU) Specification rev 2.62:
http://support.amd.com/TechDocs/48882_IOMMU.pdf

Signed-off-by: Suravee Suthikulpanit
---
 xen/drivers/passthrough/amd/iommu_acpi.c      | 133 +-
 xen/drivers/passthrough/amd/iommu_init.c      |   6 +-
 xen/include/acpi/actbl2.h                     |   7 +-
 xen/include/asm-x86/amd-iommu.h               |   1 +
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |   1 +
 5 files changed, 103 insertions(+), 45 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c
index 79c1f8c..2a2d61b 100644
--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -821,13 +821,23 @@ static u16 __init parse_ivhd_device_special(
     return dev_length;
 }
 
+static inline int get_ivhd_header_size(const struct acpi_ivrs_hardware *ivhd_block)
+{
+    if ( ivhd_block->header.type == ACPI_IVRS_TYPE_HARDWARE )
+        return 24;
+    if ( ivhd_block->header.type == ACPI_IVRS_TYPE_HARDWARE_11H )
+        return 40;
+    return 0;
+}
+
 static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
 {
     const union acpi_ivhd_device *ivhd_device;
     u16 block_length, dev_length;
+    int hdr_size = get_ivhd_header_size(ivhd_block);
     struct amd_iommu *iommu;
 
-    if ( ivhd_block->header.length < sizeof(*ivhd_block) )
+    if ( ivhd_block->header.length < hdr_size )
     {
         AMD_IOMMU_DEBUG("IVHD Error: Invalid Block Length!\n");
         return -ENODEV;
@@ -845,7 +855,7 @@ static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
     }
 
     /* parse Device Entries */
-    block_length = sizeof(*ivhd_block);
+    block_length = hdr_size;
     while ( ivhd_block->header.length >=
             (block_length + sizeof(struct acpi_ivrs_de_header)) )
     {
@@ -901,7 +911,7 @@ static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
                 ivhd_block->header.length, block_length, iommu);
             break;
         default:
-            AMD_IOMMU_DEBUG("IVHD Error: Invalid Device Type!\n");
+            AMD_IOMMU_DEBUG("IVHD Error: %s: Invalid Device Type!\n", __func__);
             dev_length = 0;
             break;
         }
@@ -914,34 +924,6 @@ static int __init parse_ivhd_block(const struct acpi_ivrs_hardware *ivhd_block)
     return 0;
 }
 
-static int __init parse_ivrs_block(const struct acpi_ivrs_header *ivrs_block)
-{
-    const struct acpi_ivrs_hardware *ivhd_block;
-    const struct acpi_ivrs_memory *ivmd_block;
-
-    switch ( ivrs_block->type )
-    {
-    case ACPI_IVRS_TYPE_HARDWARE:
-        ivhd_block = container_of(ivrs_block, const struct acpi_ivrs_hardware,
-                                  header);
-        return parse_ivhd_block(ivhd_block);
-
-    case ACPI_IVRS_TYPE_MEMORY_ALL:
-    case ACPI_IVRS_TYPE_MEMORY_ONE:
-    case ACPI_IVRS_TYPE_MEMORY_RANGE:
-    case ACPI_IVRS_TYPE_MEMORY_IOMMU:
-        ivmd_block = container_of(ivrs_block, const struct acpi_ivrs_memory,
-                                  header);
-        return parse_ivmd_block(ivmd_block);
-
-    default:
-        AMD_IOMMU_DEBUG("IVRS Error: Invalid Block Type!\n");
-        return -ENODEV;
-    }
-
-    return 0;
-}
-
 static void __init dump_acpi_table_header(struct acpi_table_header *table)
 {
     int i;
@@ -978,6 +960,21 @@ static void __init dump_acpi_table_header(struct acpi_table_header *table)
 }
 
+#define to_ivhd_block(hdr) \
+    ( container_of(hdr, const struct acpi_ivrs_hardware, header) )
+#define to_ivmd_block(hdr) \
+    ( container_of(hdr, const struct acpi_ivrs_memory, header) )
+
+#define is_ivhd_block(x) \
+    ( x == ACPI_IVRS_TYPE_HARDWARE || \
+      x == ACPI_IVRS_TYPE_HARDWARE_11H )
+
+#define is_ivmd_block(x) \
+    ( x == ACPI_IVRS_TYPE_MEMORY_ALL || \
+      x == ACPI_IVRS_TYPE_MEMORY_ONE || \
+      x == ACPI_IVRS_TYPE_MEMORY_RANGE || \
+      x == ACPI_IVRS_TYPE_MEMORY_IOMMU )
+
 static int __init parse_ivrs_table(struct acpi_table_header *table)
 {
     const struct acpi_ivrs_header
[Xen-devel] [PATCH V2 2/2] svm: iommu: Only call guest_iommu_init() after the HVM domain is initialized
From: Suravee Suthikulpanit

guest_iommu_init() is currently called via the following code path:

arch/x86/domain.c: arch_domain_create()
  |- drivers/passthrough/iommu.c: iommu_domain_init()
    |- drivers/passthrough/amd/pci_amd_iommu.c: amd_iommu_domain_init()
      |- drivers/passthrough/amd/iommu_guest.c: guest_iommu_init()

At this point, hvm_domain_initialise() has not been called, so
register_mmio_handler(), in guest_iommu_init(), silently fails.

This patch moves the calls to guest_iommu_init()/destroy() into
svm_domain_initialise()/_destroy() instead.

Signed-off-by: Suravee Suthikulpanit
---
 xen/arch/x86/hvm/svm/svm.c                  | 10 ++++++++++
 xen/drivers/passthrough/amd/iommu_guest.c   |  6 ++++++
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  4 ----
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index e62dfa1..0c4affc 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -45,6 +45,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 #include <...>
 #include <...>
@@ -1176,11 +1177,20 @@ void svm_host_osvw_init()
 
 static int svm_domain_initialise(struct domain *d)
 {
+    if ( is_hvm_domain(d) )
+        /*
+         * This requires the hvm domain to be
+         * initialized first.
+         */
+        return guest_iommu_init(d);
+
     return 0;
 }
 
 static void svm_domain_destroy(struct domain *d)
 {
+    if ( is_hvm_domain(d) )
+        guest_iommu_destroy(d);
 }
 
 static int svm_vcpu_initialise(struct vcpu *v)
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index b4e75ac..9f26765 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -891,6 +891,12 @@ int guest_iommu_init(struct domain* d)
          !has_viommu(d) )
         return 0;
 
+    if ( d->arch.hvm_domain.io_handler == NULL )
+    {
+        AMD_IOMMU_DEBUG("Error: uninitialized hvm io handler\n");
+        return 1;
+    }
+
     iommu = xzalloc(struct guest_iommu);
     if ( !iommu )
     {
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c1c0b6b..f791618 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -274,9 +274,6 @@ static int amd_iommu_domain_init(struct domain *d)
     hd->arch.paging_mode = is_hvm_domain(d) ?
         IOMMU_PAGING_MODE_LEVEL_2 : get_paging_mode(max_page);
-
-    guest_iommu_init(d);
-
     return 0;
 }
@@ -476,7 +473,6 @@ static void deallocate_iommu_page_tables(struct domain *d)
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    guest_iommu_destroy(d);
     deallocate_iommu_page_tables(d);
     amd_iommu_flush_all_pages(d);
 }
--
1.9.1
[Xen-devel] [PATCH V2 0/2] Fix xen crash when starting HVM guest due to missing io handler
From: Suravee Suthikulpanit

Hi All,

On systems with iommu v2 enabled, the hypervisor crashes when trying to
start up an HVM guest. Investigation shows that guest_iommu_init() is
called before the HVM domain is initialized. It then tries to call
register_mmio_handler(), causing hvm_next_io_handler() to increment
io_handler_count. However, the registration fails silently and leaves
the I/O handler uninitialized. Later, hvm_find_io_handler() is called,
iterates through the registered handlers, and ends up dereferencing a
NULL pointer.

This patch series proposes a fix for this issue.

NOTE: For patch 2, since guest IOMMU emulation is still incomplete, this
change is tentative and will be verified in the future. Alternatively, I
can simply remove the guest_iommu_init()/destroy() calls for now. I will
also be looking at re-enabling this feature in Xen.

Changes from V1:
  * Fix the upper-bound NR_IO_HANDLERS check in patch 1.

Thanks,
Suravee

Suravee Suthikulpanit (2):
  x86/hvm: Add check when register io handler
  svm: iommu: Only call guest_iommu_init() after initialized HVM domain

 xen/arch/x86/hvm/intercept.c                |  6 +++++-
 xen/arch/x86/hvm/svm/svm.c                  | 10 ++++++++++
 xen/drivers/passthrough/amd/iommu_guest.c   |  6 ++++++
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  4 ----
 4 files changed, 21 insertions(+), 5 deletions(-)

--
1.9.1
[Xen-devel] [PATCH V2 1/2] x86/hvm: Add check when register io handler
From: Suravee Suthikulpanit

At the time of registering an HVM I/O handler, the HVM domain might not
have been initialized yet, which means hvm_domain.io_handler would be
NULL. In hvm_next_io_handler(), this should be checked before the
handler array is referenced and returned. Also, io_handler_count should
only be incremented on success. So, this patch adds error handling to
hvm_next_io_handler().

Signed-off-by: Suravee Suthikulpanit
---
 xen/arch/x86/hvm/intercept.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index 7096d74..b837f5f 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -248,7 +248,10 @@ int hvm_io_intercept(ioreq_t *p)

 struct hvm_io_handler *hvm_next_io_handler(struct domain *d)
 {
-    unsigned int i = d->arch.hvm_domain.io_handler_count++;
+    unsigned int i = d->arch.hvm_domain.io_handler_count;
+
+    if ( !d->arch.hvm_domain.io_handler )
+        return NULL;

     if ( i == NR_IO_HANDLERS )
     {
@@ -256,6 +259,7 @@ struct hvm_io_handler *hvm_next_io_handler(struct domain *d)
         return NULL;
     }

+    d->arch.hvm_domain.io_handler_count++;
     return &d->arch.hvm_domain.io_handler[i];
 }
--
1.9.1
[Xen-devel] [PATCH 0/2] Fix xen crash when starting HVM guest due to missing io handler
From: Suravee Suthikulpanit

Hi All,

On systems with iommu v2 enabled, the hypervisor crashes when trying to
start up an HVM guest. Investigation shows that guest_iommu_init() is
called before the HVM domain is initialized. It then tries to call
register_mmio_handler(), causing hvm_next_io_handler() to increment
io_handler_count. However, the registration fails silently and leaves
the I/O handler uninitialized. Later, hvm_find_io_handler() is called,
iterates through the registered handlers, and ends up dereferencing a
NULL pointer.

This patch series proposes a fix for this issue.

NOTE: For patch 2, since guest IOMMU emulation is still incomplete, this
change is tentative and will be verified in the future. Alternatively, I
can simply remove the guest_iommu_init()/destroy() calls for now. I will
also be looking at re-enabling this feature in Xen.

Thanks,
Suravee

Suravee Suthikulpanit (2):
  x86/hvm: Add check when register io handler
  svm: iommu: Only call guest_iommu_init() after initialized HVM domain

 xen/arch/x86/hvm/intercept.c                |  8 ++++++--
 xen/arch/x86/hvm/svm/svm.c                  | 10 ++++++++++
 xen/drivers/passthrough/amd/iommu_guest.c   |  6 ++++++
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  4 ----
 4 files changed, 22 insertions(+), 6 deletions(-)

--
1.9.1
[Xen-devel] [PATCH 2/2] svm: iommu: Only call guest_iommu_init() after initialized HVM domain
From: Suravee Suthikulpanit

The guest_iommu_init() is currently called by the following code path:

    arch/x86/domain.c: arch_domain_create()
      |- drivers/passthrough/iommu.c: iommu_domain_init()
         |- drivers/passthrough/amd/pci_amd_iommu.c: amd_iommu_domain_init()
            |- drivers/passthrough/amd/iommu_guest.c: guest_iommu_init()

At this point, hvm_domain_initialised() has not been called, so
register_mmio_handler(), invoked from guest_iommu_init(), silently fails.

This patch moves the calls to guest_iommu_init()/guest_iommu_destroy()
into svm_domain_initialise()/svm_domain_destroy() instead.

Signed-off-by: Suravee Suthikulpanit
---
 xen/arch/x86/hvm/svm/svm.c                  | 10 ++++++++++
 xen/drivers/passthrough/amd/iommu_guest.c   |  6 ++++++
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  4 ----
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index e62dfa1..0c4affc 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -45,6 +45,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1176,11 +1177,20 @@ void svm_host_osvw_init()

 static int svm_domain_initialise(struct domain *d)
 {
+    if ( is_hvm_domain(d) )
+        /*
+         * This requires the hvm domain to be
+         * initialized first.
+         */
+        return guest_iommu_init(d);
+
     return 0;
 }

 static void svm_domain_destroy(struct domain *d)
 {
+    if ( is_hvm_domain(d) )
+        guest_iommu_destroy(d);
 }

 static int svm_vcpu_initialise(struct vcpu *v)
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index b4e75ac..9f26765 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -891,6 +891,12 @@ int guest_iommu_init(struct domain* d)
          !has_viommu(d) )
         return 0;

+    if ( d->arch.hvm_domain.io_handler == NULL )
+    {
+        AMD_IOMMU_DEBUG("Error: uninitialized hvm io handler\n");
+        return 1;
+    }
+
     iommu = xzalloc(struct guest_iommu);
     if ( !iommu )
     {
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c1c0b6b..f791618 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -274,9 +274,6 @@ static int amd_iommu_domain_init(struct domain *d)
     hd->arch.paging_mode = is_hvm_domain(d) ?
                            IOMMU_PAGING_MODE_LEVEL_2 : get_paging_mode(max_page);
-
-    guest_iommu_init(d);
-
     return 0;
 }

@@ -476,7 +473,6 @@ static void deallocate_iommu_page_tables(struct domain *d)

 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    guest_iommu_destroy(d);
     deallocate_iommu_page_tables(d);
     amd_iommu_flush_all_pages(d);
 }
--
1.9.1
[Xen-devel] [PATCH 1/2] x86/hvm: Add check when register io handler
From: Suravee Suthikulpanit

At the time of registering an HVM I/O handler, the HVM domain might not
have been initialized yet, which means hvm_domain.io_handler would be
NULL. In hvm_next_io_handler(), this should be checked before the
handler array is referenced and returned. Also, io_handler_count should
only be incremented on success. So, this patch adds error handling to
hvm_next_io_handler().

Signed-off-by: Suravee Suthikulpanit
---
 xen/arch/x86/hvm/intercept.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index 7096d74..13b81c9 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -248,14 +248,18 @@ int hvm_io_intercept(ioreq_t *p)

 struct hvm_io_handler *hvm_next_io_handler(struct domain *d)
 {
-    unsigned int i = d->arch.hvm_domain.io_handler_count++;
+    unsigned int i = d->arch.hvm_domain.io_handler_count;

-    if ( i == NR_IO_HANDLERS )
+    if ( !d->arch.hvm_domain.io_handler )
+        return NULL;
+
+    if ( i == NR_IO_HANDLERS - 1 )
     {
         domain_crash(d);
         return NULL;
     }

+    d->arch.hvm_domain.io_handler_count++;
     return &d->arch.hvm_domain.io_handler[i];
 }
--
1.9.1