people committed to this and actively maintain it,
> > who know the RDMA code well.
> >
> > Thanks,
> >
>
> OK, so comments from Yu Zhang and Gonglei? Can we work up a CI test
> along these lines that would ensure that future RDMA breakages are
> detected more easily?
>
> What do you think?
>
> - Michael
>
go in this direction.
Best regards,
Yu Zhang @ IONOS cloud
On Mon, Sep 30, 2024 at 5:00 PM Michael Galaxy wrote:
>
>
> On 9/29/24 17:26, Michael S. Tsirkin wrote:
Sorry for my confusion. I tested TLS migration using RDMA; since RDMA
traffic bypasses the CPU, the TLS setting is not validated. With TCP,
the connection can't be established if the "endpoint" setting is wrong.
On Tue, Jun 11, 2024 at 5:57 PM Yu Zhang wrote:
>
> Hello Daniel an
be fixed from the
security perspective. Thank you very much!
Best regards,
Yu Zhang @ IONOS cloud
On Mon, Aug 21, 2023 at 4:29 PM Yu Zhang wrote:
>
> Hello Daniel,
>
> sorry for my slow reply! I tested the approach you suggested by the
> following way:
>
> On
rce server.
Therefore, I assume that this version is not yet quite capable of
handling heavy load. I'm also looking into the code to see if
anything can be improved. We really appreciate your excellent work!
Best regards,
Yu Zhang @ IONOS cloud
On Wed, Jun 5, 2024 at 12:00 PM Gongle
InfiniBand
network adapters. One of them has:
BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller (rev 01)
The comparison between RDMA and TCP on the same NIC could make more sense.
Best regards,
Yu Zhang @ IONOS Cloud
On Thu, May 16, 2024 at 7:30 PM Michael Galaxy wrote:
>
>
recent Ethernet
NICs and keep you updated.
It seems that the benefits of RDMA become obvious when the VM has
large memory and is running a memory-intensive workload.
Best regards,
Yu Zhang @ IONOS Cloud
On Thu, May 9, 2024 at 4:14 PM Peter Xu wrote:
>
> On Thu, May 09, 2024 at 04:58:34PM
e replaced by TCP/IP for VM migration
at the moment.
Jinpu Wang is the upstream maintainer of RNBD/RTRS. He is experienced in
RDMA programming, and Yu Zhang maintains the downstream QEMU for IONOS
cloud in production.
With the consent and support of Michael Galaxy, who has developed this
featur
ments when necessary
Besides that, a patch is attached to announce this change to the community.
With your generous support, we hope that the development community
will make a positive decision for us.
Kind regards,
Yu Zhang@ IONOS Cloud
On Mon, Apr 29, 2024 at 4:57 PM Peter Xu wrote:
>
> On
> 1) Either a CI test covering at least the major RDMA paths, or at least
> periodically tests for each QEMU release will be needed.
We use a batch of regression test cases for the stack, which cover the
tests for QEMU. I ran such tests for most of the QEMU releases planned as
candidates for rol
us
either to stick with the RDMA migration on an increasingly old
version of QEMU,
or to abandon the currently used RDMA migration.
Best regards,
Yu Zhang
On Mon, Apr 1, 2024 at 9:56 AM Zhijian Li (Fujitsu)
wrote:
>
> Phil,
>
> on 3/29/2024 6:28 PM, Philippe Mathieu-Daudé
>> I have reviewed and tested the change. Have tweaked the commit message
>> accordingly.
>> I hope that's okay with you Yu Zhang :)
It's okay for me. As it's a tiny fix, you may modify it or include it in
your own commits.
Best regards,
Yu Zhang
11.03.2024
On Mon,
Hello Peter and Zhijian,
I created an MR in GitLab. You may have a look and let me know whether
it's fine for you.
https://gitlab.com/qemu/qemu/-/merge_requests/4
Best regards,
Yu Zhang @ IONOS Compute Platform
11.03.2024
On Fri, Mar 8, 2024 at 10:13 AM Yu Zhang wrote:
>
> H
uot; before creating the
application password.
As it's tiny, I attached it to this email this time (not elegant),
so that it can get
included before the soft freeze.
Really sorry for this inconvenience.
--
>From c9fb6a6debfbd5e103aa90f30e9a028316449104 Mon Sep 17 00:00:
r receive their
data has a different type:
typedef struct RDMAContext {
    char *host;
    int port;
    ...
}
Is there any reason to keep "port" like this (char* instead of int) or
can we improve it?
Thank you so much for any of your comments!
Best regards,
Yu Zhang @ IONOS Compute Platform
05.03.2024
"object-del",
"arguments": { "id": "tls0" }}' | sudo nc -U -w 1 ${SOCK}
echo '{"execute":"qmp_capabilities"}{ "execute": "object-add",
"arguments": { "id": "tls0", "qom-type": "
mands?
[1] https://www.qemu.org/docs/master/system/tls.html
[2]
https://www.berrange.com/posts/2016/08/16/improving-qemu-security-part-7-tls-support-for-migration/
Thank you so much for your reply!
Yu Zhang @ Compute Platform IONOS
06.08.2023
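For context, a QMP sequence along the lines being tested might look like the
following sketch; the certificate directory, socket path, and target host are
placeholders, and the tls-creds-x509 properties follow the QEMU TLS docs in [1]:

```shell
# Assumed setup: QMP listening on a UNIX socket ${SOCK}, certificates
# pre-generated under /etc/pki/qemu as described in the QEMU TLS docs.
echo '{"execute":"qmp_capabilities"}
{"execute":"object-add","arguments":{"qom-type":"tls-creds-x509",
 "id":"tls0","dir":"/etc/pki/qemu","endpoint":"client","verify-peer":true}}
{"execute":"migrate-set-parameters","arguments":{"tls-creds":"tls0"}}
{"execute":"migrate","arguments":{"uri":"tcp:target-host:4444"}}' \
  | sudo nc -U -w 1 ${SOCK}
```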
On Tue, Apr 4, 2023 at 2:25 PM Igor Mammedov wrote:
> On Tue, 4 Apr 2023 08:45:54 +0200
> Jinpu Wang wrote:
>
> > Hi Yu,
> >
> > On Mon, Apr 3, 2023 at 6:59 PM Yu Zhang wrote:
> > >
> > > Dear Laurent,
> > >
> > > Thank you for your q
to remove the line below
from acpi_pcihp_device_unplug_request_cb():
pdev->qdev.pending_deleted_event = true;
but you may have a reason to keep it. First of all, I'll open a bug in the
bug tracker and let you know.
Best regards,
Yu Zhang
On Mon, Apr 3, 2023 at 6:32 PM Laurent Vivie
w, the issue was also encountered by libvirt, but they simply ignored it:
https://bugzilla.redhat.com/show_bug.cgi?id=1878659
Hence, a question is: should we have the line below in
acpi_pcihp_device_unplug_request_cb()?
pdev->qdev.pending_deleted_event = true;
It would be great if you as the author could give us a few hints.
Thank you very much for your reply!
Sincerely,
Yu Zhang @ Compute Platform IONOS
03.04.2023
h the release of qemu,
or independent from the release of qemu?
Thank you very much for your reply.
Best regards,
Yu Zhang @ Compute Platform, IONOS
16.01.2023
e power mode of the pinned host cores ?
Thank you very much for your reply.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=670104
[2] https://lists.nongnu.org/archive/html/qemu-devel/2013-08/msg01875.html
Yu Zhang @IONOS Compute Platform
block driver node.
+
+If no node name is specified, it is automatically generated.
+The generated node name is not intended to be predictable
+and changes between QEMU invocations. For the top level, an
+explicit node name must be specified.
+
``media=media``
This
- virtio-disk5 gets the node name from the target via network: #node123
However, on the source server, the node name #node123 can't be identified.
Assumption: the same "device" may have different "node-name" on the source
and target servers. It seems that sending "device" is quite easy, but
sending "device" and translating it to the correct "node-name" is not quite
straightforward.
The "block-export-add" command makes this somewhat unnecessarily complicated.
For this reason, we would like to know:
- whether it's possible not to deprecate the use of "nbd-server-add"
command, or
- whether there is a simpler QMP command for block device migration
Thank you so much for your reply.
Yu Zhang @Compute Platform Team of IONOS SE
05.07.2022
[1] https://wiki.qemu.org/ChangeLog/5.2
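One way to sidestep the auto-generated #node123 problem is to give the disk an
explicit node-name on both servers and export exactly that node. A sketch
(host names, paths, and the export id are made-up placeholders):

```shell
# Target: pin the node name instead of letting QEMU generate one.
qemu-system-x86_64 -incoming defer \
  -blockdev driver=qcow2,node-name=disk0,file.driver=file,file.filename=/vm/disk.qcow2 \
  ...
# Then, over QMP on the target, export that fixed node name:
#   {"execute":"nbd-server-start","arguments":{"addr":{"type":"inet",
#     "data":{"host":"0.0.0.0","port":"10809"}}}}
#   {"execute":"block-export-add","arguments":{"id":"exp0","type":"nbd",
#     "node-name":"disk0","writable":true}}
```

With both sides agreeing on "disk0", no device-to-node-name translation is needed.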
s.
Another aspect I'd like to know: could the multiple processes be live-migrated
just like a single qemu process?
Thank you so much for your time and patience.
Wish you all the best,
Yu Zhang
07.06.2022
On Fri, Jun 3, 2022 at 7:37 PM Jag Raman wrote:
>
>
> On Jun 3, 2022, at
lating multiple devices?
- Can we find more command line examples showing the combination of
orchestrator, remote emulation process, memory-backend-memfd and
x-pci-proxy-dev?
Thank you very much and all the best
Yu Zhang
03.06.2022
[1] https://www.qemu.org/docs/master/system/multi-process.
strator, remote emulation process, memory-backend-memfd and
x-pci-proxy-dev?
Thank you very much
Kind regards,
Yu Zhang @ IONOS Compute Platform
03.06.2022
On Tue, Jan 15, 2019 at 03:13:14PM +0800, Yu Zhang wrote:
> On Fri, Dec 28, 2018 at 11:29:41PM -0200, Eduardo Habkost wrote:
> > On Fri, Dec 28, 2018 at 10:32:59AM +0800, Yu Zhang wrote:
> > > On Thu, Dec 27, 2018 at 01:14:11PM -0200, Eduardo Habkost wrote:
> > > >
On Mon, Jan 14, 2019 at 11:02:28PM -0500, Michael S. Tsirkin wrote:
> On Wed, Dec 12, 2018 at 09:05:37PM +0800, Yu Zhang wrote:
> > Intel's upcoming processors will extend maximum linear address width to
> > 57 bits, and introduce 5-level paging for CPU. Meanwhile, the pl
On Fri, Dec 28, 2018 at 11:29:41PM -0200, Eduardo Habkost wrote:
> On Fri, Dec 28, 2018 at 10:32:59AM +0800, Yu Zhang wrote:
> > On Thu, Dec 27, 2018 at 01:14:11PM -0200, Eduardo Habkost wrote:
> > > On Wed, Dec 26, 2018 at 01:30:00PM +0800, Yu Zhang wrote:
> > > >
On Thu, Dec 27, 2018 at 01:14:11PM -0200, Eduardo Habkost wrote:
> On Wed, Dec 26, 2018 at 01:30:00PM +0800, Yu Zhang wrote:
> > On Tue, Dec 25, 2018 at 11:56:19AM -0500, Michael S. Tsirkin wrote:
> > > On Sat, Dec 22, 2018 at 09:11:26AM +0800, Yu Zhang wrote:
> > > >
On Tue, Dec 25, 2018 at 12:00:08PM -0500, Michael S. Tsirkin wrote:
> On Sat, Dec 22, 2018 at 08:41:37AM +0800, Yu Zhang wrote:
> > On Fri, Dec 21, 2018 at 01:10:13PM -0500, Michael S. Tsirkin wrote:
> > > On Sat, Dec 22, 2018 at 01:34:01AM +0800, Yu Zhang wrote:
> > > &
On Tue, Dec 25, 2018 at 11:56:19AM -0500, Michael S. Tsirkin wrote:
> On Sat, Dec 22, 2018 at 09:11:26AM +0800, Yu Zhang wrote:
> > On Fri, Dec 21, 2018 at 02:02:28PM -0500, Michael S. Tsirkin wrote:
> > > On Sat, Dec 22, 2018 at 01:37:58AM +0800, Yu Zhang wrote:
> > > &
On Fri, Dec 21, 2018 at 02:02:28PM -0500, Michael S. Tsirkin wrote:
> On Sat, Dec 22, 2018 at 01:37:58AM +0800, Yu Zhang wrote:
> > On Fri, Dec 21, 2018 at 12:04:49PM -0500, Michael S. Tsirkin wrote:
> > > On Sat, Dec 22, 2018 at 12:09:44AM +0800, Yu Zhang wrote:
> > >
On Fri, Dec 21, 2018 at 01:10:13PM -0500, Michael S. Tsirkin wrote:
> On Sat, Dec 22, 2018 at 01:34:01AM +0800, Yu Zhang wrote:
> > On Fri, Dec 21, 2018 at 12:15:26PM -0500, Michael S. Tsirkin wrote:
> > > On Sat, Dec 22, 2018 at 12:19:20AM +0800, Yu Zhang wrote:
> > > &
On Fri, Dec 21, 2018 at 12:04:49PM -0500, Michael S. Tsirkin wrote:
> On Sat, Dec 22, 2018 at 12:09:44AM +0800, Yu Zhang wrote:
> > Well, my understanding of the vt-d spec is that the address limitation in
> > DMAR are referring to the same concept of CPUID.MAXPHYSADDR. I do not th
On Fri, Dec 21, 2018 at 12:15:26PM -0500, Michael S. Tsirkin wrote:
> On Sat, Dec 22, 2018 at 12:19:20AM +0800, Yu Zhang wrote:
> > > I'd like to avoid poking at the CPU from VTD code. That's all.
> >
> > OK. So for the short term,how about I remove the chec
On Thu, Dec 20, 2018 at 01:28:21PM -0500, Michael S. Tsirkin wrote:
> On Thu, Dec 20, 2018 at 01:49:21PM +0800, Yu Zhang wrote:
> > On Wed, Dec 19, 2018 at 10:23:44AM -0500, Michael S. Tsirkin wrote:
> > > On Wed, Dec 19, 2018 at 01:57:43PM +0800, Yu Zhang wrote:
> > > &
On Fri, Dec 21, 2018 at 03:13:25PM +0100, Igor Mammedov wrote:
> On Thu, 20 Dec 2018 19:18:01 -0200
> Eduardo Habkost wrote:
>
> > On Wed, Dec 19, 2018 at 11:40:37AM +0100, Igor Mammedov wrote:
> > > On Wed, 19 Dec 2018 10:57:17 +0800
> > > Yu Zhang wrote:
>
On Wed, Dec 19, 2018 at 11:47:23AM -0500, Michael S. Tsirkin wrote:
> On Wed, Dec 19, 2018 at 11:40:37AM +0100, Igor Mammedov wrote:
> > On Wed, 19 Dec 2018 10:57:17 +0800
> > Yu Zhang wrote:
> >
> > > On Tue, Dec 18, 2018 at 03:55:36PM +0100, Igor Mammedov wrote:
On Wed, Dec 19, 2018 at 10:23:44AM -0500, Michael S. Tsirkin wrote:
> On Wed, Dec 19, 2018 at 01:57:43PM +0800, Yu Zhang wrote:
> > On Tue, Dec 18, 2018 at 11:35:34PM -0500, Michael S. Tsirkin wrote:
> > > On Wed, Dec 19, 2018 at 11:40:06AM +0800, Yu Zhang wrote:
> > > &
On Tue, Dec 18, 2018 at 10:12:45PM -0500, Michael S. Tsirkin wrote:
> On Wed, Dec 19, 2018 at 11:03:58AM +0800, Yu Zhang wrote:
> > On Tue, Dec 18, 2018 at 09:58:35AM -0500, Michael S. Tsirkin wrote:
> > > On Tue, Dec 18, 2018 at 03:55:36PM +0100, Igor Mammedov wrote:
> >
On Tue, Dec 18, 2018 at 11:35:34PM -0500, Michael S. Tsirkin wrote:
> On Wed, Dec 19, 2018 at 11:40:06AM +0800, Yu Zhang wrote:
> > On Tue, Dec 18, 2018 at 09:49:02AM -0500, Michael S. Tsirkin wrote:
> > > On Tue, Dec 18, 2018 at 09:45:41PM +0800, Yu Zhang wrote:
> > > &
On Tue, Dec 18, 2018 at 09:49:02AM -0500, Michael S. Tsirkin wrote:
> On Tue, Dec 18, 2018 at 09:45:41PM +0800, Yu Zhang wrote:
> > On Tue, Dec 18, 2018 at 07:43:28AM -0500, Michael S. Tsirkin wrote:
> > > On Tue, Dec 18, 2018 at 06:01:16PM +0800, Yu Zhang wrote:
> > > &
On Tue, Dec 18, 2018 at 09:58:35AM -0500, Michael S. Tsirkin wrote:
> On Tue, Dec 18, 2018 at 03:55:36PM +0100, Igor Mammedov wrote:
> > On Tue, 18 Dec 2018 17:27:23 +0800
> > Yu Zhang wrote:
> >
> > > On Mon, Dec 17, 2018 at 02:17:40PM +0100, Igor Mammedov wrote:
On Tue, Dec 18, 2018 at 03:55:36PM +0100, Igor Mammedov wrote:
> On Tue, 18 Dec 2018 17:27:23 +0800
> Yu Zhang wrote:
>
> > On Mon, Dec 17, 2018 at 02:17:40PM +0100, Igor Mammedov wrote:
> > > On Wed, 12 Dec 2018 21:05:38 +0800
> > > Yu Zhang wrote:
> > &g
On Tue, Dec 18, 2018 at 07:43:28AM -0500, Michael S. Tsirkin wrote:
> On Tue, Dec 18, 2018 at 06:01:16PM +0800, Yu Zhang wrote:
> > On Tue, Dec 18, 2018 at 05:47:14PM +0800, Yu Zhang wrote:
> > > On Mon, Dec 17, 2018 at 02:29:02PM +0100, Igor Mammedov wrote:
> > > >
On Tue, Dec 18, 2018 at 05:47:14PM +0800, Yu Zhang wrote:
> On Mon, Dec 17, 2018 at 02:29:02PM +0100, Igor Mammedov wrote:
> > On Wed, 12 Dec 2018 21:05:39 +0800
> > Yu Zhang wrote:
> >
> > > A 5-level paging capable VM may choose to use 57-bit IOVA address width.
On Mon, Dec 17, 2018 at 02:29:02PM +0100, Igor Mammedov wrote:
> On Wed, 12 Dec 2018 21:05:39 +0800
> Yu Zhang wrote:
>
> > A 5-level paging capable VM may choose to use 57-bit IOVA address width.
> > E.g. guest applications may prefer to use its VA as IOVA when performi
On Mon, Dec 17, 2018 at 02:17:40PM +0100, Igor Mammedov wrote:
> On Wed, 12 Dec 2018 21:05:38 +0800
> Yu Zhang wrote:
>
> > Currently, vIOMMU is using the value of IOVA address width, instead of
> > the host address width(HAW) to calculate the number of reserved bits in
>
Sorry, any comments for this series? Thanks. :)
B.R.
Yu
On 12/12/2018 9:05 PM, Yu Zhang wrote:
Intel's upcoming processors will extend maximum linear address width to
57 bits, and introduce 5-level paging for CPU. Meanwhile, the platform
will also extend the maximum guest address widt
On Wed, Dec 12, 2018 at 12:12:33PM -0200, Eduardo Habkost wrote:
> On Wed, Dec 12, 2018 at 05:08:39PM +0800, Yu Zhang wrote:
> > On Tue, Dec 11, 2018 at 05:25:27PM -0200, Eduardo Habkost wrote:
> > > Some downstream distributions of QEMU set host-phys-bits=on by
> > >
width. When creating a VM with the 5-level paging feature, one can choose to
create a virtual VTD with 5-level paging capability, with configurations
like "-device intel-iommu,x-aw-bits=57".
Signed-off-by: Yu Zhang
Reviewed-by: Peter Xu
---
Cc: "Michael S. Tsirkin"
Cc: Marcel
initialized based on the maximum physical address set for the
guest CPU. Also, definitions such as VTD_HOST_AW_39/48BIT etc. are renamed
for clarity.
Signed-off-by: Yu Zhang
Reviewed-by: Peter Xu
---
Cc: "Michael S. Tsirkin"
Cc: Igor Mammedov
Cc: Marcel Apfelbaum
Cc: Paolo Bonzini
Cc: Richard
cover letter changes (e.g. mention the test
patch in kvm-unit-tests).
- Coding style changes.
---
Cc: "Michael S. Tsirkin"
Cc: Igor Mammedov
Cc: Marcel Apfelbaum
Cc: Paolo Bonzini
Cc: Richard Henderson
Cc: Eduardo Habkost
Cc: Peter Xu
---
Yu Zhang (2):
intel-iommu: differentiate h
On Tue, Dec 11, 2018 at 05:25:27PM -0200, Eduardo Habkost wrote:
> Some downstream distributions of QEMU set host-phys-bits=on by
> default. This worked very well for most use cases, because
> phys-bits really didn't have huge consequences. The only
> difference was on the CPUID data seen by guest
On Wed, Nov 14, 2018 at 02:41:15PM +0800, Peter Xu wrote:
> On Wed, Nov 14, 2018 at 02:04:44PM +0800, Yu Zhang wrote:
> > The 64-bit key used by vtd_lookup_iotlb() to search the cached
> > mappings is formed by combining the GFN, source id and the page
> > level. To cover 57-b
initialized based on the maximum physical address set for the
guest CPU. Also, definitions such as VTD_HOST_AW_39/48BIT etc. are renamed
for clarity.
Signed-off-by: Yu Zhang
---
Cc: "Michael S. Tsirkin"
Cc: Igor Mammedov
Cc: Marcel Apfelbaum
Cc: Paolo Bonzini
Cc: Richard Henderson
Cc: Eduardo H
width. When creating a VM with the 5-level paging feature, one can choose to
create a virtual VTD with 5-level paging capability, with configurations
like "-device intel-iommu,x-aw-bits=57".
Signed-off-by: Yu Zhang
---
Cc: "Michael S. Tsirkin"
Cc: Marcel Apfelbaum
Cc: Paolo
The 64-bit key used by vtd_lookup_iotlb() to search the cached
mappings is formed by combining the GFN, source id and the page
level. To cover 57-bit IOVA, the shift of source id and of page
level need to be enlarged by 9 - the stride of one paging structure
level.
Signed-off-by: Yu Zhang
---
Cc
info.
- Address comments from Peter Xu: only searches for 4K/2M/1G mappings in
iotlb are meaningful.
- Address comments from Peter Xu: cover letter changes (e.g. mention the test
patch in kvm-unit-tests).
- Coding style changes.
Yu Zhang (3):
intel-iommu: differentiate host address width from IOV
On Tue, Nov 13, 2018 at 02:12:17PM +0800, Peter Xu wrote:
> On Tue, Nov 13, 2018 at 01:45:44PM +0800, Yu Zhang wrote:
>
> [...]
>
> > > > Since at it, another thing I thought about is making sure the IOMMU
> > > > capabilities will match between host and gue
On Tue, Nov 13, 2018 at 01:18:54PM +0800, Peter Xu wrote:
> On Mon, Nov 12, 2018 at 08:38:30PM +0800, Yu Zhang wrote:
> > On Mon, Nov 12, 2018 at 05:36:38PM +0800, Peter Xu wrote:
> > > On Mon, Nov 12, 2018 at 05:25:48PM +0800, Yu Zhang wrote:
> > > > On Mon, No
On Tue, Nov 13, 2018 at 01:04:51PM +0800, Peter Xu wrote:
> On Tue, Nov 13, 2018 at 11:37:07AM +0800, Peter Xu wrote:
> > On Mon, Nov 12, 2018 at 05:42:01PM +0800, Yu Zhang wrote:
> > > On Mon, Nov 12, 2018 at 04:36:34PM +0800, Peter Xu wrote:
> > > > On Fri, Nov 09
On Tue, Nov 13, 2018 at 11:37:07AM +0800, Peter Xu wrote:
> On Mon, Nov 12, 2018 at 05:42:01PM +0800, Yu Zhang wrote:
> > On Mon, Nov 12, 2018 at 04:36:34PM +0800, Peter Xu wrote:
> > > On Fri, Nov 09, 2018 at 07:49:46PM +0800, Yu Zhang wrote:
> > > > A 5-level pagin
On Mon, Nov 12, 2018 at 05:36:38PM +0800, Peter Xu wrote:
> On Mon, Nov 12, 2018 at 05:25:48PM +0800, Yu Zhang wrote:
> > On Mon, Nov 12, 2018 at 04:51:22PM +0800, Peter Xu wrote:
> > > On Fri, Nov 09, 2018 at 07:49:47PM +0800, Yu Zhang wrote:
> > > > This patch
On Mon, Nov 12, 2018 at 04:36:34PM +0800, Peter Xu wrote:
> On Fri, Nov 09, 2018 at 07:49:46PM +0800, Yu Zhang wrote:
> > A 5-level paging capable VM may choose to use 57-bit IOVA address width.
> > E.g. guest applications like DPDK prefer to use its VA as IOVA when
> > perf
On Mon, Nov 12, 2018 at 04:15:46PM +0800, Peter Xu wrote:
> On Fri, Nov 09, 2018 at 07:49:45PM +0800, Yu Zhang wrote:
> > Currently, vIOMMU is using the value of IOVA address width, instead of
> > the host address width(HAW) to calculate the number of reserved bits in
> > da
On Mon, Nov 12, 2018 at 04:51:22PM +0800, Peter Xu wrote:
> On Fri, Nov 09, 2018 at 07:49:47PM +0800, Yu Zhang wrote:
> > This patch updates vtd_lookup_iotlb() to search cached mappings only
> > for all page levels supported by address width of current vIOMMU. Also,
> > to co
initialized based on the maximum physical address set for the
guest CPU. Also, definitions such as VTD_HOST_AW_39/48BIT etc. are renamed
for clarity.
Signed-off-by: Yu Zhang
---
Cc: "Michael S. Tsirkin"
Cc: Igor Mammedov
Cc: Marcel Apfelbaum
Cc: Paolo Bonzini
Cc: Richard Henderson
Cc: Eduardo H
address
width. When creating a VM with 5-level paging feature, one can choose to
create a virtual VTD with 5-level paging capability, with configurations
like "-device intel-iommu,x-aw-bits=57".
Signed-off-by: Yu Zhang
---
Cc: "Michael S. Tsirkin"
Cc: Marcel Apfelbaum
Cc: Paolo
structure level.
Signed-off-by: Yu Zhang
---
Cc: "Michael S. Tsirkin"
Cc: Marcel Apfelbaum
Cc: Paolo Bonzini
Cc: Richard Henderson
Cc: Eduardo Habkost
Cc: Peter Xu
---
hw/i386/intel_iommu.c | 5 +++--
hw/i386/intel_iommu_internal.h | 7 ++-
2 files changed, 5 insert
Intel Virtualization Technology for Directed I/O).
This patch set extends the current logic to support a wider address width.
A 5-level paging capable IOMMU (for 2nd level translation) can be rendered
with the configuration "-device intel-iommu,x-aw-bits=57".
Yu Zhang (3):
intel-iommu: dif
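A full invocation along those lines might look as follows; the machine and CPU
options are my assumptions (intel-iommu wants a q35 machine, and 57-bit paging
needs an la57-capable guest CPU), not taken from the patch itself:

```shell
# Sketch: boot a guest with a 5-level-paging-capable vIOMMU.
qemu-system-x86_64 \
  -machine q35,kernel-irqchip=split \
  -cpu qemu64,+la57 \
  -device intel-iommu,x-aw-bits=57 \
  ...
```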
On Thu, Aug 23, 2018 at 02:34:31PM +0200, Igor Mammedov wrote:
> On Thu, 23 Aug 2018 17:01:33 +0800
> Yu Zhang wrote:
>
> > On 8/23/2018 2:01 AM, Eduardo Habkost wrote:
> > > On Wed, Aug 22, 2018 at 03:05:36PM +0200, Igor Mammedov wrote:
> > >> On Wed, 22 A
On 8/23/2018 2:01 AM, Eduardo Habkost wrote:
On Wed, Aug 22, 2018 at 03:05:36PM +0200, Igor Mammedov wrote:
On Wed, 22 Aug 2018 12:06:26 +0200
Laszlo Ersek wrote:
On 08/22/18 11:46, Igor Mammedov wrote:
Commit
10efd7e108 "pc: acpi: fix memory hotplug regression by reducing stub SRAT en