>
> > Perhaps we just need an ioctl where an X server can switch this.
>
> Switch what? Turn on or off transparent translation?
Turn on/off bypass for its device.
-Andi
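
(Purely for illustration, a per-device bypass switch of the kind being discussed might look roughly like the sketch below. The ioctl number, structure, and device node are invented for this sketch and are not part of the posted patches.)

/*
 * Hypothetical sketch of a per-device IOMMU bypass ioctl.
 * None of these names exist in the posted patches; they only
 * illustrate the kind of interface being discussed.
 */
#include <linux/types.h>
#include <linux/ioctl.h>

struct iommu_bypass_req {
	__u32 bus;     /* PCI bus of the device the caller owns */
	__u32 devfn;   /* PCI slot/function on that bus */
	__u32 enable;  /* 1 = bypass (identity map), 0 = translate */
};

#define IOMMU_SET_BYPASS _IOW('I', 0x01, struct iommu_bypass_req)

/*
 * User-space usage (e.g. from the X server), assuming some character
 * device such as a hypothetical /dev/iommu exposes the ioctl:
 *
 *	struct iommu_bypass_req req = { .bus = 1, .devfn = 0, .enable = 1 };
 *	ioctl(fd, IOMMU_SET_BYPASS, &req);
 */
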
On Tuesday, June 26, 2007 10:31:57 Andi Kleen wrote:
> > (and I think it mostly already doesn't even without that)
>
> It uses /sys/bus/pci/* which is not any better as seen from the IOMMU.
>
> Any interface will need to be explicit because user space needs to know
> which DMA addresses to put into the hardware. It's not enough to just
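
(A rough sketch of the kind of "explicit" interface described above, where the kernel must hand the bus address back so user space can program it into the card. Every name here is hypothetical, not an existing ABI.)

/* Hypothetical interface sketch -- not an existing kernel ABI. */
#include <linux/types.h>
#include <linux/ioctl.h>

struct iommu_user_map {
	__u64 user_vaddr;  /* user buffer to make visible to the device */
	__u64 length;      /* length of the buffer in bytes */
	__u64 dma_addr;    /* OUT: bus address to program into the card */
};

#define IOMMU_MAP_USER   _IOWR('I', 0x02, struct iommu_user_map)
#define IOMMU_UNMAP_USER _IOW('I', 0x03, struct iommu_user_map)

/*
 * The point being made: unlike poking /dev/mem, the caller cannot
 * guess the DMA address -- the kernel must return dma_addr explicitly,
 * because that is the value the device will use on the bus.
 */
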
Andi Kleen wrote:
On Tue, Jun 26, 2007 at 08:15:05AM -0700, Arjan van de Ven wrote:
> > Also the user interface for X server case needs more work.
>
> actually with the mode setting of X moving into the kernel... X won't
> use /dev/mem anymore at all

We'll see if that happens. It has been talked about
On Tue, Jun 26, 2007 at 08:48:04AM -0700, Keshavamurthy, Anil S wrote:
> Our initial benchmark results showed we had around 3% extra CPU
> utilization overhead when compared to native (i.e. without IOMMU).
> Again, our benchmark was on a small SMP machine and we used iperf and
> a 1G ethernet card.
On Tue, Jun 26, 2007 at 11:11:25AM -0400, Muli Ben-Yehuda wrote:
> On Tue, Jun 26, 2007 at 08:03:59AM -0700, Arjan van de Ven wrote:
> > Muli Ben-Yehuda wrote:
> > >How much? we have numbers (to be presented at OLS later this week)
> > >that show that on bare-metal an IOMMU can cost as much as 15%-
On Tue, Jun 26, 2007 at 11:09:40AM -0400, Muli Ben-Yehuda wrote:
> On Tue, Jun 26, 2007 at 05:56:49PM +0200, Andi Kleen wrote:
>
> > > > - The IOMMU can merge sg lists into a single virtual block. This could
> > > > potentially speed up SG IO when the device is slow walking SG
> > > > lists. [I l
On Tue, Jun 26, 2007 at 08:15:05AM -0700, Arjan van de Ven wrote:
> >
> >Also the user interface for X server case needs more work.
> >
>
> actually with the mode setting of X moving into the kernel... X won't
> use /dev/mem anymore at all
We'll see if that happens. It has been talked about fore
> Also the user interface for X server case needs more work.

actually with the mode setting of X moving into the kernel... X won't
use /dev/mem anymore at all
(and I think it mostly already doesn't even without that)
On Tue, Jun 26, 2007 at 08:03:59AM -0700, Arjan van de Ven wrote:
> Muli Ben-Yehuda wrote:
> >How much? we have numbers (to be presented at OLS later this week)
> >that show that on bare-metal an IOMMU can cost as much as 15%-30% more
> >CPU utilization for an IO intensive workload (netperf). It wi
On Tue, Jun 26, 2007 at 05:56:49PM +0200, Andi Kleen wrote:
> > > - The IOMMU can merge sg lists into a single virtual block. This could
> > > potentially speed up SG IO when the device is slow walking SG
> > > lists. [I long ago benchmarked 5% on some block benchmark with
> > > an old MPT Fusion
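
(As a sketch of how that merging shows up through the normal DMA API: the helper below is invented for illustration, but dma_map_sg() is the standard kernel call, and with an IOMMU it can return fewer mapped segments than it was given.)

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Illustration only: with an IOMMU, physically scattered pages can be
 * remapped into one contiguous bus-address range, so 'mapped' may be
 * much smaller than 'nents' -- less list walking for a slow device.
 */
static int example_map_sg(struct device *dev, struct scatterlist *sg, int nents)
{
	int mapped = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);

	if (mapped == 0)
		return -ENOMEM;

	/* Program only 'mapped' segments into the device, not 'nents'. */
	return mapped;
}
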
Muli Ben-Yehuda wrote:
> How much? we have numbers (to be presented at OLS later this week)
> that show that on bare-metal an IOMMU can cost as much as 15%-30% more
> CPU utilization for an IO intensive workload (netperf). It will be
> interesting to see comparable numbers for VT-d.

for VT-d it is a LO
Muli Ben-Yehuda <[EMAIL PROTECTED]> writes:
> On Tue, Jun 26, 2007 at 09:12:45AM +0200, Andi Kleen wrote:
>
> > There are some potential performance benefits too:
> > - When you have a device that cannot address the complete address range
> > an IOMMU can remap its memory instead of bounce buffer
On Tue, Jun 26, 2007 at 09:12:45AM +0200, Andi Kleen wrote:
> There are some potential performance benefits too:
> - When you have a device that cannot address the complete address range
> an IOMMU can remap its memory instead of bounce buffering. Remapping
> is likely cheaper than copying.
But
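
(To illustrate the distinction drawn above: the function name below is made up, but dma_map_single() is the standard DMA API call; the comments describe how swiotlb bounce buffering and IOMMU remapping differ underneath the same driver code.)

#include <linux/device.h>
#include <linux/dma-mapping.h>

/*
 * Sketch, not code from the patches.  For a buffer the device cannot
 * address, e.g. memory above 4GB with a 32-bit DMA mask:
 *
 *  - no IOMMU (swiotlb): data is copied into a bounce buffer below the
 *    limit and copied back on unmap -- one memcpy per transfer
 *  - with an IOMMU: the page is remapped to a reachable bus address,
 *    so only a page-table update and an IOTLB flush are needed, no copy
 */
static dma_addr_t example_map(struct device *dev, void *buf, size_t len)
{
	return dma_map_single(dev, buf, len, DMA_TO_DEVICE);
}
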
On Tuesday 26 June 2007 08:45:50 Andrew Morton wrote:
> On Tue, 19 Jun 2007 14:37:01 -0700 "Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
>
> > This patch supports the upcoming Intel IOMMU hardware
> > a.k.a. Intel(R) Virtualization Technology for Directed I/O
> > Architecture
>
> So
On Tue, 19 Jun 2007 14:37:01 -0700 "Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
> This patch supports the upcoming Intel IOMMU hardware
> a.k.a. Intel(R) Virtualization Technology for Directed I/O
> Architecture
So... what's all this code for?
I assume that the intent here is to
Hi All,
This patch supports the upcoming Intel IOMMU hardware,
a.k.a. Intel(R) Virtualization Technology for Directed I/O
Architecture, and the hardware spec for it can be found here:
http://www.intel.com/technology/virtualization/index.htm
This version of the patches incorpor
Sorry for the resend as my previous posting did not make
it to several people.
Hi,
We are pleased to announce the revised version of
the Intel IOMMU driver. This driver incorporates feedback
received from Andi Kleen, David Miller, and several others.
Most notable changes from prev
Hi,
We are pleased to announce the revised version of
the Intel IOMMU driver. This driver incorporates feedback
received from Andi Kleen, David Miller, and several others.
Most notable changes from previous postings (apart from
general code cleanup) are
1) Replaced linear linked l