On Wed, Feb 13, 2008 at 10:23:34AM -0800, Randy Dunlap wrote:
>> Index: linux-2.6.24-mm1/Documentation/kernel-parameters.txt
>> ===================================================================
>> --- linux-2.6.24-mm1.orig/Documentation/kernel-parameters.txt 2008-02-12 07:12:06.000
On Tue, Feb 12, 2008 at 07:54:48AM -0800, David Miller wrote:
> > Something could be done:
> > we could enable drivers to have DMA-pools they manage that get mapped
> > and are re-used.
> >
> > I would rather the DMA-pools be tied to PID's that way any bad behavior
> > would be limited to the addr
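The driver-managed, re-used DMA pools suggested here have a close analogue in the kernel's existing dma_pool API, where mappings are set up once and buffers are recycled instead of being mapped and unmapped per packet. A minimal sketch, assuming a hypothetical driver (the pool name, buffer size and alignment are made up for illustration; the dma_pool_* calls are the stock API, and the per-PID containment idea is not shown):

	#include <linux/dmapool.h>
	#include <linux/pci.h>

	#define MY_BUF_SIZE 2048		/* illustrative buffer size */

	static struct dma_pool *my_rx_pool;	/* hypothetical driver pool */

	static int my_driver_setup_pool(struct pci_dev *pdev)
	{
		/* Create a pool of pre-mapped buffers; the mapping is set up once. */
		my_rx_pool = dma_pool_create("my_rx_pool", &pdev->dev,
					     MY_BUF_SIZE, 64 /* align */,
					     0 /* boundary */);
		return my_rx_pool ? 0 : -ENOMEM;
	}

	static void *my_driver_get_buf(dma_addr_t *dma)
	{
		/* Re-use an already-mapped buffer: no new PTEs, no IOTLB flush. */
		return dma_pool_alloc(my_rx_pool, GFP_ATOMIC, dma);
	}

	static void my_driver_put_buf(void *vaddr, dma_addr_t dma)
	{
		/* Return the buffer to the pool; the DMA mapping stays live. */
		dma_pool_free(my_rx_pool, vaddr, dma);
	}

The point of the reuse is that the IOMMU PTEs stay stable, so the expensive per-unmap IOTLB flush drops out of the hot path.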
Index: linux-2.6.24-mm1/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6.24-mm1.orig/Documentation/kernel-parameters.txt 2008-02-12 07:12:06.0 -0800
+++ linux-2.6.24-mm1/Documentation/kernel-parameters.txt 2008-
On Tue, Feb 12, 2008 at 12:21:08PM -0800, Randy Dunlap wrote:
>> Index: linux-2.6.24-mm1/Documentation/kernel-parameters.txt
>> ===================================================================
>> --- linux-2.6.24-mm1.orig/Documentation/kernel-parameters.txt 2008-02-12 07:12:06.000
From: mark gross <[EMAIL PROTECTED]>
Date: Tue, 12 Feb 2008 07:54:48 -0800
> Something could be done:
> we could enable drivers to have DMA-pools they manage that get mapped
> and are re-used.
>
> I would rather the DMA-pools be tied to PID's that way any bad behavior
> would be limited to the ad
Index: linux-2.6.24-mm1/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6.24-mm1.orig/Documentation/kernel-parameters.txt 2008-02-12 07:12:06.0 -0800
+++ linux-2.6.24-mm1/Documentation/kernel-parameters.txt 2008-
On Tue, Feb 12, 2008 at 08:34:39AM -0800, Randy Dunlap wrote:
> mark gross wrote:
>> Index: linux-2.6.24-mm1/drivers/pci/intel-iommu.c
>> ===================================================================
>> --- linux-2.6.24-mm1.orig/drivers/pci/intel-iommu.c 2008-02-12 07:12:06.0 -0800
mark gross wrote:
Index: linux-2.6.24-mm1/drivers/pci/intel-iommu.c
===================================================================
--- linux-2.6.24-mm1.orig/drivers/pci/intel-iommu.c 2008-02-12 07:12:06.0 -0800
+++ linux-2.6.24-mm1/drivers/pci/intel-iommu.c 2008-02-12 07:47:0
On Mon, Feb 11, 2008 at 03:27:16PM -0800, Randy Dunlap wrote:
> On Mon, 11 Feb 2008 14:41:05 -0800 mark gross wrote:
>
> > The hole is in the following scenarios:
> > do many map_single operations, do some unmap_singles, reuse a recently
> > unmapped page, > memory>
> >
> > Or: you have rouge hard
On Tue, Feb 12, 2008 at 01:00:06AM -0800, David Miller wrote:
> From: Muli Ben-Yehuda <[EMAIL PROTECTED]>
> Date: Tue, 12 Feb 2008 10:52:56 +0200
>
> > The streaming DMA-API was designed to conserve IOMMU mappings for
> > machines where IOMMU mappings are a scarce resource, and is a poor
> > fit f
On Tue, Feb 12, 2008 at 10:52:56AM +0200, Muli Ben-Yehuda wrote:
> On Mon, Feb 11, 2008 at 02:41:05PM -0800, mark gross wrote:
>
> > The intel-iommu hardware requires a polling operation to flush IOTLB
> > PTE's after an unmap operation. Through some TSC instrumentation of
> > a netperf UDP strea
From: Muli Ben-Yehuda <[EMAIL PROTECTED]>
Date: Tue, 12 Feb 2008 10:52:56 +0200
> The streaming DMA-API was designed to conserve IOMMU mappings for
> machines where IOMMU mappings are a scarce resource, and is a poor
> fit for a modern IOMMU such as VT-d with a 64-bit IO address space
> (or even a
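For reference, the streaming usage pattern being contrasted here looks roughly like the sketch below; my_tx_one, the device pointer and the buffer are placeholders, while dma_map_single()/dma_unmap_single() are the standard streaming DMA-API calls. Every transfer sets up and tears down an IOMMU mapping, which is exactly where the per-unmap IOTLB flush cost shows up:

	#include <linux/dma-mapping.h>

	static void my_tx_one(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t dma;

		/* Map: allocate an IO virtual address and write IOMMU PTEs. */
		dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

		/* ... hand "dma" to the device and wait for the transfer ... */

		/* Unmap: the PTEs are torn down and the stale IOTLB entry
		 * must be flushed -- the operation measured in this thread. */
		dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
	}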
On Mon, Feb 11, 2008 at 02:41:05PM -0800, mark gross wrote:
> The intel-iommu hardware requires a polling operation to flush IOTLB
> PTE's after an unmap operation. Through some TSC instrumentation of
> a netperf UDP stream with small packets test case it was seen that
> the flush operations wher
On Mon, 11 Feb 2008 14:41:05 -0800 mark gross wrote:
> The hole is in the following scenarios:
> do many map_single operations, do some unmap_singles, reuse a recently
> unmapped page, memory>
>
> Or: you have rouge hardware using DMA's to look at pages: do many
or rogue hardware?
> map_single
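An illustrative sketch of the window being described, under the assumption that the IOTLB flush after unmap is deferred or batched (the function is hypothetical; the dma_* and page allocator calls are the normal kernel API):

	#include <linux/dma-mapping.h>
	#include <linux/gfp.h>

	static void unmap_then_reuse_window(struct device *dev)
	{
		void *page = (void *)__get_free_page(GFP_KERNEL);
		dma_addr_t dma;

		dma = dma_map_single(dev, page, PAGE_SIZE, DMA_FROM_DEVICE);
		/* ... device DMAs into the page ... */

		dma_unmap_single(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE);
		/* PTE cleared, but with lazy flushing the IOTLB entry may
		 * still translate the old IO address for a while. */

		free_page((unsigned long)page);
		/* The page can now be reallocated for anything; buggy or rogue
		 * hardware that keeps using the old IO address would read or
		 * write the new owner's data until the IOTLB is flushed. */
	}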
The intel-iommu hardware requires a polling operation to flush IOTLB
PTEs after an unmap operation. Through some TSC instrumentation of a
netperf UDP small-packet stream test case it was seen that the
flush operations were sucking up to 16% of the CPU time doing
iommu_flush_iotlb's.
The fo
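A rough sketch of the kind of TSC instrumentation mentioned, with placeholder counters and a placeholder for the flush call being timed; get_cycles() is the stock cycle-counter helper (RDTSC on x86):

	#include <linux/timex.h>

	static unsigned long long flush_cycles;	/* placeholder accumulator */
	static unsigned long      flush_count;

	static void timed_iotlb_flush(void)
	{
		cycles_t start = get_cycles();

		/* ... the iommu_flush_iotlb call being measured goes here ... */

		flush_cycles += get_cycles() - start;
		flush_count++;
		/* Comparing flush_cycles against total cycles for the run gives
		 * the share of CPU time spent flushing (~16% in the netperf
		 * small-packet UDP case quoted above). */
	}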