On 09/01/2011 12:00 AM, David Gibson wrote:
This patch adds functions to pci.[ch] to perform PCI DMA operations. At
present, these are just stubs which perform directly cpu physical memory
accesses.
Using these stubs, however, distinguishes PCI device DMA transactions from
other accesses to physical memory [...]
On Fri, Sep 02, 2011 at 02:38:25PM +1000, David Gibson wrote:
> > I'd prefer the stubs to be inline. Not just as an optimization:
> > it also makes it easier to grok what goes on in the common
> > no-iommu case.
>
> To elaborate on my earlier mail. The problem with making them inlines
> > is that [...]
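
For context, a minimal sketch of the inline approach being discussed, assuming
QEMU's hw/pci.h context (PCIDevice, dma_addr_t) and the existing
cpu_physical_memory_rw() helper; the exact signatures are illustrative, not the
patch's actual code:

    /* Sketch only: the common no-IOMMU case, where a PCI bus address is
     * simply a CPU physical address.  pci_dma_read()/pci_dma_write() are
     * thin conveniences over the read/write primitive. */
    static inline void pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
                                  void *buf, dma_addr_t len, int is_write)
    {
        cpu_physical_memory_rw(addr, buf, len, is_write);
    }

    static inline void pci_dma_read(PCIDevice *dev, dma_addr_t addr,
                                    void *buf, dma_addr_t len)
    {
        pci_dma_rw(dev, addr, buf, len, 0);
    }

    static inline void pci_dma_write(PCIDevice *dev, dma_addr_t addr,
                                     const void *buf, dma_addr_t len)
    {
        pci_dma_rw(dev, addr, (void *)buf, len, 1);
    }

Making these inline keeps the no-IOMMU path free of an extra call, at the cost
of fixing the implementation in the header.
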
On Fri, Sep 2, 2011 at 8:35 AM, Avi Kivity wrote:
> On 09/01/2011 07:32 PM, Anthony Liguori wrote:
>>>
>>> True. But I still think it's the right thing.
>>>
>>> We can't really pass a MemoryRegion as the source address, since there
>>> is no per-device MemoryRegion.
>>
>>
>> Couldn't the PCI bus expose 255 MemoryRegions though? [...]
On Fri, Sep 02, 2011 at 11:37:25AM +0300, Avi Kivity wrote:
> On 09/01/2011 07:05 PM, Anthony Liguori wrote:
> >
> > The challenge is what you do about something like ne2k where the core
> > chipset can either be a PCI device or an ISA device. You would have
> > to implement a wrapper around pci_dma_rw() in order to turn it into
> > cpu_physical_memory_rw when doing ISA. [...]
On 09/01/2011 09:25 PM, Anthony Liguori wrote:
> I think this is the wrong approach given the introduction of the memory API.
>
> I think we should have a generic memory access function that takes a
> MemoryRegion as its first argument.
>
> The PCI bus should then expose one memory region for each device. [...]
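
A sketch of the kind of interface that proposal implies, with hypothetical
names throughout; neither memory_region_rw() nor pci_device_region() is being
quoted from QEMU, they only illustrate "access through the MemoryRegion the bus
exposes for the device":

    /* Hypothetical generic accessor taking a MemoryRegion first: */
    int memory_region_rw(MemoryRegion *mr, target_phys_addr_t addr,
                         uint8_t *buf, int len, bool is_write);

    /* Hypothetical: the region the PCI bus exposes for one device. */
    MemoryRegion *pci_device_region(PCIDevice *dev);

    /* PCI DMA would then be expressed in terms of the device's region: */
    static inline void pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
                                  void *buf, dma_addr_t len, bool is_write)
    {
        memory_region_rw(pci_device_region(dev), addr, buf, len, is_write);
    }
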
On 09/01/2011 07:05 PM, Anthony Liguori wrote:
The challenge is what you do about something like ne2k where the core
chipset can either be a PCI device or an ISA device. You would have
to implement a wrapper around pci_dma_rw() in order to turn it into
cpu_physical_memory_rw when doing ISA.
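
One way such a wrapper could be structured, sketched with made-up names
(QEMU's real ne2000 state layout differs); the shared core calls an
indirection that the PCI and ISA front ends fill in differently:

    typedef struct NE2000State NE2000State;

    struct NE2000State {
        /* Hypothetical indirection so the shared core never knows which
         * bus it sits on. */
        void (*dma_write)(NE2000State *s, dma_addr_t addr,
                          const void *buf, int len);
        PCIDevice *pci_dev;        /* set by the PCI front end only */
        /* ... shared register state ... */
    };

    /* PCI front end: DMA is attributed to the PCI device. */
    static void ne2000_pci_dma_write(NE2000State *s, dma_addr_t addr,
                                     const void *buf, int len)
    {
        pci_dma_write(s->pci_dev, addr, buf, len);
    }

    /* ISA front end: plain physical memory access, as before. */
    static void ne2000_isa_dma_write(NE2000State *s, dma_addr_t addr,
                                     const void *buf, int len)
    {
        cpu_physical_memory_write(addr, buf, len);
    }

The read direction would mirror this, and the core would call
s->dma_write(s, ...) wherever it previously called cpu_physical_memory_write().
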
On 09/01/2011 07:32 PM, Anthony Liguori wrote:
True. But I still think it's the right thing.
We can't really pass a MemoryRegion as the source address, since there
is no per-device MemoryRegion.
Couldn't the PCI bus expose 255 MemoryRegions though?
What would those mean? A MemoryRegion is [...]
On Thu, Sep 01, 2011 at 06:35:51PM +0300, Michael S. Tsirkin wrote:
> On Thu, Sep 01, 2011 at 03:00:54PM +1000, David Gibson wrote:
[snip]
> > +#define DECLARE_LDST_DMA(_lname, _sname, _bits) \
> > +uint##_bits##_t ld##_lname##_pci_dma(PCIDevice *dev, dma_addr_t addr); \
> > +void st##_sname##_pci_dma(PCIDevice *dev, dma_addr_t addr, uint##_bits##_t val);
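
Assuming the truncated st## line mirrors the ld declaration above (an
assumption, since the quoted hunk is cut off), the 32-bit instantiation used
later in the series would expand to:

    /* DECLARE_LDST_DMA(l, l, 32) would declare, under that assumption: */
    uint32_t ldl_pci_dma(PCIDevice *dev, dma_addr_t addr);
    void stl_pci_dma(PCIDevice *dev, dma_addr_t addr, uint32_t val);
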
On Thu, Sep 01, 2011 at 07:03:34PM +0300, Avi Kivity wrote:
> On 09/01/2011 06:55 PM, Anthony Liguori wrote:
> >On 09/01/2011 12:00 AM, David Gibson wrote:
> >>This patch adds functions to pci.[ch] to perform PCI DMA operations. At
> >>present, these are just stubs which perform directly cpu physical memory accesses. [...]
On Thu, Sep 01, 2011 at 06:35:51PM +0300, Michael S. Tsirkin wrote:
> On Thu, Sep 01, 2011 at 03:00:54PM +1000, David Gibson wrote:
[snip]
> > +DECLARE_LDST_DMA(ub, b, 8);
> > +DECLARE_LDST_DMA(uw, w, 16);
> > +DECLARE_LDST_DMA(l, l, 32);
> > +DECLARE_LDST_DMA(q, q, 64);
> > +
> > +#undef DECLARE_LDST_DMA
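
For illustration, a hypothetical device fragment using the generated 32-bit
accessors to walk a descriptor in guest memory; the descriptor layout and the
function are made up, and QEMU's hw/pci.h is assumed for PCIDevice and
dma_addr_t:

    #include <stddef.h>   /* offsetof */
    #include <stdint.h>

    typedef struct {
        uint32_t buf_addr;
        uint32_t len;
        uint32_t status;
    } ExampleDesc;          /* made-up descriptor layout */

    static void example_complete_desc(PCIDevice *dev, dma_addr_t desc_addr)
    {
        uint32_t len = ldl_pci_dma(dev, desc_addr + offsetof(ExampleDesc, len));

        /* ... transfer 'len' bytes with the bulk DMA helpers ... */
        (void)len;

        /* Write back a completion status through the same DMA path. */
        stl_pci_dma(dev, desc_addr + offsetof(ExampleDesc, status), 1);
    }
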
On Thu, Sep 01, 2011 at 11:05:48AM -0500, Anthony Liguori wrote:
> On 09/01/2011 11:03 AM, Avi Kivity wrote:
> >On 09/01/2011 06:55 PM, Anthony Liguori wrote:
> >>On 09/01/2011 12:00 AM, David Gibson wrote:
[snip]
> The challenge is what you do about something like ne2k where the
> core chipset can either be a PCI device or an ISA device. [...]
On 09/01/2011 11:11 AM, Avi Kivity wrote:
On 09/01/2011 07:05 PM, Anthony Liguori wrote:
I think the patchset is fine. It routes all access through pci_dma_rw(),
which accepts a PCIDevice. We can later define pci_dma_rw() in terms of
the memory API and get the benefit of the memory hierarchy.
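
A sketch of how the same entry point could later be redirected without touching
callers, assuming a hypothetical per-bus translation hook; the
pci_bus_iommu_translate() and pci_get_bus() accessors are illustrative, and
page-boundary splitting is ignored for brevity:

    /* Hypothetical per-bus translation hook; in a real version this
     * would live in PCIBus. */
    typedef dma_addr_t PCIIOMMUTranslateFunc(PCIBus *bus, PCIDevice *dev,
                                             dma_addr_t addr);

    PCIIOMMUTranslateFunc *pci_bus_iommu_translate(PCIBus *bus); /* hypothetical */
    PCIBus *pci_get_bus(PCIDevice *dev);                         /* hypothetical */

    static inline void pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
                                  void *buf, dma_addr_t len, int is_write)
    {
        PCIBus *bus = pci_get_bus(dev);
        PCIIOMMUTranslateFunc *xlate = pci_bus_iommu_translate(bus);

        if (xlate) {
            addr = xlate(bus, dev, addr);    /* per-device translation */
        }
        /* (Ignores page-boundary splitting for brevity.) */
        cpu_physical_memory_rw(addr, buf, len, is_write);
    }

Devices keep calling pci_dma_rw(dev, ...); only the implementation behind it
changes.
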
On 09/01/2011 07:05 PM, Anthony Liguori wrote:
I think the patchset is fine. It routes all access through pci_dma_rw(),
which accepts a PCIDevice. We can later define pci_dma_rw() in terms of
the memory API and get the benefit of the memory hierarchy.
The challenge is what you do about something like ne2k where the core chipset can either be a PCI device or an ISA device. [...]
On 09/01/2011 11:03 AM, Avi Kivity wrote:
On 09/01/2011 06:55 PM, Anthony Liguori wrote:
On 09/01/2011 12:00 AM, David Gibson wrote:
This patch adds functions to pci.[ch] to perform PCI DMA operations. At
present, these are just stubs which perform directly cpu physical memory
accesses.
Using these stubs, however, distinguishes PCI device DMA transactions from other accesses to physical memory. [...]
On 09/01/2011 06:55 PM, Anthony Liguori wrote:
On 09/01/2011 12:00 AM, David Gibson wrote:
This patch adds functions to pci.[ch] to perform PCI DMA operations. At
present, these are just stubs which perform directly cpu physical memory
accesses.
Using these stubs, however, distinguishes PCI device DMA transactions from other accesses to physical memory. [...]
On Thu, Sep 01, 2011 at 03:00:54PM +1000, David Gibson wrote:
> This patch adds functions to pci.[ch] to perform PCI DMA operations. At
> present, these are just stubs which perform directly cpu physical memory
> accesses.
>
> Using these stubs, however, distinguishes PCI device DMA transactions from
> other accesses to physical memory. [...]
This patch adds functions to pci.[ch] to perform PCI DMA operations. At
present, these are just stubs which directly perform CPU physical memory
accesses.
Using these stubs, however, distinguishes PCI device DMA transactions from
other accesses to physical memory, which will allow PCI IOMMU support to be added later.
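
To make the distinction concrete, a hypothetical device fragment showing the
kind of change the stubs enable (the device, its fields, and pci_dma_read()
itself are illustrative; only pci_dma_rw() is named in the thread): the access
now carries the issuing PCIDevice instead of going straight to physical memory,
which is what later allows per-device IOMMU translation.

    typedef struct {
        PCIDevice dev;
        dma_addr_t rx_ring;     /* made-up guest-physical ring base */
    } ExampleNICState;

    static void example_fetch_desc(ExampleNICState *s, int idx,
                                   void *desc, int desc_size)
    {
        dma_addr_t addr = s->rx_ring + (dma_addr_t)idx * desc_size;

        /* Previously: cpu_physical_memory_read(addr, desc, desc_size); */
        pci_dma_read(&s->dev, addr, desc, desc_size);
    }
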