On 06/12/2012 07:37 PM, Benjamin Herrenschmidt wrote:
Not really, no. We don't have proper DMA APIs to shoot from devices.
What the DMAContext patches provide is a generic dma_* API but if we are
going to get rid of DMAContext in favor of a (modified ?) MemoryRegion
I'd rather not expose that to
On Wed, 2012-06-13 at 15:57 -0500, Anthony Liguori wrote:
I think pci_* wrappers are the right thing to do in the short term (and
probably long term too).
Oh I agree absolutely. Same for vio, I'll do some wrappers. One
remaining question is where do the barriers go in that scheme...
I'll
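Where the barriers could go in that scheme can be sketched with a toy model (all names here are illustrative, not QEMU's actual wrappers): the bus-level wrapper issues the ordering fence once, before delegating to the low-level write, so individual device models never have to think about barriers themselves.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy guest RAM backing store standing in for the real address space. */
static uint8_t guest_ram[256];

/* Lowest-level store, analogous to a dma_memory_write() on the
 * device's DMA context / address space. */
static void toy_dma_memory_write(uint64_t addr, const void *buf, size_t len)
{
    memcpy(&guest_ram[addr], buf, len);
}

/* Bus-level wrapper: a natural single place to hide the ordering
 * rules.  Callers (device models) get barrier semantics for free. */
static void toy_pci_dma_write(uint64_t addr, const void *buf, size_t len)
{
    /* Order this DMA store against earlier CPU-visible stores. */
    atomic_thread_fence(memory_order_release);
    toy_dma_memory_write(addr, buf, len);
}
```

The point of the sketch is only the placement: one fence in the wrapper rather than one per device model.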
On Wed, Jun 13, 2012 at 10:37:41AM +1000, Benjamin Herrenschmidt wrote:
On Tue, 2012-06-12 at 12:46 +0300, Avi Kivity wrote:
I think that transformation function lives in the bus layer
MemoryRegion. It's a bit tricky though because you need some sort of
notion of who is asking. So you
On Thu, 2012-06-14 at 02:00 +0200, Edgar E. Iglesias wrote:
TBH, I don't understand any of the upstream access discussion nor
the specifics of DMA accesses for the memory/system bus accesses.
When a device, like a DMA unit accesses the memory/system bus it,
AFAIK, does it from a different
On Thu, Jun 14, 2012 at 11:34:10AM +1000, Benjamin Herrenschmidt wrote:
On Thu, 2012-06-14 at 02:00 +0200, Edgar E. Iglesias wrote:
TBH, I don't understand any of the upstream access discussion nor
the specifics of DMA accesses for the memory/system bus accesses.
When a device, like a
On Thu, 2012-06-14 at 04:03 +0200, Edgar E. Iglesias wrote:
Thanks for the clarification, Ben.
I don't know much about PCI but in the embedded world I've never seen
anything that resembles what you describe. Devices at the bottom of
the hierarchy (or at any location) that make accesses to the
On Thu, Jun 14, 2012 at 12:16:45PM +1000, Benjamin Herrenschmidt wrote:
On Thu, 2012-06-14 at 04:03 +0200, Edgar E. Iglesias wrote:
Thanks for the clarification, Ben.
I don't know much about PCI but in the embedded world I've never seen
anything that resembles what you describe. Devices
On Thu, 2012-06-14 at 04:31 +0200, Edgar E. Iglesias wrote:
An AXI device might issue a cycle on the AXI portion, that can be
decoded by either a sibling AXI device ... or go up. In most cases
No, it doesn't really go up.. This is where we disagree.
Well, not really indeed as the memory
On Thu, Jun 14, 2012 at 12:41:06PM +1000, Benjamin Herrenschmidt wrote:
On Thu, 2012-06-14 at 04:31 +0200, Edgar E. Iglesias wrote:
An AXI device might issue a cycle on the AXI portion, that can be
decoded by either a sibling AXI device ... or go up. In most cases
No, it doesn't really
On Thu, 2012-06-14 at 05:17 +0200, Edgar E. Iglesias wrote:
The CPU's MMU is a CPU local thing, it can be ignored in this context...
Anyway, I might very well be missing or misunderstanding something so
I'm not claiming I have the absolute truth here but it seems to me
like
snip thread
So I was looking at this accessor business. We already have them for
PCI. PAPR VIO already has its own as well.
That leaves us with various devices such as OHCI that can exist
on different bus types and use the lower-level DMAContext-based
variant...
Now I'm keen to keep it
On 06/12/2012 01:29 AM, Anthony Liguori wrote:
So it makes some amount of sense to use the same structure. For example,
if a device issues accesses, those could be caught by a sibling device
memory region... or go upstream.
Let's just look at downstream transformation for a minute...
We do
On Tue, 2012-06-12 at 12:46 +0300, Avi Kivity wrote:
I think that transformation function lives in the bus layer
MemoryRegion. It's a bit tricky though because you need some sort of
notion of who is asking. So you need:
dma_memory_write(MemoryRegion *parent, DeviceState *caller,
system_memory
  alias - pci
  alias - ram
pci
  bar1
  bar2
pcibm
  alias - pci (prio 1)
  alias - system_memory (prio 0)
cpu_physical_memory_rw() would be implemented as
memory_region_rw(system_memory, ...) while pci_dma_rw()
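The priority scheme in the view above can be sketched as a toy lookup (purely illustrative; `ToyRegion` and `resolve` are invented names, not QEMU's memory API): an address in the bus-master view resolves to the highest-priority overlapping region, so the pci alias at prio 1 shadows the system_memory alias at prio 0 wherever the two overlap.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Minimal model of one address-space view: a list of
 * (start, size, priority, name) entries.  Lookup returns the
 * highest-priority hit, mirroring how overlapping subregions
 * with priorities are resolved. */
typedef struct {
    uint64_t start, size;
    int prio;
    const char *name;
} ToyRegion;

static const char *resolve(const ToyRegion *view, size_t n, uint64_t addr)
{
    const ToyRegion *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (addr >= view[i].start && addr - view[i].start < view[i].size) {
            if (!best || view[i].prio > best->prio) {
                best = &view[i];
            }
        }
    }
    return best ? best->name : "unassigned";
}

/* The pcibm view from the thread: a pci alias (window chosen
 * arbitrarily here) shadowing a full-range system_memory alias. */
static const ToyRegion pcibm_view[] = {
    { 0x00000000, 0x100000000ULL, 0, "alias:system_memory" },
    { 0xe0000000, 0x10000000,     1, "alias:pci" },
};
```

Under this model, `cpu_physical_memory_rw()` dispatches through the system_memory view while a PCI bus-master access dispatches through `pcibm_view`, which is the distinction the message above is drawing.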
On 06/11/2012 05:00 PM, Benjamin Herrenschmidt wrote:
system_memory
  alias - pci
  alias - ram
pci
  bar1
  bar2
pcibm
  alias - pci (prio 1)
  alias - system_memory (prio 0)
cpu_physical_memory_rw() would be implemented as
On Mon, 2012-06-11 at 17:29 -0500, Anthony Liguori wrote:
I don't know that we really have bit masking done right in the memory API.
That's not a big deal:
When we add a subregion, it always removes the offset from the address
when it dispatches. This more often than not works out well
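That offset-stripping behaviour can be shown with a tiny sketch (hypothetical names, not QEMU code): the dispatcher subtracts the subregion's base before calling the handler, so the device callback only ever sees 0-based offsets into its own region.

```c
#include <stdint.h>

/* Recorded by the handler so the behaviour can be observed. */
static uint64_t last_offset, last_val;

/* Device-side callback: receives a region-relative offset. */
static void bar_write(uint64_t offset, uint64_t val)
{
    last_offset = offset;
    last_val = val;
}

/* Dispatcher: the subregion's base address is removed here,
 * before the handler is invoked. */
static void dispatch(uint64_t region_base, uint64_t addr, uint64_t val)
{
    bar_write(addr - region_base, val);
}
```

A write to absolute address `region_base + 0x10` reaches the handler as offset `0x10`, which is exactly what makes the scheme awkward for the identity-mapped case being discussed.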
Am 12.06.2012 00:00, schrieb Benjamin Herrenschmidt:
system_memory
  alias - pci
  alias - ram
pci
  bar1
  bar2
pcibm
  alias - pci (prio 1)
  alias - system_memory (prio 0)
cpu_physical_memory_rw() would be implemented as
On 06/11/2012 06:46 PM, Benjamin Herrenschmidt wrote:
On Mon, 2012-06-11 at 17:29 -0500, Anthony Liguori wrote:
When we add a subregion, it always removes the offset from the address
when it dispatches. This more often than not works out well but for
what you're describing above, it sounds
On Mon, 2012-06-11 at 20:33 -0500, Anthony Liguori wrote:
On 06/11/2012 06:46 PM, Benjamin Herrenschmidt wrote:
On Mon, 2012-06-11 at 17:29 -0500, Anthony Liguori wrote:
When we add a subregion, it always removes the offset from the address
when it dispatches. This more often than not
On Tue, 2012-06-12 at 03:04 +0200, Andreas Färber wrote:
That's not quite the way we're modelling it yet as shown by Avi above,
there is no CPU address space, only a system address space.
That can be considered as roughly equivalent for now, though it might
become problematic when modelling