I made a first implementation of this concept. CPU-to-bus accesses use
southbound functions, device-to-CPU accesses northbound ones.
The system is not symmetric; the device address range allocation could
well be separate.
What do you think?
Index: qemu/cpu-all.h
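The patch body is truncated in the archive. As a rough illustration of the
southbound/northbound split described above (every name below is
hypothetical, not taken from the posted patch), the interface might look
something like:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch, not the actual patch: one hook per direction. */
typedef struct DMABus DMABus;

struct DMABus {
    /* Southbound: CPU-initiated access travelling towards a device. */
    void (*southbound_rw)(DMABus *bus, uint64_t addr,
                          uint8_t *buf, size_t len, int is_write);
    /* Northbound: device-initiated access travelling towards main
       memory, e.g. DMA through an IOMMU. */
    void (*northbound_rw)(DMABus *bus, uint64_t addr,
                          uint8_t *buf, size_t len, int is_write);
    void *opaque;   /* bus-specific state */
};

/* Because the two directions use separate hooks, the address range
   allocation for each side can be managed independently, which is the
   asymmetry mentioned above. */
static inline void dma_memory_write(DMABus *bus, uint64_t addr,
                                    uint8_t *buf, size_t len)
{
    bus->northbound_rw(bus, addr, buf, len, 1);
}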
> On 8/30/07, Paul Brook [EMAIL PROTECTED] wrote:
> > If this is the case, it means we don't need anything complicated. Devices
> > map themselves straight into the system address space at the appropriate
> > slot address (no plug-n-play to worry about), and device DMA goes via the
> > IOMMU.
> > Further
> From DMA2.txt, NCR89C100.txt, NCR89C105.txt and turbosparc.pdf I
> gather the following:
> - CPU and IOMMU always perform slave accesses
> - Slave accesses use the 28-bit address bus to select the device
I thought device selection was separate from the 28-bit SBus slave address
space, i.e. each
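To make the two models concrete, here is a toy decode, assuming the slot
select is derived from the address bits above a 28-bit per-slot offset;
the widths and layout are illustrative, not lifted from the datasheets
named above:

#include <stdint.h>

/* Toy decode: system address -> (slot select, 28-bit slave offset).
   Whether slot selection really works this way is exactly the open
   question in this exchange. */
#define SBUS_SLAVE_BITS 28
#define SBUS_SLAVE_MASK ((1u << SBUS_SLAVE_BITS) - 1)  /* 256MB per slot */

static void sbus_decode(uint32_t paddr, unsigned *slot, uint32_t *offset)
{
    *slot   = paddr >> SBUS_SLAVE_BITS;  /* which card is selected */
    *offset = paddr & SBUS_SLAVE_MASK;   /* what the card itself sees */
}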
On 9/8/07, Paul Brook [EMAIL PROTECTED] wrote:
> > From DMA2.txt, NCR89C100.txt, NCR89C105.txt and turbosparc.pdf I
> > gather the following:
> > - CPU and IOMMU always perform slave accesses
> > - Slave accesses use the 28-bit address bus to select the device
> I thought device selection was separate
> IIUC devices never register addresses on the master bus. The only thing
> that responds on that bus is the IOMMU.
Generally yes, but these intelligent masters and their targets would
register on both buses. The only case I can think of is a
video grabber; its frame memory could be
On 8/28/07, Paul Brook [EMAIL PROTECTED] wrote:
> On second thought, there is a huge difference between a write access
> originating from the CPU destined for the device and the device writing to
> main memory. The CPU address could be 0xf000 1000, which may translate
> to a bus address of 0x1000, as
This is a bit mysterious for me too. SBus address space is 28 bits
(256MB). Usually each slot maps to a different area, so the CPU sees
one slot for example at 0x3000 and another at 0x4000.
The IOMMU can map at most 2G of memory, usually a 32 or 64MB region. For the
devices, this device
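A toy model of that IOMMU window, assuming a flat page table; only the 2G
ceiling and the 32/64MB region size come from the message above, the field
layout and names are invented:

#include <stdint.h>

/* Invented layout: a DVMA window translated page by page to physical
   memory. The real chip's page-table format will differ. */
#define IOMMU_PAGE_SHIFT 12
#define IOMMU_WINDOW     (64 * 1024 * 1024)   /* e.g. a 64MB region */

typedef struct {
    uint32_t *page_table;   /* one physical page number per 4K page */
    uint32_t  dvma_base;    /* start of the DVMA window */
} IOMMUState;

/* Translate a device (DVMA) address; returns -1 outside the window. */
static int64_t iommu_translate(IOMMUState *s, uint32_t dvma)
{
    if (dvma - s->dvma_base >= IOMMU_WINDOW)
        return -1;
    uint32_t idx = (dvma - s->dvma_base) >> IOMMU_PAGE_SHIFT;
    return ((int64_t)s->page_table[idx] << IOMMU_PAGE_SHIFT)
           | (dvma & ((1u << IOMMU_PAGE_SHIFT) - 1));
}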
> If this is the case, it means we don't need anything complicated. Devices
> map themselves straight into the system address space at the appropriate
> slot address (no plug-n-play to worry about), and device DMA goes via the
> IOMMU.
Further searching by google suggests I may be wrong.
The
On 8/26/07, Blue Swirl [EMAIL PROTECTED] wrote:
> On 8/26/07, Fabrice Bellard [EMAIL PROTECTED] wrote:
> > Paul Brook wrote:
> > > > pci_gdma.diff: Convert PCI devices and targets
> > > > Any comments? The patches are a bit intrusive and I can't test the
> > > > targets except that they compile.
> > > Shouldn't the
On second thought, there is a huge difference between a write access
originating from the CPU destined for the device and the device writing to
main memory. The CPU address could be 0xf000 1000, which may translate
to a bus address of 0x1000, as an example. The device could write to
main memory
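The asymmetry in numbers, using the figures from the paragraph above (the
window base is an assumption chosen to reproduce the 0xf000 1000 -> 0x1000
example; nothing else is from the thread):

#include <stdint.h>

/* CPU -> device: a fixed window subtraction. A device -> memory write
   would instead be translated by the IOMMU; nothing forces the two
   mappings to agree. */
#define DEV_WINDOW_BASE 0xf0000000u

static uint32_t cpu_to_bus(uint32_t cpu_addr)
{
    return cpu_addr - DEV_WINDOW_BASE;   /* 0xf0001000 -> 0x00001000 */
}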
Paul Brook wrote:
> > pci_gdma.diff: Convert PCI devices and targets
> > Any comments? The patches are a bit intrusive and I can't test the
> > targets except that they compile.
> Shouldn't the PCI DMA object be a property of the PCI bus?
> i.e. we don't want/need to pass it round as a separate parameter. It
On 8/26/07, Fabrice Bellard [EMAIL PROTECTED] wrote:
> Paul Brook wrote:
> > > pci_gdma.diff: Convert PCI devices and targets
> > > Any comments? The patches are a bit intrusive and I can't test the
> > > targets except that they compile.
> > Shouldn't the PCI DMA object be a property of the PCI bus?
> > i.e. we
On Friday 24 August 2007, Blue Swirl wrote:
> I have now converted the ISA DMA devices (SB16, FDC), most PCI devices
> and targets.
> gdma.diff: Generic DMA
> pc_ppc_dma_to_gdma.diff: Convert x86 and PPC to GDMA
> pc_sb16_to_gdma.diff: Convert SB16 to GDMA
> pc_fdc_to_gdma.diff: FDC
Paul Brook wrote:
> On Friday 24 August 2007, Blue Swirl wrote:
> > I have now converted the ISA DMA devices (SB16, FDC), most PCI devices
> > and targets.
> > gdma.diff: Generic DMA
> > pc_ppc_dma_to_gdma.diff: Convert x86 and PPC to GDMA
> > pc_sb16_to_gdma.diff: Convert SB16 to GDMA
> > pc_fdc_to_gdma.diff: FDC
> > pci_gdma.diff: Convert PCI devices and targets
> > Any comments? The patches are a bit intrusive and I can't test the
> > targets except that they compile.
> Shouldn't the PCI DMA object be a property of the PCI bus?
> i.e. we don't want/need to pass it round as a separate parameter. It can
> be
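A sketch of that suggestion, with illustrative names (GDMAOps is not the
identifier used in the patches): the host bridge fills in the DMA hook
once when it creates the bus, and devices reach it through their bus
pointer rather than taking it as an extra parameter.

#include <stddef.h>
#include <stdint.h>

typedef struct GDMAOps {
    void (*rw)(void *opaque, uint64_t addr, uint8_t *buf,
               size_t len, int is_write);
    void *opaque;
} GDMAOps;

typedef struct PCIBus {
    GDMAOps dma;   /* set once by the host bridge */
    /* ... other bus state ... */
} PCIBus;

typedef struct PCIDevice {
    PCIBus *bus;
    /* ... device state ... */
} PCIDevice;

/* Devices never see the DMA object directly; it comes via the bus. */
static void pci_dma_write(PCIDevice *d, uint64_t addr,
                          uint8_t *buf, size_t len)
{
    d->bus->dma.rw(d->bus->dma.opaque, addr, buf, len, 1);
}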
On 8/16/07, malc [EMAIL PROTECTED] wrote:
> A very long time ago I changed the ISA DMA API to address some of the
> critique that Fabrice expressed; I can't remember offhand if that
> included removal of explicit position passing or not (the patch is on
> some off-line HDD, so it's not easy to check
On 8/14/07, Blue Swirl [EMAIL PROTECTED] wrote:
> Would the framework need any changes to support other targets? Comments
> welcome.
Replying to myself: yes, changes may be needed. Some of the DMA
controllers move the data outside the CPU loop, but that does not make
much difference.
Background: I
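One way a controller can move data outside the CPU loop is to copy a
bounded chunk per callback and re-arm itself. A sketch, using a stand-in
scheduler (schedule_soon is hypothetical; in the emulator it would be a
timer or similar deferred-callback mechanism):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for whatever deferred-callback mechanism the emulator offers. */
extern void schedule_soon(void (*fn)(void *), void *opaque);

typedef struct {
    uint8_t *src, *dst;
    size_t   left;
} DMAJob;

static void dma_tick(void *opaque)
{
    DMAJob *job = opaque;
    size_t chunk = job->left < 4096 ? job->left : 4096;

    memcpy(job->dst, job->src, chunk);   /* one bounded chunk per tick */
    job->src  += chunk;
    job->dst  += chunk;
    job->left -= chunk;

    if (job->left)
        schedule_soon(dma_tick, job);    /* re-arm until done */
}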
On Thu, 16 Aug 2007, Blue Swirl wrote:
> On 8/14/07, Blue Swirl [EMAIL PROTECTED] wrote:
> > Would the framework need any changes to support other targets? Comments welcome.
> Replying to myself: yes, changes may be needed. Some of the DMA
> controllers move the data outside the CPU loop, but that does