I've started working on getting PCI MGA cards, specifically the PCI G450, working with DRI. My initial goal is just to get it working, even with crummy performance, then to improve it by adding support for IOMMUs (to simulate AGP texturing) on systems like pSeries and AMD64 that have them.

I've started by digging through the DRI init process in the X.org MGA DDX. As near as I can tell, the driver uses AGP memory for four things. To make the PCI cards work, I'll need to make it do without AGP for these things.

1. WARP microcode. This seems *really* odd to me. The DDX carves off a 32KiB chunk of AGP space and hands it to the kernel to store the WARP microcode. Why is the DDX involved in this *at all*? The microcode exists only in the kernel module. It seems that the DRM could just as easily drm_pci_alloc a chunk of memory large enough to hold the microcode for the card (which differs between G400-class and G200-class cards). There's a rough sketch of this after the list.

2. Primary DMA buffer. The DDX carves off 1MB for the primary DMA buffer. I don't think that's outside the reasonable realm for drm_pci_alloc. If it is, can this work with a smaller buffer?

3. Secondary DMA buffers. The DDX carves off room for 128 64KiB DMA buffers. I haven't dug that deeply, but I seem to recall that the DRI driver uses these buffers non-contiguously. That is, it treats them as 128 separate buffers rather than one big 8MB buffer that it carves 64KiB chunks from. If that's the case, it should be easy enough to modify the driver to drm_pci_alloc (up to) 128 separate 64KiB chunks for PCI cards. Is there any actual performance benefit to having these in AGP space at all, or do they just have to be in the same "address space" as the primary DMA buffer?

4. AGP textures. Without an IOMMU, we pretty much have to punt here. Performance will be bad, but I can live with that.
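
To make the drm_pci_alloc idea in items 1-3 concrete, here's a rough sketch of what a PCI bootstrap path in the DRM might look like. It assumes the current drm_pci_alloc(dev, size, align, maxaddr) interface; mga_warp_microcode_size() and the dev_priv fields are placeholders I made up for illustration, not the real mga_drv.h layout.

static int mga_do_pci_dma_bootstrap(drm_device_t * dev)
{
        drm_mga_private_t *dev_priv = dev->dev_private;
        int i;

        /* 1. WARP microcode: the DRM already knows the microcode
         * size for this chip, so the DDX never needs to be
         * involved. */
        dev_priv->warp = drm_pci_alloc(dev,
                                       mga_warp_microcode_size(dev_priv),
                                       PAGE_SIZE, 0xffffffffUL);
        if (dev_priv->warp == NULL)
                return DRM_ERR(ENOMEM);

        /* 2. Primary DMA: 1MB of contiguous PCI-consistent memory. */
        dev_priv->primary = drm_pci_alloc(dev, 1 << 20, PAGE_SIZE,
                                          0xffffffffUL);
        if (dev_priv->primary == NULL)
                return DRM_ERR(ENOMEM);

        /* 3. Secondary DMA: up to 128 separate 64KiB allocations,
         * since the DRI driver treats them as independent buffers
         * anyway. */
        for (i = 0; i < 128; i++) {
                dev_priv->secondary[i] = drm_pci_alloc(dev, 64 << 10,
                                                       PAGE_SIZE,
                                                       0xffffffffUL);
                if (dev_priv->secondary[i] == NULL)
                        break;  /* live with fewer buffers */
        }

        return 0;
}

Each drm_dma_handle_t carries both a kernel virtual address and a bus address, which should be all the engine needs for PCI DMA.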


If these assumptions are at least /mostly/ correct, I think I have a pretty good idea of how I'll change the init process around. I'd like to, basically, pull most of MGADRIAgpInit into the kernel. There will be a single device-specific command called something like DRM_MGA_DMA_BOOTSTRAP. The DDX will pass in the desired AGP mode and size, and the DRM will do some magic and fill in the rest of the structure. The structure used will probably look something like the one below. Notice that the DDX *never* needs to know anything about the WARP microcode in this arrangement.


struct drm_mga_dma_bootstrap {
        /**
         * 1MB region of primary DMA space.  This is AGP space if
         * \c agp_mode is non-zero and PCI space otherwise.
         */
        drmRegion       primary_dma;

        /**
         * Region for holding textures.  If \c agp_mode is zero and
         * there is no IOMMU available, this will be zero size.
         */
        drmRegion       textures;

        /**
         * Up to 128 secondary DMA buffers.  Each region will be a
         * multiple of 64KiB.  If \c agp_mode is non-zero, typically
         * only the first region will be configured.  Otherwise, each
         * region will be allocated separately at 64KiB.
         */
        drmRegion       secondary_dma[128];

        u8      agp_size;       /**< Size of AGP region in MB. */
        u8      agp_mode;       /**< AGP mode to set.  Zero means PCI. */
};
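
For what it's worth, the DDX side would then shrink to something like this (a sketch only: DRM_MGA_DMA_BOOTSTRAP obviously doesn't exist yet, and the info-> fields stand in for whatever the screen private actually calls them):

struct drm_mga_dma_bootstrap bootstrap;

memset(&bootstrap, 0, sizeof(bootstrap));
bootstrap.agp_size = info->agpSize;     /* desired AGP size in MB */
bootstrap.agp_mode = info->agpMode;     /* 0 for PCI cards */

/* The DRM fills in primary_dma, textures, and secondary_dma. */
if (drmCommandWriteRead(info->drmFD, DRM_MGA_DMA_BOOTSTRAP,
                        &bootstrap, sizeof(bootstrap)) != 0) {
        /* bootstrap failed; disable DRI */
}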

Does this look good, or should I try to get more sleep before designing interfaces like this? ;)


