On Wed, Mar 19, 2003 at 03:56:04PM -0800, Philip Brown wrote:
> On Wed, Mar 19, 2003 at 11:46:13PM +0000, José Fonseca wrote:
> > On Thu, Mar 13, 2003 at 11:47:13PM -0800, Philip Brown wrote:
> > > The docs at http://dri.sourceforge.net/doc/drm_low_level.html
> > > say about drmAddMap(), for type DRM_SHM,
> > >
> > >   "A shared memory area of the requested size will be created and
> > >   locked in kernel memory."
> > >
> > > Is that supposed to be the same thing as "locked in physical
> > > memory"? Because unless it is specified more exactly, under some
> > > OSes it is not the same thing by default.
> >
> > You'll have to explain the difference between those more clearly.
> >
> > In either case, the physical memory offset itself is not important,
> > since the SAREA won't be accessed directly by the graphics
> > hardware: just the kernel, X and the client.
>
> I'd rather hear first what it is "supposed" to mean in the current
> implementations. The current docs say "locked in kernel memory".
> As written, that statement does not make sense to me, so I am
> GUESSING it really means "locked in physical memory". But I would like
> clarification on what it is supposed to mean.
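
(For reference, the map under discussion is the SAREA. Below is an
illustrative sketch of how such a map gets created from user space; it is
not code from the X server, it assumes the drmAddMap() prototype from
libdrm's xf86drm.h, and the size is made up. The real call is made by the
X server during DRI screen initialisation.)

    /*
     * Illustrative sketch only -- not code from the X server.  Assumes
     * the drmAddMap() prototype from libdrm's xf86drm.h (the handle type
     * is drm_handle_t in current headers; older ones call it drmHandle),
     * and the SAREA size below is made up.
     */
    #include <stdio.h>
    #include <xf86drm.h>

    int create_sarea(int drm_fd)
    {
        drm_handle_t sarea;                /* filled in by the kernel  */
        drmSize      sarea_size = 0x2000;  /* hypothetical SAREA size  */

        /* Type DRM_SHM is what hits the vmalloc_32() path quoted below;
         * DRM_CONTAINS_LOCK tells the kernel the hardware lock lives in
         * this map (dev->lock.hw_lock ends up pointing into it).       */
        if (drmAddMap(drm_fd, 0, sarea_size, DRM_SHM, DRM_CONTAINS_LOCK,
                      &sarea) < 0) {
            fprintf(stderr, "drmAddMap(DRM_SHM) failed\n");
            return -1;
        }

        printf("SAREA map handle: 0x%08lx\n", (unsigned long)sarea);
        return 0;
    }
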
You said yourself that "locked in physical memory" and "locked in kernel
memory" are two different things. I don't know why you're trying to guess
otherwise...

If you care to look at
xc/programs/Xserver/hw/xfree86/os-support/linux/drm/kernel/drm_bufs.h:130,
in the definition of DRM(addmap)(), you'll read:

      case _DRM_SHM:
              map->handle = vmalloc_32(map->size);
                            ^^^^^^^^^^
              DRM_DEBUG( "%ld %d %p\n",
                         map->size, DRM(order)( map->size ), map->handle );
              if ( !map->handle ) {
                      DRM(free)( map, sizeof(*map), DRM_MEM_MAPS );
                      return -ENOMEM;
              }
              map->offset = (unsigned long)map->handle;
              if ( map->flags & _DRM_CONTAINS_LOCK ) {
                      dev->sigdata.lock =
                      dev->lock.hw_lock = map->handle; /* Pointer to lock */
              }
              break;

And my /usr/src/linux/Documentation/DMA-mapping.txt says:

  "This means specifically that you may _not_ use the memory/addresses
  returned from vmalloc() for DMA. It is possible to DMA to the
  _underlying_ memory mapped into a vmalloc() area, but this requires
  walking page tables to get the physical addresses, and then translating
  each of those pages back to a kernel address using something like
  __va()."

(There is a rough sketch of that page-table walk at the end of this mail.)

> (I understand that the actual physical offset is not important, but
> I suspect that the original writers intended for it to be locked down
> into physical memory for performance reasons.)

If the memory doesn't need to be locked in physical memory then, unless
the OS is severely broken, I don't see what there is to be gained by
avoiding VM paging. You can surely do it that way, but I don't see the
point.

José Fonseca

PS: These aren't things to guess at, but to research.
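
PPS: The page-table walk mentioned in the DMA-mapping.txt excerpt above
would look roughly like this. This is a hypothetical sketch, not code from
the DRM tree; it assumes vmalloc_to_page() and page_to_phys() are available
in the kernel in question (older kernels would have to walk pgd/pmd/pte by
hand):

    /*
     * Hypothetical sketch -- not from the DRM sources.  Find the physical
     * pages backing a vmalloc()ed area, in the spirit of the
     * DMA-mapping.txt paragraph quoted above.  Assumes vmalloc_to_page()
     * and page_to_phys() exist in the target kernel.
     */
    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/vmalloc.h>
    #include <asm/io.h>

    static void show_backing_pages(void *vaddr, unsigned long size)
    {
            unsigned long off;

            for (off = 0; off < size; off += PAGE_SIZE) {
                    struct page *page =
                            vmalloc_to_page((char *)vaddr + off);
                    unsigned long phys = page_to_phys(page);

                    /* It is 'phys' (or a bus address derived from it),
                     * one page at a time, that could be handed to a DMA
                     * engine -- never the vmalloc() address itself. */
                    printk(KERN_DEBUG "vmalloc %p + %#lx -> phys %#lx\n",
                           vaddr, off, phys);
            }
    }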