On Tue, Nov 09, 2010 at 03:18:37PM +0000, Eduardo Horvath wrote:
> On Tue, 9 Nov 2010, Masao Uebayashi wrote:
>
> > On Tue, Nov 09, 2010 at 07:28:34PM +1100, matthew green wrote:
> >
> > > > I'll merge this in a few days.  I believe I've given enough reasonings
> > > > to back this design in various places.
> > >
> > > do not do this.
> > >
> > > this code has currently seen review that was less than favourable
> > > and you have not given much consideration to the flaws.  unless
> >
> > What are the flaws?
>
> There are two issues I see with the design and I don't understand how
> they are addressed:
>
> 1) On machines where the cache is responsible for handling ECC, how do you
> prevent a user from trying to mount a device XIP, causing a data error and
> a system crash?
Sorry, I don't understand this situation...  How does this differ from
user-mapped RAM pages with ECC?

> 2) How will this work with mfs and memory disks where you really want to
> use XIP always but the pages are standard, managed RAM?

This is a good question.  What you need to do is:

- Provide a block device interface (mount)
- Provide a vnode pager interface (page fault)

You'll allocate managed RAM pages in the memory disk driver and keep them.
When a file is accessed, the fault handler asks the vnode pager to hand
the relevant pages back to it.

My current code assumes the XIP backend is always a contiguous MMIO
device.  Because both the physical pages and their metadata (vm_page)
are contiguous, we can look up the matching vm_pages directly
(genfs_getpages_xip).

If you want to use managed RAM pages, you need to manage a collection of
vm_pages, presented as a range.  This is exactly what uvm_object is for.
I think it's natural that device drivers own a uvm_object and return
their pages to other subsystems, or "loan" pages to other uvm_objects,
like a vnode's.

The problem is that the current I/O subsystem and UVM are not integrated
very well.  So the answer is: you can't do that now, but it's a known
problem.  (Extending uvm_object and using it everywhere is the way to go.)
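
To make the difference concrete, here is a rough, untested sketch.  The
names (xip_dev, xd_pages, xip_lookup_page, md_softc) are made up for
illustration only; they are not the actual interfaces in my patch.

    #include <sys/param.h>
    #include <sys/types.h>
    #include <uvm/uvm.h>

    /*
     * Case 1: contiguous MMIO backend (what my current code assumes).
     * The device's physical pages and their vm_page metadata are both
     * contiguous, so the fault path can find the right vm_page by plain
     * arithmetic on the block number -- this is essentially what the
     * genfs_getpages_xip path relies on.
     */
    struct xip_dev {
            paddr_t          xd_paddr;      /* base of the MMIO window */
            struct vm_page  *xd_pages;      /* contiguous vm_page metadata */
    };

    static struct vm_page *
    xip_lookup_page(struct xip_dev *xd, daddr_t blkno)
    {

            /* DEV_BSIZE block -> page index -> vm_page; no lookup table */
            return &xd->xd_pages[blkno >> (PAGE_SHIFT - DEV_BSHIFT)];
    }

    /*
     * Case 2: memory disk backed by managed RAM.  The pages are scattered,
     * so the driver would own a uvm_object holding them, and the vnode
     * pager would have to ask that object for its pages (or "loan" them)
     * -- which is the part that doesn't exist today.
     */
    struct md_softc {
            struct uvm_object md_uobj;      /* owns the managed RAM pages */
    };

The point is that in case 1 the pager can go straight from a block number
to a vm_page, while in case 2 there has to be an object that owns the
pages and an interface for the vnode pager to get at them.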