Let me be more precise about the overall original question: I want a userland process, which I designate, to use only a specific hard-coded physical region of memory for its heap. A UIO driver is the means by which I've gone about trying to achieve this.
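For context, the UIO side of this is just an open() plus an mmap() of the device node. A minimal sketch, assuming the device node is /dev/uio0 and the caller knows the region size (in a real setup the size would come from /sys/class/uio/uio0/maps/map0/size); for UIO devices, the mmap offset N * getpagesize() selects the driver's mapping N:

```c
/* uio_map.c - sketch: map a UIO device's first memory region into
 * the process. The device path and size are assumptions, not part
 * of the original thread. */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

void *uio_map(const char *path, size_t len)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return NULL;
    /* offset 0 selects the driver's first memory mapping (maps/map0) */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping survives the close */
    return p == MAP_FAILED ? NULL : p;
}
```

On failure (no such device, mmap refused) it returns NULL rather than MAP_FAILED, so callers can test the pointer directly.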
On Tue, Oct 6, 2015 at 10:41 AM, Kenneth Adam Miller
<kennethadammil...@gmail.com> wrote:
>
> On Tue, Oct 6, 2015 at 10:32 AM, Yann Droneaud <ydrone...@opteya.com> wrote:
>>
>> Le mardi 06 octobre 2015 à 10:13 -0400, Kenneth Adam Miller a écrit :
>>>
>>> On Tue, Oct 6, 2015 at 9:58 AM, Yann Droneaud <ydrone...@opteya.com> wrote:
>>>>
>>>> Le mardi 06 octobre 2015 à 09:26 -0400, Kenneth Adam Miller a écrit :
>>>>>
>>>>> Does anybody know about the issue of assigning a process a region of
>>>>> physical memory to use for its malloc and free? I'd like the process
>>>>> to just call through to a UIO driver with an ioctl, and then once
>>>>> that's done it gets all its memory from a specific region.
>>>>
>>>> You mean CONFIG_UIO_DMEM_GENIRQ (drivers/uio/uio_dmem_genirq.c)?
>>>>
>>>> See:
>>>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=0a0c3b5a24bd802b1ebbf99e0b01296647b8199b
>>>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b533a83008c3fb4983c1213276790cacd39b518f
>>>> https://www.kernel.org/doc/htmldocs/uio-howto/using-uio_dmem_genirq.html
>>>
>>> Well, I don't think that does exactly what I would like, although I have
>>> it on my machine and I've been compiling it and learning from it. Here's
>>> my understanding of how mmap works:
>>>
>>> mmap() is called from userland and maps a region of memory of a certain
>>> size according to the parameters given to it; on success, its return
>>> value is the address at which the requested block starts (I'm not
>>> addressing the failure case here, for brevity). The userland process now
>>> has only a pointer to a region of space, as if it had allocated
>>> something with new or malloc.
>>> Further calls to new or malloc don't mean that the returned pointers
>>> will reside within the newly mmap'd chunk; they are just separate
>>> regions also mapped into the process.
>>
>> You have to write your own custom allocator using the mmap()'ed memory
>> you retrieved from UIO.
>
> I know about C++'s placement new. But I'd prefer not to have to write my
> userland code in such a way; I want my userland code to remain agnostic
> of where it gets its memory from. I just want to put a small prologue in
> my main, and then have the rest of the program oblivious to the change.
>
>>> What I would like is a region of memory such that, once it is mapped
>>> into a process, further calls to new/malloc return pointers that reside
>>> within this chunk. Calls to new/malloc and delete/free would only edit
>>> the process's internal bookkeeping, which is fine.
>>>
>>> Is that wrong? Or does mmap already do the latter?
>>
>> It's likely wrong. glibc's malloc() uses brk() and mmap() to allocate
>> anonymous pages. Tricking this implementation into using another means
>> of retrieving memory is left to the reader.
>>
>> Anyway, are you sure you want any random calls to malloc() (from glibc
>> itself or any other linked-in libraries) to eat UIO-allocated buffers?
>> I don't think so: such physically contiguous, cache-coherent buffers
>> are premium resources; you don't want to distribute them gratuitously.
>
> Yes - we have a hard limit on memory for our processes, and if they try
> to use more than what we mmap to them, they die, and we're more than
> fine with that. In fact, that's part of our use case and model: we've
> planned to allocate just five or so processes on our behemoth machine
> with gigabytes of memory. So they aren't so premium to us.
>
>> Regards.
>>
>> --
>> Yann Droneaud
>> OPTEYA
_______________________________________________
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies