On Fri, 26 Aug 2005, Andreas Baer wrote:

> What are the current file size limits for memory mapping via glibc's
> mmap() function on Linux:
It's 2^wordsize minus the areas used for other purposes: 1-3 GB on i386, a
couple of petabytes(?) on ia64.

> - I doubt that the full 64-Bit (something within Exabyte) are available
> in practical use. Right or wrong?

You can likely map a 512 MB section starting at 6 GB into your address space.

> I've also found an old kernel-list e-mail from 2004 that says:
> "There is a limit per process in the kernel vm that prevent from
> mmapping more than 512GB of data."
>
> - Is this still true for the current kernel?

That depends on the architecture. It is certainly no problem on Itanium.

> Let's presume the following case. I have an 8 GB file, 1 GB physical
> memory and I want to use memory mapping for that file using LFS on a
> 32-Bit machine.
> - Is it possible?

Yes. I am not sure why one would have to use LFS, though. This should work on
any filesystem.

> If yes, let's presume that I have 2 or more pointers that are
> frequently pointing to completely different places and switch the data
> they are pointing to.
>
> - How is it managed (by the kernel)? Through the pages that are
> mentioned in the glibc documentation above? Are these page operations
> really faster than normal random file access (lseek etc.)?

The kernel takes a fault if the page pointed to is not present and reads it in
from the file. And yes, it is faster, because the I/O is page-aligned and the
page does not need to be copied from kernel to user space.
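For the 8 GB file on a 32-bit box, a minimal sketch of the windowed mapping
described above might look like the following. The file name bigfile.dat,
the 512 MB window and the 6 GB offset are made-up values, and it assumes a
build where off_t has been made 64 bits via _FILE_OFFSET_BITS=64 so offsets
past 2 GB can be expressed. Only the window occupies address space; the
kernel faults pages in from the file as they are touched.

/* Sketch: map a 512 MB window of a large file at a 6 GB offset
 * on a 32-bit system. File name and sizes are illustrative. */
#define _FILE_OFFSET_BITS 64   /* 64-bit off_t on 32-bit glibc */
#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        const size_t window = 512UL * 1024 * 1024;       /* 512 MB window  */
        const off_t  offset = 6LL * 1024 * 1024 * 1024;  /* 6 GB into file */

        int fd = open("bigfile.dat", O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Only the window consumes address space; pages are read in
         * lazily by the fault handler as they are touched. */
        char *p = mmap(NULL, window, PROT_READ, MAP_SHARED, fd, offset);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        printf("first byte of window: %d\n", p[0]);

        /* To look at a different region, drop this window and map
         * another; the offset passed to mmap() must be page-aligned. */
        munmap(p, window);
        close(fd);
        return 0;
}

To move one of those "pointers" to a completely different part of the file,
munmap() the old window and mmap() a new offset; the data is still pulled in
page by page on first access rather than copied through a read() buffer.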