> I believe the idea is to be able to view parts of huge file without
> loading them to RAM first. (for really big files, they may not even
> fit in RAM.)
Some really big files won't fit even in virtual memory. For example, on 32-bit architectures (like the commonly used i386), each process has 2 GB of usable virtual memory (the remaining 2 GB is reserved for kernelspace). Since some of that 2 GB space is already occupied by libraries, etc., you can usually get perhaps 1 GB of virtual memory in which to mmap things (since the virtual address range into which you mmap the file has to be contiguous). And files larger than 4 GB are quite common (large archives, Apache logs from busy servers, etc.).

So mmap may be unavailable at compile time (due to platform issues) or at runtime (if you request too large a block).

We could have some "get that file into memory" call that tries to use mmap if possible, and stores a pointer for freeing the block (which would call munmap, free or some other method, depending on how the block was acquired).

But we still need to cope with situations where the file fits neither in RAM nor in virtual memory, for example an 8 GB file on an i386 machine with 2 GB of RAM.

Martin Petricek

_______________________________________________
Mc-devel mailing list
http://mail.gnome.org/mailman/listinfo/mc-devel
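
A minimal sketch of what such a "get that file into memory" call could look like, under the assumptions above: try mmap first, fall back to malloc()+read(), and record which method was used so the matching release call knows whether to munmap() or free(). All names here (file_block, acquire_block, release_block) are invented for illustration; this is not existing mc code, and a real version would guard the mmap path with a configure-time check (HAVE_MMAP) and handle windows of a huge file at successive offsets rather than the whole file:

```c
/* Sketch only: acquire a block of a file, preferring mmap(),
 * falling back to malloc()+read() when mmap() fails (e.g. the
 * requested block is too large for the virtual address space). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

typedef struct {
    void   *data;
    size_t  size;
    enum { BLOCK_MMAP, BLOCK_MALLOC } method; /* how to free it later */
} file_block;

/* Get the first `len` bytes of `path` into memory. Returns 0 on success. */
static int acquire_block (const char *path, size_t len, file_block *blk)
{
    int fd = open (path, O_RDONLY);
    if (fd < 0)
        return -1;

    /* First choice: map the file; no copy, no RAM needed up front. */
    void *p = mmap (NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p != MAP_FAILED) {
        blk->data = p;
        blk->size = len;
        blk->method = BLOCK_MMAP;
        close (fd);
        return 0;
    }

    /* Fallback: read the block into heap memory. This is where an
     * 8 GB file on a 32-bit box would still fail, so callers must
     * request windows small enough to fit. */
    blk->data = malloc (len);
    if (blk->data == NULL) {
        close (fd);
        return -1;
    }
    ssize_t n = read (fd, blk->data, len);
    close (fd);
    if (n < 0) {
        free (blk->data);
        return -1;
    }
    blk->size = (size_t) n;
    blk->method = BLOCK_MALLOC;
    return 0;
}

/* Free the block with whichever method acquired it. */
static void release_block (file_block *blk)
{
    if (blk->method == BLOCK_MMAP)
        munmap (blk->data, blk->size);
    else
        free (blk->data);
    blk->data = NULL;
}
```

The viewer would then only ever hold one such block (one window of the file) at a time, calling release_block() before acquiring the next window.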