Re: [Libhugetlbfs-devel] version 1.3 vs 2.12 vs current
On Fri, 15 Jun 2012 11:48:47 -0400, starlight.201...@binnacle.cx wrote:
> Mostly answered these myself. Posting to help future searchers:
>
> (1) yes
>
> (2) yes
>
> In both cases it seems one should always set HUGETLB_MORECORE=yes
> in the environment as I don't see how that can be made the default
> when linking executables.

It cannot be made the default because it requires malloc to obtain heap
memory through a custom function instead of brk(). Part of what the
library does when you set HUGETLB_MORECORE=yes is to install that
function.

Also, libhugetlbfs shouldn't interact with Transparent Huge Pages (THP)
in any negative way. Because THP only works on anonymous memory,
anything set up using libhugetlbfs will be ignored by THP, since it is
all file-backed.

> (3) don't know; doesn't seem worth the trouble to try it as the added
> command utilities seem to be either (a) wrappers that set the env
> variables or (b) interact with newer kernel hugepage features; i.e.
> there's not much value-add

It may be possible to build the library, but 2.12 depends on kernel
features that are not present in the RHEL 5.X kernels. So you are
correct in that it isn't really worthwhile.

> At 01:14 PM 6/13/2012 -0400, starlight.201...@binnacle.cx wrote:
>> Hello,
>>
>> I'm interested in how the different versions behave and relate to
>> each other. Have an app that is well suited to hugepages and has
>> straightforward linking. This app already supports creating its own
>> work areas in hugepage shared memory segments. Looking to get as
>> much code and malloc into hugepages as possible without changing
>> anything.
>>
>> The app links with 'libhugetlbfs' and works under CentOS 5.8 with
>> kernel 2.6.18-308.el5. A debug trace shows code segments being
>> copied to huge page memory successfully, and 'pmap -x' shows them as
>> "deleted" (really the hugepages flag; a bug in pmap).
>>
>> So that's great and I'm quite happy.
>>
>> What I'm wondering is:
>>
>> 1) will higher-version 'libhugetlbfs.so' libraries work OK, such
>> that binaries linked on this system with 1.3 will run properly on
>> newer distros and kernels such as CentOS 6.2 with 2.12?
>>
>> 2) if (1) is true, will newer features such as transparently mapping
>> anonymous segments in hugepages work automatically in this case? Or
>> is a relink on the newer system required?
>>
>> 3) is it possible to build a current version of 'libhugetlbfs' and
>> utilities on the older RHEL 5.8 distribution to get the benefit of
>> the utilities that have been added subsequent to version 1.3? Or
>> will it break due to various incompatibilities? I have built and
>> installed binutils 2.22 in /usr/local if that makes any difference.
>>
>> I'm hoping someone might know the answers to these questions
>> already, without exerting effort. No problem if the answer is
>> "unknown" or "undefined".
>>
>> Thanks!
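For readers wanting to try the above, a minimal run-time sketch follows.
The binary name ./myapp is a placeholder for a program already linked
against (or preloaded with) libhugetlbfs, and the hugectl line only
applies to 2.x releases that ship the wrapper utilities:

    # back the malloc heap with huge pages for this invocation only
    HUGETLB_MORECORE=yes HUGETLB_VERBOSE=2 ./myapp

    # roughly equivalent on 2.x via the wrapper utility
    hugectl --heap ./myapp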
Re: [Libhugetlbfs-devel] Help on automatic backing of memory regions
On Thu, 14 Jun 2012 17:40:31 +0200, telenn barz wrote:
> Hi all,
>
> This post is a request for clarification on some features of
> libhugetlbfs. I realize that this mailing list is not intended for
> this kind of help request, but after searching unsuccessfully in the
> mailing list archive, reading the man pages of hugetlbfs, hugeadm and
> hugectl, as well as the excellent article series on LWN.net (the
> second one in particular: http://lwn.net/Articles/375096/), I didn't
> find any other relevant place to ask for help. So sorry if I bother
> you with my noob questions, and thanks in advance to anyone who would
> take a little time to answer.
>
> My questions are related to automatic backing of memory regions when
> the system supports multiple page sizes. Say, for instance, 4, 16,
> 64, 256 KB, 1, 4, 16, 64, 256 MB, 1, 4 GB page sizes. We also make
> the assumption that pools of each page size have been configured
> (hugeadm), and that the application has been pre-linked to
> libhugetlbfs.
>
> Question 1:
> Does libhugetlbfs optimally back text, data and bss segments? I mean,
> if a text segment is 5,259,264 bytes, will it be mapped with a
> combination of "4 MB + 1 MB + 16 KB" page sizes? In other words: when
> using hugectl, is it allowed to repeat the "--text", "--data", "--bss"
> options with different page sizes, or does it only work with a given
> page size?

Each segment will only work with a single page size; that page size can
be selected at run time by passing the desired size to the appropriate
segment flag (e.g. --text=4M --bss=16K). If no page size is specified,
the system default is selected. So in your example of a text segment of
5,259,264 bytes with a system default of 4MB huge pages, two 4MB pages
will be used. Note that repeating a --text option overwrites the
previously requested page size.

> Question 2:
> Same question for the heap.

The rules for the heap are the same as for the other segments with
respect to huge page size.

> Question 3:
> Is it possible to limit the number of huge pages to be allocated per
> process for the heap (knowing that once this limit is reached, the
> next allocations will fall back to the default page size, 4 KB)?

There is a set of patches being discussed to add a cgroup controller
for huge pages; AFAIK it has not yet been merged, though I believe it
is close (see here: https://lkml.org/lkml/2012/6/9/22). The best way I
can think of to limit huge page usage is to use separate mount points
for users, each of which can have a different limit.

> Regards,
> Telenn
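To make both answers above concrete, a minimal command-line sketch is
given below. The program name ./myapp, the mount point /mnt/huge-heap,
and all sizes are placeholders; the --heap=<size> form and the
HUGETLB_PATH mount override are assumptions based on the libhugetlbfs
documentation and may differ between versions:

    # one page size per segment, selected at run time through hugectl
    hugectl --text=4M --data=4M --bss=16K --heap=4M ./myapp

    # an indirect per-user heap cap: a dedicated hugetlbfs mount with a
    # size limit, then point the process at that mount
    mount -t hugetlbfs -o pagesize=4M,size=64M none /mnt/huge-heap
    HUGETLB_PATH=/mnt/huge-heap HUGETLB_MORECORE=yes ./myapp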
Re: [Libhugetlbfs-devel] libhugetlbfs malloc test fail
On Tue, 12 Jun 2012 21:13:42 -0400, Josh Boyer wrote:
> On Tue, Jun 12, 2012 at 5:18 PM, Eric B Munson wrote:
>> On Tue, 12 Jun 2012 15:34:32 -0400, Josh Boyer wrote:
>>>
>>> Hi,
>>>
>>> I've been poking around at the libhugetlbfs testcases and there are
>>> a small number of failures that I can't figure out. The primary one
>>> is the malloc test. From all appearances, it seems that the custom
>>> hugetlbfs_morecore function isn't getting called when the test
>>> calls malloc, and it fails because the page wasn't allocated from
>>> the hugepages. You can see below that the INFO call in the
>>> hugetlbfs_morecore function never prints anything:
>>>
>>> [jwboyer@zod obj64]$ sudo
>>> LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:../../obj64/
>>> LD_PRELOAD=../../obj64/libhugetlbfs.so HUGETLB_MORECORE=yes
>>> HUGETLB_VERBOSE=4 ./malloc
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: Found pagesize 2048 kB
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: Parsed kernel version: [3] . [3] . [7]
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: Feature private_reservations is present in this kernel
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: Feature noreserve_safe is present in this kernel
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: Feature map_hugetlb is present in this kernel
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: Kernel has MAP_PRIVATE reservations.  Disabling heap prefaulting.
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: Kernel supports MAP_HUGETLB
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: HUGETLB_SHARE=0, sharing disabled
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: HUGETLB_NO_RESERVE=no, reservations enabled
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: No segments were appropriate for remapping
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: setup_morecore(): heapaddr = 0xe0
>>> Starting testcase "./malloc", pid 22061
>>> HUGETLB_MORECORE=yes
>>> HUGETLB_RESTRICT_EXE=(null)
>>> expect_hugepage=1

I missed it last time through, but this line is the problem. Note that
earlier setup_morecore() said that the heap base is at 0xe0, and we
allocated below that. If the __morecore hook was called, there would be
more debug output here about what it was asked to do.

>>> malloc(4) = 0xd89010
>>> FAIL    Address is not hugepage
>>> [jwboyer@zod obj64]$
>>>
>>> This is with the latest git head of libhugetlbfs, and I have 32
>>> hugepages set up and free:
>>>
>>> [jwboyer@zod glibc]$ grep Huge /proc/meminfo
>>> AnonHugePages:  299008 kB
>>> HugePages_Total:    32
>>> HugePages_Free:     32
>>> HugePages_Rsvd:      0
>>> HugePages_Surp:      0
>>> Hugepagesize:     2048 kB
>>> [jwboyer@zod glibc]$
>>>
>>> I ran the test in gdb as well, and my breakpoint on
>>> hugetlbfs_morecore never triggered after main was started. Perhaps
>>> this is a side-effect of glibc already having pre-allocated heap
>>> from arenas?
>>>
>>> Has anyone else seen this?
>>
>> Quick question, do you have hugetlbfs mounted?
>
> Yep.
>
> [jwboyer@zod glibc]$ mount | grep hugetlb
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
> [jwboyer@zod glibc]$
>
> I would have thought that was obvious from:
>
>>> libhugetlbfs [zod.bos.redhat.com:22061]: INFO: Found pagesize 2048 kB

Not necessarily; there is usually a small portion after that which
prints the mount point. That is why I asked.
>
> All of the other tests in the testsuite complete fine with the
> exception of the malloc HUGETLB_MORECORE tests where expect_hugepage=1
> and the readahead_reserve.sh test. The latter seems to be known
> broken, or at least was reported as such in 2010 with no replies.
>
> Again, it seems that the hugetlbfs_morecore function is never called
> for the malloc tests. I even ran it under gdb, set a breakpoint at
> main, then set one at hugetlbfs_morecore after I was in main, and it
> never hit that breakpoint.

The setup for this hasn't changed in a number of years; the only thing
I can come up with is that glibc is ignoring the __morecore hook. I am
looking into why this might happen.
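For anyone trying to reproduce this, a small diagnostic sketch along
the lines of the commands already used in this thread (the paths follow
Josh's transcript; HUGETLB_DEBUG is an extra libhugetlbfs diagnostic
switch and is assumed to be available in the build under test):

    # re-run the failing test with maximum diagnostics from the library
    HUGETLB_VERBOSE=99 HUGETLB_DEBUG=1 HUGETLB_MORECORE=yes \
        LD_PRELOAD=../../obj64/libhugetlbfs.so ./malloc

    # double-check pool state and the hugetlbfs mount the library will use
    grep Huge /proc/meminfo
    mount -t hugetlbfs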