On Mon, Dec 03, 2007 at 02:10:54PM -0600, Steve Fox wrote:
> On Mon, 2007-12-03 at 13:14 +1100, David Gibson wrote:
> > On Tue, Nov 20, 2007 at 10:03:56AM -0600, Steve Fox wrote:
> > > From my understanding, with proper BSS padding virtaddr + memsz should
> > > end up aligned on a 1TB boundary, 0x20000000000 in this case. But
> > > readelf shows it totals 0x10100000000. If I use gdb to break inside
> > > glibc, I see the heap beginning immediately after 0x10100000000, which
> > > mixes huge and base pages in the same segment.
> > >
> > > Am I off in the weeds?
> >
> > Slightly. That alignment / adjustment is about getting the *start* of
> > the BSS in the right place, not the end.
>
> Right. That's what I was trying to say (apparently not so well :). The
> 0x10100000000 address is the end of the BSS padding. I expected it to be
> at 0x20000000000 because I added a second ALIGN statement. But your
> statements below (regarding overcommit) make me wonder if ld
> intentionally limits the amount of padding we can do.
Uh.. that seems very unlikely. I could imagine ld choking on
too-large segments, but not silently truncating the padding. Can you
send a full copy of the modified script we're discussing? I think
we've only seen fragments in the thread so far. Could it be
something as simple as a typo in the number of zeroes in your ALIGN?
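For comparison, the kind of change I'd have expected is something like the fragment below. This is purely illustrative - the section contents are a generic .bss layout, not taken from your actual script - but note that 1TB is 0x10000000000 (ten zeroes), so one stray or missing digit silently lands you on a very different boundary:

```
/* Hypothetical linker script fragment: pad the end of .bss out to
 * the next 1TB slice boundary.  Section contents are illustrative,
 * not copied from the script under discussion. */
  .bss : {
    *(.dynbss)
    *(.bss .bss.* .gnu.linkonce.b.*)
    *(COMMON)
    /* Push the end of the segment (and hence where the heap would
     * normally start) out to the next 1TB boundary: */
    . = ALIGN(0x10000000000);
  }
```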
> > First, we may have an option about where the heap should go. Putting
> > it immediately after the BSS segment would be "normal", but having BSS
> > at funny addresses in hugepages isn't exactly normal anyway. So,
> > putting the heap after the (hugepage) BSS is one option, which will
> > require padding the BSS out, or otherwise pushing the break out to the
> > next slice boundary.
>
> Since the simple padding doesn't work, I'd love to hear other ideas.
>
> > The other option, however, is to have the heap follow the normal page
> > data segment.
>
> If the BSS begins at 1TB, doesn't that limit the heap to < 1TB? Isn't
> that limitation similar to my proposal to align the BSS in 256MB
> segments? The difference being whether the heap or the BSS is more
> likely to hit that limit.
Well, yes, that's true. Mind you, something that uses >1TB of heap is
probably a good candidate for using the heap-in-hugepages feature of
libhugetlbfs as well. I think that would put the hugepage heap after
the BSS, avoiding that restriction.
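For the record, that feature is driven by environment variables at run time, roughly like so (a sketch - the binary name ./myapp is illustrative, and this assumes libhugetlbfs is installed and hugepages are configured on the system):

```
# Back malloc()'s heap with hugepages via libhugetlbfs' morecore
# override.  ./myapp stands in for whatever binary is under test.
LD_PRELOAD=libhugetlbfs.so HUGETLB_MORECORE=yes ./myapp
```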
> > The second issue is that padding out the BSS too much can cause
> > out-of-memory errors on loading. We know that the padding pages will
> > never be instantiated, but the kernel doesn't and I think there's a
> > limit to how far it will let you overcommit. I've had trouble with
> > just the 256MB padding when I was experimenting on a small-memory 4xx
> > machine. Padding the BSS to 1TB could well cause trouble even on
> > fairly large machines.
>
> Interesting. I hadn't considered that.
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
_______________________________________________
Libhugetlbfs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libhugetlbfs-devel