I have been following this thread as an interested spectator, not having had the opportunity to "play". Even zLinux has had large pages for a while now - pity about z/VM ... ahem. Linux doesn't attempt to break large pages up under memory pressure, but it does allow the number of such pages to be adjusted dynamically (I haven't checked the zLinux code, but I presume it behaves likewise). It also spreads the allocation across any (NUMA) memory nodes that are available - does zSeries do likewise across multiple books?
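For anyone who hasn't poked at this, the dynamic adjustment and the per-node spread are both visible from the standard Linux interfaces; a rough sketch (standard x86/generic Linux paths - I haven't verified these on zLinux, and resizing the pool needs root):

```shell
# Show the current huge page pool state.
grep -i huge /proc/meminfo

# Dynamically resize the pool; the kernel allocates or frees whole
# huge pages but never splits one under memory pressure.
echo 128 > /proc/sys/vm/nr_hugepages    # or: sysctl -w vm.nr_hugepages=128

# On NUMA systems the allocation is spread across nodes; the per-node
# counts are visible (and settable) under sysfs, e.g. for node 0:
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
```

Note that an increase can fail silently if enough contiguous memory isn't available - re-read nr_hugepages afterwards to see what you actually got.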
Shane ...

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html