> David S. Miller wrote:
> > The only thing that breaks is if apps don't call sysconf(_SC_PAGESIZE)
> > or some similar function such as getpagesize() to obtain that
> > information portably.
> 
> .. or they make assumptions about the possible range of values. ;)
> 
> > Or did Solaris accidentally return 8K always in some version of the
> > Solaris libc?  I don't see how this is possible, as applications
> 
> No, that wasn't the case. The case Bart mentioned was one where an app was
> creating an unmapped zone around a segment, and the segment ended up being
> entirely unmapped because they were subtracting 64K off each end of a 128K
> segment. It was clearly a bug, but since the app was statically linked to
> library code containing the bug, there wasn't a good way to fix it in the
> field.
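The failure mode described there can be reconstructed in a few lines. This is a hypothetical sketch of the bug class, not the actual app; only the 64K constant and the 128K segment size come from the account above:

```c
/* Hypothetical reconstruction of the bug class described above:
 * a hard-coded 64K red zone is carved off each end of a 128K
 * segment, leaving nothing mapped. Deriving the zone size from
 * the real page size avoids baking the assumption in. */
#include <unistd.h>

#define SEG_LEN  (128 * 1024L)
#define RED_ZONE (64 * 1024L)          /* the buggy assumption */

long usable_buggy(void)
{
    return SEG_LEN - 2 * RED_ZONE;     /* 128K - 2*64K = 0 bytes left */
}

long usable_fixed(void)
{
    long zone = sysconf(_SC_PAGESIZE); /* one page per end suffices */
    return SEG_LEN - 2 * zone;
}
```

On any system whose page size is below 64K the fixed version leaves most of the segment usable, while the buggy one consumes it entirely — and being statically linked in, the constant could not be patched in the field.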
> 
> There were the other drawbacks I mentioned as well, which we could get
> around by only upping the page size on newer platforms with better cache
> associativity and larger memory. This approach may be tenable.
> 
> > I also disagree with Eric Lowe about the usefulness of increasing the
> > base page size.  It's very useful, and that's why we have several
> > platforms under Linux which have moved up to a default page size of
> > 64K or larger (IA64, PowerPC 64-bit).  We even use 256MB TLB entries
> > for the Linux kernel on Niagara, and if the chip supported 16GB TLB
> > entries we'd use those too; it's a huge issue.
> 
> In the case of Niagara the tradeoffs are clearly in favor of optimizing
> for the TLB. Our auto-MPSS policies are very aggressive on that platform
> and result in most of the heap and stack being mapped with 64K and larger
> pages, up to 256M, and large pages are also automatically selected for
> suitably sized text on that platform. It would be interesting to try
> native 64K PAGESIZE support on a Niagara and see how much of a win it is
> over what is currently available in Nevada... When the 64K prototype was
> done a few years back, most of the performance analysis was done on
> machines of the SunFire 6800-15K class, since they have the biggest
> memories;

Comparing the SF6800/SF15K with Niagara is problematic. The broken MMU design in 
the US3/US4 CPU models used in those machines is unable to use a significant 
number of 64K pages. If there was still a small performance win even there, that 
would suggest that an all-64K kernel has significant performance advantages over 
the stock version with its 8K "dwarf page" size.

> this was before Niagara silicon even taped out!

Could Sun get the project code released into OpenSolaris, please? I agree with 
both David Miller and Roland Mainz that a kernel which uses 64K pages by 
default will have significant performance advantages over the kernel which uses 
"dwarf pages". The 8K page size is a significant limitation.

Holger
This message posted from opensolaris.org
_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
