Agreed.  But it was my understanding that the performance benefit comes from NOT 
having to do it 256 times.
And restructuring the TLB handling would likely break the current implementation 
and cause a compatibility
issue. Large-page storage is also not backed on DASD; but then, neither are ESO Dataspaces.
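For what it's worth, the 256-requests point is easy to picture with a plain C 
analogy. This is only a hypothetical illustration, not z/OS STORAGE OBTAIN, but 
the shape of the work is the same: one request for a full 1 MB region versus 256 
separate 4 KB requests for the same total storage.

    #include <stdlib.h>

    #define PAGE_4K  (4 * 1024)        /* 4 KB page-sized request   */
    #define SEG_1M   (1024 * 1024)     /* 1 MB segment-sized request */

    int main(void)
    {
        /* One request for a full 1 MB region... */
        void *big = malloc(SEG_1M);

        /* ...versus 256 separate 4 KB requests: the same total storage,
           but 256 trips through the allocator instead of one. */
        void *small[256];
        for (int i = 0; i < 256; i++)
            small[i] = malloc(PAGE_4K);

        for (int i = 0; i < 256; i++)
            free(small[i]);
        free(big);
        return 0;
    }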

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Gerhard Adam
Sent: Sunday, August 15, 2010 5:58 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Anyone Using z10 Large Pages in anger

>In theory, getting 1 Storage Obtain for 1M should be less overhead than 256 4K 
>versions. The virtual storage algorithms to get the data is supposed to be 
>less (need less levels of hashing).

It is my understanding that the primary "benefit" is to extend the reach of the 
TLB so that, effectively, address translation occurs at the segment rather than 
the page boundary (in terms of TLB entries).

This seems like a difficult metric to obtain, and beyond theoretical 
explanations it doesn't seem likely to get any easier.  It is all predicated on 
the notion that, for many large-memory applications, the concept of 
"super-pages" can increase the practical usefulness of the TLB without having to 
make it any larger.

In that way, a 2x256 TLB implementation (512 entries) can suddenly span 512 MB 
with 1 MB pages, instead of only 2 MB with 4 KB pages.
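
Spelled out as a rough back-of-the-envelope sketch (the 2x256 organization is 
just the illustrative figure above, not a claim about the actual z10 TLB 
geometry):

    #include <stdio.h>

    int main(void)
    {
        /* 2 x 256 entries, per the illustrative figure above. */
        unsigned long entries  = 2UL * 256UL;
        unsigned long reach_4k = entries * 4UL * 1024UL;      /* 4 KB pages */
        unsigned long reach_1m = entries * 1024UL * 1024UL;   /* 1 MB pages */

        printf("TLB reach, 4K pages: %lu MB\n", reach_4k / (1024UL * 1024UL)); /* 2   */
        printf("TLB reach, 1M pages: %lu MB\n", reach_1m / (1024UL * 1024UL)); /* 512 */
        return 0;
    }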

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
