On Mon, 16 Aug 2010 12:09:04 -0500, Elpida Tzortzatos wrote:

>I do want to emphasize that even with the APAR, capacity planning needs to
>be done before selecting the LFAREA size. The optimal configuration is when
>there is enough in the 4K memory pool to handle the 4K workload and enough
>in the 1MB memory pool to handle the 1MB workload. The APAR corrects for
>an over-specification of the LFAREA size (and under-specification of the
>4K memory pool). However, you still need to ensure that the 4K memory pool
>is large enough to accommodate the 4K workload, especially since large
>pages cannot be used to back 4K fixed storage (SQA or fix requests from
>non-swappable address spaces like DB2).
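To make the quoted guidance concrete, the planning exercise boils down to arithmetic over a handful of estimates. Here is a rough sketch (Python, nothing official, and every number and name in it is made up purely for illustration) that checks whether, after carving out a candidate LFAREA, the remaining real storage still covers the estimated 4K fixed and pageable demand:

# Hypothetical sizing sketch, not an IBM tool: given rough estimates of
# the 1MB-page demand and the 4K demand, suggest an LFAREA value that
# still leaves enough real storage in the 4K pool. Figures are illustrative.

def suggest_lfarea_mb(total_real_mb: int,
                      large_page_demand_mb: int,
                      fixed_4k_demand_mb: int,
                      pageable_4k_demand_mb: int) -> int:
    """Return a suggested LFAREA size in MB, or raise if storage is short."""
    # Real storage left for the 4K pool after carving out the large frame area.
    remaining_4k_mb = total_real_mb - large_page_demand_mb

    # Large pages cannot back 4K fixed storage (SQA, fix requests from
    # non-swappable address spaces like DB2), so the 4K pool alone must
    # cover that demand plus the expected pageable 4K workload.
    needed_4k_mb = fixed_4k_demand_mb + pageable_4k_demand_mb
    if remaining_4k_mb < needed_4k_mb:
        raise ValueError(
            f"4K pool would be {remaining_4k_mb} MB but the workload needs "
            f"about {needed_4k_mb} MB; reduce LFAREA or add real storage")

    return large_page_demand_mb


if __name__ == "__main__":
    # Illustrative numbers only: a 32 GB LPAR, 4 GB of 1MB-page demand,
    # 6 GB of 4K fixed demand, 18 GB of pageable 4K demand.
    lfarea_mb = suggest_lfarea_mb(total_real_mb=32 * 1024,
                                  large_page_demand_mb=4 * 1024,
                                  fixed_4k_demand_mb=6 * 1024,
                                  pageable_4k_demand_mb=18 * 1024)
    print(f"LFAREA={lfarea_mb}M")   # e.g. LFAREA=4096M in IEASYSxx

Of course, the whole calculation is only as good as the estimates fed into it, which is my point below.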
Workloads vary. I'm wondering whether a better approach would be for a system component, WLM for example, to periodically set an LFAREA size appropriate for the now-running workload. Asking me to predict how many large pages I will need for the life of my NEXT IPL is a strategy that can only work by accident, in my opinion. I would prefer an approach that is self-tuning.

Brian