>>> On 13.10.17 at 10:49, <jbeul...@suse.com> wrote:
>>>> On 29.09.17 at 13:25, <roger....@citrix.com> wrote:
>> nr_pages doesn't take into account holes or MMIO regions, and
>> underestimates the amount of memory needed for paging. Be on the
>> safe side and use max_pdx instead.
>>
>> Note that both cases are just approximations, but using max_pdx
>> yields a number of free pages after Dom0 build that is always
>> greater than the minimum reserve (either 1/16 of memory or 128MB,
>> whichever is smaller).
>>
>> Without this patch, on a 16GB box the amount of free memory after
>> building Dom0 without specifying any dom0_mem parameter would be
>> 122MB; with this patch applied, the amount of free memory after
>> Dom0 build is 144MB, which is greater than the reserved 128MB.
>
> For the case of there not being a "dom0_mem=" this may indeed
> be acceptable (albeit I notice the gap is larger than before, just
> this time in the right direction). For the supposedly much more
> common case of there being "dom0_mem=" (and with a positive
> value), however, not using nr_pages ...
>
>> @@ -288,7 +289,7 @@ unsigned long __init dom0_compute_nr_pages(
>>              break;
>>
>>          /* Reserve memory for shadow or HAP. */
>> -        avail -= dom0_paging_pages(d, nr_pages);
>> +        avail -= paging_pgs;
>
> ... here is likely going to result in a huge overestimation.
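
To put rough numbers on the magnitude: below is a standalone sketch
of how strongly the reserve scales with the page count it is computed
over. Everything in it is illustrative - paging_pages() merely mimics
dom0_paging_pages() (itself derived from
libxl_get_required_shadow_memory()), and the vCPU count and box sizes
are made up:

    #include <stdio.h>

    /*
     * Illustrative stand-in for dom0_paging_pages(): a fixed
     * per-vCPU amount plus a small fraction of the memory the
     * estimate is computed over.  4k pages and a 64-bit build are
     * assumed.
     */
    static unsigned long paging_pages(unsigned long nr_pages,
                                      unsigned int vcpus)
    {
        unsigned long memkb = nr_pages * 4;

        memkb = 4 * (256 * vcpus + 2 * (memkb / 1024));
        return (memkb + 1023) / 1024 * 256; /* kb -> MB -> 4k pages */
    }

    int main(void)
    {
        unsigned long dom0 = (256UL << 20) >> 12; /* dom0_mem=256M  */
        unsigned long pdx = (16UL << 30) >> 12;   /* max_pdx, ~16GB */

        printf("reserve from nr_pages: %lu pages (~%luMB)\n",
               paging_pages(dom0, 4), paging_pages(dom0, 4) >> 8);
        printf("reserve from max_pdx:  %lu pages (~%luMB)\n",
               paging_pages(pdx, 4), paging_pages(pdx, 4) >> 8);
        return 0;
    }

With these made-up inputs the reserve grows from roughly 6MB (sized
for a 256MB Dom0) to over 130MB (sized for the whole 16GB span).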
Such an overestimation, I realize, may or may not be a problem - the
question is whether, and if so how far, the clamping done by

    nr_pages = min(nr_pages, avail);

above here would result in a meaningfully different amount of memory
Dom0 may get for certain combinations of command line options and
total amount of memory. I.e. quite a bit more than a single data
point would need to be provided to prove this isn't going to be
perceived as a regression by anyone; one hypothetical combination is
sketched below.
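
The interaction in question, heavily simplified from
dom0_compute_nr_pages() (the min/max clamps and all other
reservations are omitted, compute() is a made-up helper, and every
number is hypothetical, picked only to show one combination where the
clamp bites):

    #include <stdio.h>

    static unsigned long min(unsigned long a, unsigned long b)
    {
        return a < b ? a : b;
    }

    /* One re-computation of nr_pages after the paging reserve has
     * been taken out of "avail", mirroring the sizing loop. */
    static unsigned long compute(unsigned long dom0_nrpages,
                                 unsigned long avail,
                                 unsigned long paging_pgs)
    {
        unsigned long nr_pages;
        int need_paging;

        for ( need_paging = 1; ; need_paging = 0 )
        {
            nr_pages = min(dom0_nrpages, avail);
            if ( !need_paging )
                break;
            /* Reserve memory for shadow or HAP. */
            avail -= paging_pgs;
        }
        return nr_pages;
    }

    int main(void)
    {
        /* Hypothetical box: ~16GB of RAM (avail already cut down by
         * other reservations), a large MMIO hole inflating max_pdx,
         * and dom0_mem= asking for nearly all of memory. */
        unsigned long avail = 4100000;    /* ~15.6GB, in 4k pages */
        unsigned long dom0_mem = 4096000; /* dom0_mem=16000M      */

        printf("nr_pages based reserve: Dom0 gets %lu pages\n",
               compute(dom0_mem, avail, 33280));  /* ~130MB reserve */
        printf("max_pdx based reserve:  Dom0 gets %lu pages\n",
               compute(dom0_mem, avail, 50176));  /* ~196MB reserve */
        return 0;
    }

For this made-up combination Dom0 ends up about 66MB smaller; plenty
of other combinations would show no difference at all, which is why
more data points are needed.

Jan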