>>> On 12.12.18 at 11:04, <roger....@citrix.com> wrote:
> On Wed, Dec 12, 2018 at 02:53:26AM -0700, Jan Beulich wrote:
>> >>> On 12.12.18 at 10:14, <roger....@citrix.com> wrote:
>> > On Tue, Dec 11, 2018 at 08:33:08AM -0700, Jan Beulich wrote:
>> >> >>> On 11.12.18 at 16:19, <roger....@citrix.com> wrote:
>> >> > On Tue, Dec 11, 2018 at 08:08:51AM -0700, Jan Beulich wrote:
>> >> >> >>> On 05.12.18 at 15:54, <roger....@citrix.com> wrote:
>> >> >> > To note that it's calculating the approximate amount of memory
>> >> >> > required by shadow paging.
>> >> >> 
>> >> >> I don't understand this logic, and ...
>> >> >> 
>> >> >> > @@ -325,7 +325,7 @@ unsigned long __init dom0_compute_nr_pages(
>> >> >> >              break;
>> >> >> >  
>> >> >> >          /* Reserve memory for shadow or HAP. */
>> >> >> > -        avail -= dom0_paging_pages(d, nr_pages);
>> >> >> > +        avail -= dom0_shadow_pages(d, nr_pages);
>> >> >> >      }
>> >> >> 
>> >> >> ... the comment here (and lack of conditional restricting the
>> >> >> code to shadow mode) appear to support me: Have you
>> >> >> been misled by the function having a comment referring
>> >> >> to libxl_get_required_shadow_memory()? I think if anything
>> >> >> that libxl function would want to be renamed (to replace
>> >> >> "shadow" by something more generic in its name).
>> >> > 
>> >> > But the logic in dom0_shadow_pages to calculate the size of the paging
>> >> > memory pool is specifically for shadow AFAICT. I don't think HAP needs
>> >> > to take the number of vCPUs into account, since there's only a
>> >> > single p2m for the whole domain. OTOH shadow needs to take the number
>> >> > of vCPUs into account because each one will have a different shadow.
>> >> 
>> >> Yes, the vCPU count aspect is indeed shadow specific. However,
>> >> as said in reply to the other patch, the calculation here was at
>> >> least supposed to also take into account the P2M part of the
>> >> needed allocations. Yet the P2M part ought to be similar between
>> >> both modes.
>> >> 
>> >> > Note that patch 2 in this series adds a function to calculate the size
>> >> > of the paging memory pool for HAP, and a conditional is added to the
>> >> > expression above that takes into account whether shadow or HAP is in
>> >> > use when subtracting from the amount of available memory.
>> >> 
>> >> Well, assuming we can settle on what shape patch 2 should take,
>> >> I can see the point in doing the rename here, but then with an
>> >> adjusted description: Especially in light of the code comment still
>> >> visible above, you'll want to point out that the rename is in
>> >> preparation for splitting the calculations. Since I question the split,
>> >> though, the rename (in a separate patch) is questionable to me
>> >> too. If we used uniform P2M calculations and added just shadow's
>> >> per-vCPU extra on top, no rename in a separate patch would
>> >> seem warranted.
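
For reference, a calculation shaped that way (a uniform P2M part plus a
shadow-only per-vCPU extra) would look roughly like the sketch below. The
constants are assumptions for illustration, loosely following what the libxl
comment describes (about 1MiB of shadows per vCPU and a couple of pages per
MiB of RAM for the P2M); neither they nor the function name are something
settled in this thread:

#include <stdbool.h>

/*
 * Illustrative sketch only, not the in-tree code: a uniform
 * per-MiB-of-RAM P2M component, plus a per-vCPU component that
 * applies to shadow mode only.  Result is in KiB.
 */
static unsigned long paging_pool_kib(unsigned int nr_vcpus,
                                     unsigned long mem_kib,
                                     bool shadow)
{
    /* Assumed: ~2 pages (8KiB) of P2M structures per MiB of RAM. */
    unsigned long kib = 8 * (mem_kib >> 10);

    /* Assumed: ~1MiB of shadow memory per vCPU, shadow mode only. */
    if ( shadow )
        kib += nr_vcpus * 1024UL;

    return kib;
}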
>> > 
>> > The current calculations in dom0_paging_pages assume 1 page is needed
>> > for each 1MB of guest memory for the p2m; do you think this is OK
>> > (and suitable to be used for HAP/IOMMU page tables as well)?
>> 
>> Well, 1 page per 1MB means the same as your current 8 bytes
>> per page times 2 (for separate P2M and IOMMU tables), afaict.
> 
> I was planning to use 1 page per 1MB for the p2m, and then 1 page per
> 1MB for the IOMMU, so 16 bytes per page.

Well, that's (as said for patch 2) quite a bit of an over-estimate,
but then again reserving a little too much is perhaps better than
reserving too little.
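
To spell out the conversion being used on both sides (4KiB pages assumed):

    1 page-table page per 1MiB of guest memory
      = 4096 bytes per (1MiB / 4KiB) = 4096 bytes per 256 guest pages
      = 16 bytes per guest page, for each table sized that way.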

> You mentioned there's some code (for PV?) to calculate the size of the
> page tables but I'm having trouble finding it (mainly because I'm not
> that familiar with PV), could you point me to it?

In dom0_construct_pv() you'll find a loop starting with
"for ( nr_pt_pages = 2; ; nr_pt_pages++ )". It's not the neatest,
but at least we've never had reports of failure.
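
The idea behind that loop is sketched below with made-up names (the real
code in the dom0 builder differs in detail): keep growing the number of
reserved page-table pages, recompute the end of the mapped virtual range
(which itself grows with the reservation), and stop once the tables needed
to map the whole range at every paging level fit within the reservation.
The 39/30/21 shifts assume 4-level x86-64 paging with 4KiB page-table pages:

/* Page-table pages needed at one level to cover [start, end):
 * one lower-level table per entry spanned at this level's granularity. */
static unsigned long pt_pages_at(unsigned long start, unsigned long end,
                                 unsigned int shift)
{
    return ((end + (1UL << shift) - 1) >> shift) - (start >> shift);
}

/* Illustrative sketch of the sizing loop, not the actual builder code. */
static unsigned long estimate_nr_pt_pages(unsigned long v_start,
                                          unsigned long v_fixed_end)
{
    unsigned long nr_pt_pages;

    for ( nr_pt_pages = 2; ; nr_pt_pages++ )
    {
        /* The mapped range contains the page-table pages themselves,
         * so it grows as more of them are reserved. */
        unsigned long v_end = v_fixed_end + nr_pt_pages * 4096UL;

        if ( 1 +                               /* the root (L4) table */
             pt_pages_at(v_start, v_end, 39) + /* L3 tables */
             pt_pages_at(v_start, v_end, 30) + /* L2 tables */
             pt_pages_at(v_start, v_end, 21)   /* L1 tables */
             <= nr_pt_pages )
            break;
    }

    return nr_pt_pages;
}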

Jan


