On Mon, 2019-02-04 at 11:44 -0800, Dave Hansen wrote:
> On 2/4/19 10:15 AM, Alexander Duyck wrote:
> > +#ifdef CONFIG_KVM_GUEST
> > +#include <linux/jump_label.h>
> > +extern struct static_key_false pv_free_page_hint_enabled;
> > +
> > +#define HAVE_ARCH_FREE_PAGE
> > +void __arch_free_page(struct page *page, unsigned int order);
> > +static inline void arch_free_page(struct page *page, unsigned int order)
> > +{
> > +   if (static_branch_unlikely(&pv_free_page_hint_enabled))
> > +           __arch_free_page(page, order);
> > +}
> > +#endif
> 
> So, this ends up with at least a call, a branch and a ret added to the
> order-0 paths, including freeing pages to the per-cpu-pageset lists.
> That seems worrisome.
> 
> What performance testing has been performed to look into the overhead
> added to those paths?

So far I haven't done much in the way of actual performance testing.
Most of my tests have been focused on "is this doing what I think it is
supposed to be doing".

I have been debating whether to just move the order checks into the
inline function itself. In that case the common order-0 frees would
essentially just jump over the call code entirely.
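
Roughly what I have in mind is something like the following (the
KVM_PV_MIN_HINT_ORDER name and value below are just placeholders, not
a final interface):

#ifdef CONFIG_KVM_GUEST
#include <linux/jump_label.h>
extern struct static_key_false pv_free_page_hint_enabled;

/* Placeholder threshold; the real name/value is still to be decided. */
#define KVM_PV_MIN_HINT_ORDER	4

#define HAVE_ARCH_FREE_PAGE
void __arch_free_page(struct page *page, unsigned int order);
static inline void arch_free_page(struct page *page, unsigned int order)
{
	/*
	 * Filtering on order here keeps the common order-0 free path
	 * from ever reaching the call; only pages large enough to be
	 * worth hinting take the out-of-line path.
	 */
	if (order < KVM_PV_MIN_HINT_ORDER)
		return;
	if (static_branch_unlikely(&pv_free_page_hint_enabled))
		__arch_free_page(page, order);
}
#endif

That way the branch on order is inlined, and the call/ret overhead only
shows up for the much rarer higher-order frees.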
