>>> On 06.11.15 at 20:39, wrote:
> On Thu, Nov 05, 2015 at 10:12:26AM -0700, Jan Beulich wrote:
>> >>> On 02.11.15 at 18:12, wrote:
>> > @@ -247,10 +248,12 @@ struct domain *alloc_domain_struct(void)
>> > bits = _domain_struct_bits();
>> > #endif
>> >
>> > -BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE);

On Thu, Nov 05, 2015 at 10:12:26AM -0700, Jan Beulich wrote:
> >>> On 02.11.15 at 18:12, wrote:
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -237,6 +237,7 @@ struct domain *alloc_domain_struct(void)
> > #ifdef CONFIG_BIGMEM
> > const unsigned int bits = 0;
> > #else

>>> On 02.11.15 at 18:12, wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -237,6 +237,7 @@ struct domain *alloc_domain_struct(void)
> #ifdef CONFIG_BIGMEM
> const unsigned int bits = 0;
> #else
> +int order = get_order_from_bytes(sizeof(*d));
unsigned int
> @@
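
For anyone following the remark above: get_order_from_bytes() already
yields an unsigned quantity, so the suggestion amounts to declaring the
new local as unsigned int. Below is a rough sketch of the allocation
path with that applied; it only illustrates the shape of the change, it
is not the posted patch, and the CONFIG_BIGMEM / bits handling is
abridged from the quoted context.

struct domain *alloc_domain_struct(void)
{
    struct domain *d;
    /*
     * With lock profiling enabled sizeof(struct domain) exceeds
     * PAGE_SIZE, so work out how many pages (as a power-of-two order)
     * the structure really needs.
     */
    unsigned int order = get_order_from_bytes(sizeof(*d));
#ifdef CONFIG_BIGMEM
    const unsigned int bits = 0;
#else
    unsigned int bits = _domain_struct_bits();
#endif

    d = alloc_xenheap_pages(order, MEMF_bits(bits));
    if ( d != NULL )
        memset(d, 0, PAGE_SIZE << order);  /* clear every allocated page */

    return d;
}
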
Our 'struct domain', when lock profiling is enabled, is bigger than
one page.

We can't use vmap() or vzalloc(), as both of those stash the
physical address in struct page_info, which makes the assumptions
in 'arch_init_memory' trip over ASSERTs.
Signed-off-by: Konrad Rzeszutek Wilk
---
xen/arch/x86/dom
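
The free path has to hand back the same number of pages the allocation
asked for. A minimal sketch of that pairing follows; it is only an
illustration (the real free_domain_struct() also does lock-profiling
bookkeeping, which is elided here), assuming the order is recomputed
from sizeof(*d) the same way:

void free_domain_struct(struct domain *d)
{
    /* Recompute the order the allocation used so the sizes match. */
    free_xenheap_pages(d, get_order_from_bytes(sizeof(*d)));
}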