>>> On 11.09.15 at 14:05, wrote:
> The flush_all(FLUSH_CACHE) in mtrr.c will result in a flush_area_mask for
> all CPUs in the host.
> It will take more time to issue an IPI to all logical cores the more cores
> there are. I admit that
> x2apic_cluster mode may speed this up, but not all hosts will h
On 11/09/15 12:11, Jan Beulich wrote:
On 11.09.15 at 12:28, wrote:
>> On 11/09/15 10:17, Jan Beulich wrote:
>> On 11.09.15 at 02:59, wrote:
If you want a formula I would do:
#define MAX_SOCKETS 8
max_pfns = pow(2,(MAX_SOCKETS - (max(nr_iommus(), MAX_SOCKETS
>>> On 11.09.15 at 12:28, wrote:
> On 11/09/15 10:17, Jan Beulich wrote:
> On 11.09.15 at 02:59, wrote:
>>> If you want a formula I would do:
>>>
>>> #define MAX_SOCKETS 8
>>>
>>> max_pfns = pow(2, (MAX_SOCKETS - (max(nr_iommus(), MAX_SOCKETS)))) * 64;
>>>
>>> Where nr_iommus would have to be
On 11/09/15 10:17, Jan Beulich wrote:
On 11.09.15 at 02:59, wrote:
>> If you want a formula I would do:
>>
>> #define MAX_SOCKETS 8
>>
>> max_pfns = pow(2, (MAX_SOCKETS - (max(nr_iommus(), MAX_SOCKETS)))) * 64;
>>
>> Where nr_iommus would have to be somehow implemented, ditto for pow.
>>
>> T
>>> On 11.09.15 at 02:59, wrote:
> If you want a formula I would do:
>
> #define MAX_SOCKETS 8
>
> max_pfns = pow(2, (MAX_SOCKETS - (max(nr_iommus(), MAX_SOCKETS)))) * 64;
>
> Where nr_iommus would have to be somehow implemented, ditto for pow.
>
> This should give you:
> 8-> 64
> 7-
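For illustration, here is a minimal, self-contained sketch of what the quoted
formula appears to be driving at, judging from the example figures in the mail
(8 IOMMUs -> 64 pfns per batch, 7 -> presumably 128, and so on). Capping
nr_iommus() at MAX_SOCKETS (min rather than the quoted max) is an assumption
needed to reproduce that table, and nr_iommus() itself would still have to be
implemented, as the mail notes; here it is just a plain parameter:

#include <stdio.h>

#define MAX_SOCKETS 8

/* Hypothetical stand-in for the nr_iommus() helper the mail says would
 * have to be implemented; here the count is simply passed in. */
static unsigned int max_pfns_for(unsigned int nr_iommus)
{
    /* Cap at MAX_SOCKETS so the exponent never goes negative (assumption). */
    unsigned int capped = nr_iommus > MAX_SOCKETS ? MAX_SOCKETS : nr_iommus;

    /* 2^(MAX_SOCKETS - capped) * 64: 8 IOMMUs -> 64, 7 -> 128, ..., 1 -> 8192 */
    return (1u << (MAX_SOCKETS - capped)) * 64;
}

int main(void)
{
    for (unsigned int i = MAX_SOCKETS; i >= 1; i--)
        printf("%u IOMMU(s) -> %u pfns per batch\n", i, max_pfns_for(i));
    return 0;
}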
On Fri, Sep 11, 2015 at 08:44:50AM +0800, Chen, Tiejun wrote:
> >>Right, that's one of the things that would need taking care of.
> >>(Whether enforcing an upper limit is actually needed I'm not
> >>sure - we generally allow the admin to shoot himself in the foot
> >>if he wants to. And whether the
Right, that's one of the things that would need taking care of.
(Whether enforcing an upper limit is actually needed I'm not
sure - we generally allow the admin to shoot himself in the foot
if he wants to. And whether the lower limit should be 64 instead
of just ensuring the limit is not zero is a
On Thu, Sep 10, 2015 at 02:59:17AM -0600, Jan Beulich wrote:
> >>> On 10.09.15 at 10:55, wrote:
> >> Sort of (the patch has the intended effect, but for its size very
> >> many rough edges).
> >>
> >
> > I guess we need to amend the original parameter, once_mapping_mfns, like
> > this,
> >
> >
>>> On 10.09.15 at 10:55, wrote:
>> Sort of (the patch has the intended effect, but for its size very
>> many rough edges).
>>
>
> I guess we need to amend the original parameter, once_mapping_mfns, like
> this,
>
> /* xen_once_mapping_mfns: number of mfns to map at once. */
> unsigned int x
Sort of (the patch has the intended effect, but for its size very
many rough edges).
I guess we need to amend the original parameter, once_mapping_mfns, like
this,
/* xen_once_mapping_mfns: number of mfns to map at once. */
unsigned int xen_once_mapping_mfns;
size_param("once_mapping_mfns"
>>> On 10.09.15 at 07:28, wrote:
>>> If the 64 limit was arbitrary then I would suggest increasing it to at least
>>> 1024, so that at least 4M of BAR can be mapped in one go, reducing the
>>> overhead by a factor of 16.
>>
>> 1024 may be a little much, but 256 is certainly a possib
If the 64 limit was arbitrary then I would suggest increasing it to at least
1024, so that at least 4M of BAR can be mapped in one go, reducing the
overhead by a factor of 16.
1024 may be a little much, but 256 is certainly a possibility, plus
Konrad's suggestion to allow this limit to be co
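For reference, the arithmetic behind those figures (assuming 4KiB pages):
64 pfns per iteration maps 64 * 4KiB = 256KiB of BAR at a time, 256 pfns maps
1MiB, and 1024 pfns maps 4MiB, so raising the limit from 64 to 1024 cuts the
number of iterations (and hence the per-iteration overhead) by 1024 / 64 = 16,
while 256 would cut it by a factor of 4.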
>>> On 09.09.15 at 17:19, wrote:
> On 09/09/15 15:50, Konrad Rzeszutek Wilk wrote:
>> On Wed, Sep 09, 2015 at 08:33:52AM -0600, Jan Beulich wrote:
>> On 09.09.15 at 16:20, wrote:
Perhaps the solution is to remove the first printk(s) and just have them
once the operation has completed?
On 09/09/15 15:50, Konrad Rzeszutek Wilk wrote:
> On Wed, Sep 09, 2015 at 08:33:52AM -0600, Jan Beulich wrote:
> On 09.09.15 at 16:20, wrote:
>>> Perhaps the solution is to remove the first printk(s) and just have them
>>> once the operation has completed? That may fix the outstanding tasklet
>>>
On Wed, Sep 09, 2015 at 08:55:38AM -0600, Jan Beulich wrote:
> >>> On 09.09.15 at 16:50, wrote:
> > On Wed, Sep 09, 2015 at 08:33:52AM -0600, Jan Beulich wrote:
> >> >>> On 09.09.15 at 16:20, wrote:
> >> > Perhaps the solution is to remove the first printk(s) and just have them
> >> > once the opera
>>> On 09.09.15 at 16:50, wrote:
> On Wed, Sep 09, 2015 at 08:33:52AM -0600, Jan Beulich wrote:
>> >>> On 09.09.15 at 16:20, wrote:
>> > Perhaps the solution is to remove the first printk(s) and just have them
>> > once the operation has completed? That may fix the outstanding tasklet
>> > problem?
On Wed, Sep 09, 2015 at 08:33:52AM -0600, Jan Beulich wrote:
> >>> On 09.09.15 at 16:20, wrote:
> > Perhaps the solution is to remove the first printk(s) and just have them
> > once the operation has completed? That may fix the outstanding tasklet
> > problem?
>
> Considering that this is a tool sta
>>> On 09.09.15 at 16:20, wrote:
> Perhaps the solution is to remove the first printk(s) and just have them
> once the operation has completed? That may fix the outstanding tasklet
> problem?
Considering that this is a tool stack based retry, how would the
hypervisor know when the _whole_ operation
On Wed, Sep 09, 2015 at 02:50:25PM +0800, Tiejun Chen wrote:
> We should lower the log level to XENLOG_G_DEBUG while mapping or
> unmapping memory via XEN_DOMCTL_memory_mapping, since it's only
> necessary to check this info while debugging.
The issue you folks are hitting where it takes eons to boot
w
We should lower the log level to XENLOG_G_DEBUG while mapping or
unmapping memory via XEN_DOMCTL_memory_mapping, since it's only
necessary to check this info while debugging.
CC: Ian Campbell
CC: Ian Jackson
CC: Jan Beulich
CC: Keir Fraser
CC: Tim Deegan
Signed-off-by: Tiejun Chen
---
xen/common
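As an illustration of what the posted change amounts to, a sketch from memory
of the XEN_DOMCTL_memory_mapping handler in xen/common/domctl.c around that
time; the file, message text and context lines are an approximation, not the
posted patch itself:

--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
(illustrative hunk, "add" path; the "remove" path would change the same way)
         if ( add )
         {
-            printk(XENLOG_G_INFO
+            printk(XENLOG_G_DEBUG
                    "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
                    d->domain_id, gfn, mfn, nr_mfns);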