On Thu, Sep 28, 2017 at 11:10:38PM +0200, Premysl Kouril wrote:
> >
> > Only the memory mapped for the guest is strictly allocated from the
> > NUMA node selected. The QEMU overhead should float on the host NUMA
> > nodes. So it seems that the "reserved_host_memory_mb" is enough.
> >
>
Even if that would be true and overhead memory could float in NUMA
nodes it generally d
On Thu, 28 Sep 2017, Premysl Kouril wrote:
Only the memory mapped for the guest is strictly allocated from the
NUMA node selected. The QEMU overhead should float on the host NUMA
nodes. So it seems that the "reserved_host_memory_mb" is enough.
Even if that would be true and overhead memory could float in NUMA
nodes it generally d
>
> Only the memory mapped for the guest is strictly allocated from the
> NUMA node selected. The QEMU overhead should float on the host NUMA
> nodes. So it seems that the "reserved_host_memory_mb" is enough.
>
Even if that would be true and overhead memory could float in NUMA
nodes it generally d
On 09/28/2017 05:29 AM, Sahid Orentino Ferdjaoui wrote:
Only the memory mapped for the guest is strictly allocated from the
NUMA node selected. The QEMU overhead should float on the host NUMA
nodes. So it seems that the "reserved_host_memory_mb" is enough.
What I see in the code/docs doesn't m
On Wed, Sep 27, 2017 at 11:10:40PM +0200, Premysl Kouril wrote:
> > Lastly, qemu has overhead that varies depending on what you're doing in the
> > guest. In particular, there are various IO queues that can consume
> > significant amounts of memory. The company that I work for put in a good
> bit of effort engineering things so that they work more reliably, and par
On 09/27/2017 04:55 PM, Blair Bethwaite wrote:
Hi Prema
On 28 September 2017 at 07:10, Premysl Kouril wrote:
Hi, I work with Jakub (the op of this thread) and here is my two
cents: I think what is critical to realize is that KVM virtual
machines can have substantial memory overhead of up to 25% of memory,
allocated to KVM virtual mach
Hi Prema
On 28 September 2017 at 07:10, Premysl Kouril wrote:
> Hi, I work with Jakub (the op of this thread) and here is my two
> cents: I think what is critical to realize is that KVM virtual
> machines can have substantial memory overhead of up to 25% of memory,
> allocated to KVM virtual mach
On 09/27/2017 03:10 PM, Premysl Kouril wrote:
Lastly, qemu has overhead that varies depending on what you're doing in the
guest. In particular, there are various IO queues that can consume
significant amounts of memory. The company that I work for put in a good
bit of effort engineering things so that they work more reliably, and par
> Lastly, qemu has overhead that varies depending on what you're doing in the
> guest. In particular, there are various IO queues that can consume
> significant amounts of memory. The company that I work for put in a good
> bit of effort engineering things so that they work more reliably, and par
On 09/27/2017 08:01 AM, Blair Bethwaite wrote:
On 27 September 2017 at 23:19, Jakub Jursa wrote:
'hw:cpu_policy=dedicated' (while NOT setting 'hw:numa_nodes') results in
libvirt pinning CPU in 'strict' memory mode
(from libvirt xml for given instance)
...
So yeah, the instance is not able to allocate
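For readers following along: with hw:cpu_policy=dedicated and no hw:numa_nodes set, the strict pinning referenced above appears in the instance's libvirt XML roughly as below. This is an illustrative sketch only; the actual nodeset and cellid values depend on which host NUMA node nova picked.

```xml
<!-- illustrative fragment: all guest memory is bound strictly to one
     host NUMA node, so allocations that do not fit there fail rather
     than spilling over to the other node -->
<numatune>
  <memory mode='strict' nodeset='0'/>
  <memnode cellid='0' mode='strict' nodeset='0'/>
</numatune>
```

The mode='strict' setting is what makes the overcommit failure hard: QEMU cannot fall back to the other node's free memory.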
On 09/27/2017 03:12 AM, Jakub Jursa wrote:
On 27.09.2017 10:40, Blair Bethwaite wrote:
On 27 September 2017 at 18:14, Stephen Finucane wrote:
What you're probably looking for is the 'reserved_host_memory_mb' option. This
defaults to 512 (at least in the latest master) so if you up this to 4192 or
similar you should resolve the issue.
On 27 September 2017 at 23:19, Jakub Jursa wrote:
> 'hw:cpu_policy=dedicated' (while NOT setting 'hw:numa_nodes') results in
> libvirt pinning CPU in 'strict' memory mode
>
> (from libvirt xml for given instance)
> ...
>
> So yeah, the instance is not able to allocate
On 27.09.2017 14:46, Sahid Orentino Ferdjaoui wrote:
> On Mon, Sep 25, 2017 at 05:36:44PM +0200, Jakub Jursa wrote:
>> Hello everyone,
>>
>> We're experiencing issues with running large instances (~60GB RAM) on
>> fairly large NUMA nodes (4 CPUs, 256GB RAM) while using cpu pinning. The
>> problem is that it seems that in some extreme cases qemu/KVM can have
>> significant memory overhead (10-15%?) which nova-compute service does
On Mon, Sep 25, 2017 at 05:36:44PM +0200, Jakub Jursa wrote:
> Hello everyone,
>
> We're experiencing issues with running large instances (~60GB RAM) on
> fairly large NUMA nodes (4 CPUs, 256GB RAM) while using cpu pinning. The
> problem is that it seems that in some extreme cases qemu/KVM can have
> significant memory overhead (10-15%?) which nova-compute service does
On Wed, Sep 27, 2017 at 11:58 AM, Jakub Jursa wrote:
On 27.09.2017 11:12, Jakub Jursa wrote:
On 27.09.2017 10:40, Blair Bethwaite wrote:
On 27 September 2017 at 18:14, Stephen Finucane wrote:
What you're probably looking for is the 'reserved_host_memory_mb' option. This
defaults to 512 (at least in the latest master) so if you up this to 4192 or
similar you should resolve the issue.
On 27.09.2017 11:12, Jakub Jursa wrote:
>
>
> On 27.09.2017 10:40, Blair Bethwaite wrote:
>> On 27 September 2017 at 18:14, Stephen Finucane wrote:
>>> What you're probably looking for is the 'reserved_host_memory_mb' option. This
>>> defaults to 512 (at least in the latest master) so if you up this to 4192 or
>>> similar you should resolve the issue.
On 27.09.2017 10:40, Blair Bethwaite wrote:
> On 27 September 2017 at 18:14, Stephen Finucane wrote:
>> What you're probably looking for is the 'reserved_host_memory_mb' option. This
>> defaults to 512 (at least in the latest master) so if you up this to 4192 or
>> similar you should resolve the issue.
On 27.09.2017 10:14, Stephen Finucane wrote:
> On Mon, 2017-09-25 at 17:36 +0200, Jakub Jursa wrote:
>> Hello everyone,
>>
>> We're experiencing issues with running large instances (~60GB RAM) on
>> fairly large NUMA nodes (4 CPUs, 256GB RAM) while using cpu pinning. The
>> problem is that it seems that in some extreme cases qemu/KVM can have
>> significant memory overhead (10-15%?) which nova-compute service does
Also CC-ing os-ops as someone else may have encountered this before
and have further/better advice...
On 27 September 2017 at 18:40, Blair Bethwaite wrote:
> On 27 September 2017 at 18:14, Stephen Finucane wrote:
>> What you're probably looking for is the 'reserved_host_memory_mb' option. This
>> defaults to 512 (at least in the latest master) so if you up this to 4192 or
>> similar you should resolve the issue.
On 27 September 2017 at 18:14, Stephen Finucane wrote:
> What you're probably looking for is the 'reserved_host_memory_mb' option. This
> defaults to 512 (at least in the latest master) so if you up this to 4192 or
> similar you should resolve the issue.
I don't see how this would help given the
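For concreteness, the option under discussion is a nova-compute setting. A minimal sketch of how it is typically set (the 4192 value is just the figure suggested in this thread, and section placement reflects the releases of that era):

```ini
# /etc/nova/nova.conf on each compute node
[DEFAULT]
# Memory (MB) the scheduler treats as unavailable to guests, intended
# to cover host OS usage plus per-instance QEMU overhead.
reserved_host_memory_mb = 4192
```

Note this only reduces what the scheduler believes is free host-wide; as pointed out above, it does not control which NUMA node QEMU's own overhead allocations land on.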
On Mon, 2017-09-25 at 17:36 +0200, Jakub Jursa wrote:
> Hello everyone,
>
> We're experiencing issues with running large instances (~60GB RAM) on
> fairly large NUMA nodes (4 CPUs, 256GB RAM) while using cpu pinning. The
> problem is that it seems that in some extreme cases qemu/KVM can have
> significant memory overhead (10-15%?) which nova-compute service does
Hello everyone,
We're experiencing issues with running large instances (~60GB RAM) on
fairly large NUMA nodes (4 CPUs, 256GB RAM) while using cpu pinning. The
problem is that it seems that in some extreme cases qemu/KVM can have
significant memory overhead (10-15%?) which nova-compute service does
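To put the quoted overhead range in numbers, here is a rough back-of-envelope sketch. The 10-15% figures are the thread's own estimates, not a QEMU guarantee, and the function name is purely illustrative:

```python
def qemu_overhead_mb(guest_ram_mb: int, overhead_fraction: float = 0.15) -> int:
    """Estimate the extra host memory (MB) a guest may consume beyond its
    nominal RAM, using the worst-case fraction quoted in this thread."""
    return int(guest_ram_mb * overhead_fraction)

# A ~60 GB instance at 15% overhead needs roughly 9 GB of headroom,
# which must fit on the same NUMA node when memory mode is 'strict'.
print(qemu_overhead_mb(60 * 1024))  # -> 9216
```

That headroom is what neither strict per-node pinning nor the host-wide reserved_host_memory_mb accounting fully covers, which is the crux of the problem described here.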