Do you see similar results on your side?
Best regards
Would you mind sharing your argument set for the emulator? As far as I
understood, you are using plain ballooning for most of the results above,
for which those numbers are expected. The case with 5+ GiB memory
consumption for a deflated 1G guest
On Thu, Jun 18, 2015 at 12:09 PM, Daniel P. Berrange
berra...@redhat.com wrote:
On Wed, Jun 17, 2015 at 10:55:35PM +0300, Andrey Korolyov wrote:
Sorry for a delay, the 'perf numa numa-mem -p 8 -t 2 -P 384 -C 0 -M 0
-s 200 -zZq --thp 1 --no-data_rand_walk' exposes a difference of value
0.96
On Thu, Jun 11, 2015 at 4:30 PM, Daniel P. Berrange berra...@redhat.com wrote:
On Thu, Jun 11, 2015 at 04:24:18PM +0300, Andrey Korolyov wrote:
On Thu, Jun 11, 2015 at 4:13 PM, Daniel P. Berrange berra...@redhat.com
wrote:
On Thu, Jun 11, 2015 at 04:06:59PM +0300, Andrey Korolyov wrote:
On Thu, Jun 18, 2015 at 12:21 AM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
2015-06-17 19:26 GMT+03:00 Vasiliy Tolstov v.tols...@selfip.ru:
This is bad news =( I have Debian Wheezy, which has an old kernel...
Is it possible to get proper results with the balloon? For example, by
patching qemu or
On Thu, Jun 18, 2015 at 1:44 AM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
2015-06-18 1:40 GMT+03:00 Andrey Korolyov and...@xdel.ru:
Yes, but I'm afraid I don't fully understand why you need this
when a pure hotplug mechanism is available, aside maybe from the nice
memory stats from the balloon
On Wed, Jun 17, 2015 at 4:35 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
Hi. I have an issue with an incorrect memory size inside the VM. I'm
trying to utilize the memory balloon (not memory hotplug, because I may
have guests without memory hotplug support).
When the domain is started with static memory, everything works fine,
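For reference, a minimal sketch of the ballooned-guest definition being described (sizes and domain name are hypothetical): the guest boots with `memory` as the ceiling and `currentMemory` as the deflated target, and the virtio balloon device lets the host reclaim the difference.

```xml
<!-- hypothetical sizes: 4 GiB boot ceiling, 1 GiB deflated target -->
<memory unit='GiB'>4</memory>
<currentMemory unit='GiB'>1</currentMemory>
<devices>
  <memballoon model='virtio'/>
</devices>
```

The balloon can then be moved at runtime with e.g. `virsh setmem guest1 2G --live` (scaled size suffixes are accepted by reasonably recent virsh).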
On Wed, Jun 17, 2015 at 6:33 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
2015-06-17 17:09 GMT+03:00 Andrey Korolyov and...@xdel.ru:
The rest of the visible memory is eaten by reserved kernel areas; for us
this was the main reason to switch to hotplug a couple of years ago.
You would not be able
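As a sketch of the hotplug alternative mentioned above (assuming QEMU and libvirt versions recent enough for DIMM hotplug; all sizes hypothetical), the domain definition needs a `maxMemory` ceiling and a NUMA topology before a DIMM can be attached:

```xml
<!-- hypothetical: 16 GiB ceiling across 16 hotplug slots, one NUMA cell -->
<maxMemory slots='16' unit='GiB'>16</maxMemory>
<cpu>
  <numa>
    <cell id='0' cpus='0-1' memory='1' unit='GiB'/>
  </numa>
</cpu>
```

A `<memory model='dimm'>` device can then be attached to the running domain, e.g. via `virsh attach-device --live`.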
Hi Daniel,
would it be possible to adopt an optional tunable for the virCgroup
mechanism aimed at disabling nested (per-thread) cgroup creation? These
bring visible overhead for many-threaded guest workloads, almost 5% in a
non-congested host CPU state, primarily because the host
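For context, the nesting being discussed looks roughly like the following on a cgroup-v1 host (exact names depend on libvirt version and cgroup mount layout; `guest1` is hypothetical): libvirt places the emulator threads and each vCPU thread into their own child cgroups under the machine's scope.

```
/sys/fs/cgroup/cpu,cpuacct/machine.slice/
└── machine-qemu\x2dguest1.scope/
    ├── emulator/
    ├── vcpu0/
    └── vcpu1/
```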
On Thu, Jun 11, 2015 at 2:09 PM, Daniel P. Berrange berra...@redhat.com wrote:
On Thu, Jun 11, 2015 at 01:50:24PM +0300, Andrey Korolyov wrote:
Hi Daniel,
would it be possible to adopt an optional tunable for the virCgroup
mechanism aimed at disabling nested (per-thread)
cgroup
On Thu, Jun 11, 2015 at 2:33 PM, Daniel P. Berrange berra...@redhat.com wrote:
On Thu, Jun 11, 2015 at 02:16:50PM +0300, Andrey Korolyov wrote:
On Thu, Jun 11, 2015 at 2:09 PM, Daniel P. Berrange berra...@redhat.com
wrote:
On Thu, Jun 11, 2015 at 01:50:24PM +0300, Andrey Korolyov wrote:
Hi
On Thu, Jun 11, 2015 at 4:13 PM, Daniel P. Berrange berra...@redhat.com wrote:
On Thu, Jun 11, 2015 at 04:06:59PM +0300, Andrey Korolyov wrote:
On Thu, Jun 11, 2015 at 2:33 PM, Daniel P. Berrange berra...@redhat.com
wrote:
On Thu, Jun 11, 2015 at 02:16:50PM +0300, Andrey Korolyov wrote:
Hello,
I think it would be useful if libvirt were able to prefix all
messages from the emulator pipes with a date stamp; for example, I am
trying to catch a very rare and non-fatal race with
"virtio-serial-bus: Guest failure in adding device virtio-serial0.0",
which is specific to the Windows
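Until such a feature exists, one workaround sketch: follow the emulator's log through a small shell filter that prefixes each line with a timestamp (the function name and log path are hypothetical):

```shell
# prefix every line read on stdin with a date stamp
stamp_lines() {
  while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
  done
}

# e.g. follow a per-domain libvirt log through the filter
tail -f /var/log/libvirt/qemu/guest1.log | stamp_lines
```

This only stamps lines as they are read, so ordering across buffered writes is approximate, but it is usually enough to bracket a rare race.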
On Thu, Feb 26, 2015 at 5:36 PM, Daniel P. Berrange berra...@redhat.com wrote:
On Thu, Feb 26, 2015 at 06:29:49PM +0400, Andrey Korolyov wrote:
Hello,
I think it would be useful if libvirt were able to prefix all
messages from the emulator pipes with a date stamp; for example, I am
trying
On Mon, Nov 24, 2014 at 3:02 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
Hi. I'm trying to shape a disk via total_iops_sec in libvirt.
libvirt 1.2.10
qemu 2.0.0
First, when I run the VM with a predefined
<total_iops_sec>5000</total_iops_sec> I get around 11000 iops (dd
if=/dev/sda bs=512K
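For reference, the throttle element in question sits inside the `<disk>` definition (file path and device names here are hypothetical). One common cause of dd reporting far more than the configured limit is reading through the guest page cache; `iflag=direct` gives more honest figures.

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest1.qcow2'/>
  <target dev='sda' bus='scsi'/>
  <iotune>
    <total_iops_sec>5000</total_iops_sec>
  </iotune>
</disk>
```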
On Mon, Nov 24, 2014 at 5:09 PM, Vasiliy Tolstov v.tols...@selfip.ru wrote:
2014-11-24 16:57 GMT+03:00 Andrey Korolyov and...@xdel.ru:
Hello Vasiliy,
can you please check the actual values via qemu-monitor-command domid
'{ "execute": "query-block" }', just to be sure to pin down the potential
problem
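The suggested check, sketched as a command (domain name hypothetical), together with the rough shape of the reply; the throttle values appear under each device's `inserted` member:

```shell
# query the live throttle values over QMP (requires a running domain):
# virsh qemu-monitor-command guest1 '{"execute":"query-block"}' --pretty

# the reply looks roughly like this (shape assumed, heavily trimmed):
echo '{"return":[{"device":"drive-virtio-disk0","inserted":{"iops":5000,"iops_rd":0,"iops_wr":0}}]}' \
  | python3 -m json.tool
```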
Sorry in advance for a possible top-post, I'm not able to add a proper
message id here.
Does it ever occur if you don't run with DHCP snooping enabled?
Stefan
No, please disregard those errors. We don't run DHCP snooping/IP
learning on the interfaces, only modified clean-traffic rules; current
:26 PM, Stefan Berger
stef...@linux.vnet.ibm.com wrote:
On 03/02/2013 09:39 AM, Andrey Korolyov wrote:
Sorry in advance for a possible top-post, I'm not able to add a proper
message id here.
Does it ever occur if you don't run with DHCP snooping enabled?
Stefan
No, please disregard those