Re: [kvm] Questions about duplicate memory work

2011-10-02 Thread Avi Kivity

On 09/29/2011 09:46 PM, Robin Lee Powell wrote:

On Thu, Sep 29, 2011 at 02:22:43PM -0300, Marcelo Tosatti wrote:
  On Wed, Sep 28, 2011 at 05:14:47PM -0700, Robin Lee Powell wrote:
Please post the contents of /proc/meminfo and /proc/zoneinfo
when this is happening.

  I just noticed that the amount of RAM the VMs had in VIRT
  added up to considerably more than the host's actual RAM;
  hard_limit is now on.  So I may not be able to replicate this.
  :)
  
Or not; even with hard_limit the VIRT value goes to hundreds of
MiB more than the limit.  Is that expected?

  Yes, the VIRT field refers to the total memory mapped by the process, not
  paged-in memory, which is indicated by the RES field.

Yes, I'm aware of that; that isn't relevant to my question.

I would expect the *total* memory requested by a VM to never go over
the hard_limit value set in the XML file.  I mean, isn't that what
the hard_limit *means*?  If not, what does it mean?




VIRT memory includes both guest memory and memory reserved (usually not
used) by qemu.  Don't read too much into it.
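
(A concrete way to watch this, for anyone following along: in /proc terms,
VIRT is VmSize and RES is VmRSS.  A minimal shell sketch, assuming the
guest processes are named qemu-kvm as in the top output earlier in the
thread:

  # Show mapped (VmSize ~ VIRT) vs resident (VmRSS ~ RES) memory
  # for every qemu-kvm process on the host.
  for pid in $(pgrep qemu-kvm); do
      echo "== PID $pid =="
      grep -E '^Vm(Size|RSS):' /proc/$pid/status
  done

The gap between the two numbers is the reserved-but-unused memory
described above.)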


--
error compiling committee.c: too many arguments to function



Re: [kvm] Questions about duplicate memory work

2011-09-29 Thread Marcelo Tosatti
On Wed, Sep 28, 2011 at 05:14:47PM -0700, Robin Lee Powell wrote:
   Please post the contents of /proc/meminfo and /proc/zoneinfo when
   this is happening.
  
  I just noticed that the amount of RAM the VMs had in VIRT added up
  to considerably more than the host's actual RAM; hard_limit is now
  on.  So I may not be able to replicate this.  :)
 
 Or not; even with hard_limit the VIRT value goes to hundreds of MiB
 more than the limit.  Is that expected?

Yes, the VIRT field refers to the total memory mapped by the process, not
paged-in memory, which is indicated by the RES field.



Re: [kvm] Questions about duplicate memory work

2011-09-29 Thread Robin Lee Powell
On Thu, Sep 29, 2011 at 02:22:43PM -0300, Marcelo Tosatti wrote:
 On Wed, Sep 28, 2011 at 05:14:47PM -0700, Robin Lee Powell wrote:
Please post the contents of /proc/meminfo and /proc/zoneinfo
when this is happening.
   
   I just noticed that the amount of RAM the VMs had in VIRT
   added up to considerably more than the host's actual RAM;
   hard_limit is now on.  So I may not be able to replicate this.
   :)
  
  Or not; even with hard_limit the VIRT value goes to hundreds of
  MiB more than the limit.  Is that expected?
 
 Yes, the VIRT field refers to the total memory mapped by the process, not
 paged-in memory, which is indicated by the RES field.

Yes, I'm aware of that; that isn't relevant to my question.

I would expect the *total* memory requested by a VM to never go over
the hard_limit value set in the XML file.  I mean, isn't that what
the hard_limit *means*?  If not, what does it mean?

That's certainly what
http://libvirt.org/formatdomain.html#elementsMemoryTuning *implies*,
anyways.
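
For reference, that page puts hard_limit inside a memtune element in the
domain XML; a minimal sketch with an illustrative value (on libvirt of this
vintage the value is given in KiB):

  <memtune>
    <!-- illustrative: cap the qemu-kvm process at about 2.5 GiB (KiB units) -->
    <hard_limit>2621440</hard_limit>
  </memtune>

As far as I can tell, the limit is enforced through the kernel memory
controller, i.e. it caps memory the process actually consumes rather than
its mapped address space, which would explain VIRT sailing past it.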

-Robin


Re: [kvm] Questions about duplicate memory work

2011-09-28 Thread Robin Lee Powell
On Tue, Sep 27, 2011 at 12:49:29PM +0300, Avi Kivity wrote:
 On 09/27/2011 12:00 PM, Robin Lee Powell wrote:
 On Tue, Sep 27, 2011 at 01:48:43AM -0700, Robin Lee Powell wrote:
   On Tue, Sep 27, 2011 at 04:41:33PM +0800, Emmanuel Noobadmin wrote:
 On 9/27/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
   On Mon, Sep 26, 2011 at 04:15:37PM +0800, Emmanuel Noobadmin
   wrote:
   It's unrelated to what you're actually using as the disks,
   whether file or block devices like LVs. I think it just makes
   KVM tell the host not to cache I/O done on the storage device.
 
   Wait, hold on, I think I had it backwards.
 
   It tells the *host* to not cache the device in question, or the
   *VMs* to not cache the device in question?
   
 I'm fairly certain it tells qemu not to cache the device in
 question. If you don't want the guests to cache their I/O, then the
 guest OS should be configured accordingly, if it allows that. Although
 I'm not sure it's possible to disable disk buffering/caching
 system-wide in Linux.
 
   OK, great, thanks.
 
   Now if I could just figure out how to stop the host from swapping
   out much of the VMs' qemu-kvm procs when it has almost a GiB of RAM
   left.  -_-  swappiness 0 doesn't seem to help there.
 
 Grrr.
 
 I turned swap off to clear it.  A few minutes ago, this host was at
 zero swap:
 
 top - 01:59:10 up 10 days, 15:17,  3 users,  load average: 6.39, 4.26, 3.24
 Tasks: 151 total,   1 running, 150 sleeping,   0 stopped,   0 zombie
 Cpu(s):  6.6%us,  1.0%sy,  0.0%ni, 85.9%id,  6.3%wa,  0.0%hi,  0.2%si,  0.0%st
 Mem:   8128772k total,  6511116k used,  1617656k free,    14800k buffers
 Swap:  8388604k total,   672828k used,  7715776k free,     97536k cached

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  2504 qemu      20   0 2425m 1.8g  448 S 10.0 23.4   3547:59 qemu-kvm
  2258 qemu      20   0 2425m 1.7g  444 S  2.7 21.7   1288:15 qemu-kvm
 18061 qemu      20   0 2433m 1.8g  428 S  2.3 23.4 401:01.99 qemu-kvm
 10335 qemu      20   0 1864m 861m  456 S  1.0 10.9   2:04.26 qemu-kvm
 [snip]
 
 Why is it doing this?!?  ;'(
 
 
 Please post the contents of /proc/meminfo and /proc/zoneinfo when
 this is happening.

I just noticed that the amount of RAM the VMs had in VIRT added up
to considerably more than the host's actual RAM; hard_limit is now
on.  So I may not be able to replicate this.  :)

-Robin


Re: [kvm] Questions about duplicate memory work

2011-09-28 Thread Robin Lee Powell
On Wed, Sep 28, 2011 at 05:11:06PM -0700, Robin Lee Powell wrote:
 On Tue, Sep 27, 2011 at 12:49:29PM +0300, Avi Kivity wrote:
  On 09/27/2011 12:00 PM, Robin Lee Powell wrote:
  On Tue, Sep 27, 2011 at 01:48:43AM -0700, Robin Lee Powell wrote:
On Tue, Sep 27, 2011 at 04:41:33PM +0800, Emmanuel Noobadmin wrote:
  On 9/27/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
On Mon, Sep 26, 2011 at 04:15:37PM +0800, Emmanuel Noobadmin
wrote:
It's unrelated to what you're actually using as the disks,
whether file or block devices like LVs. I think it just makes
KVM tell the host not to cache I/O done on the storage device.
  
Wait, hold on, I think I had it backwards.
  
It tells the *host* to not cache the device in question, or the
*VMs* to not cache the device in question?

  I'm fairly certain it tells qemu not to cache the device in
  question. If you don't want the guests to cache their I/O, then the
  guest OS should be configured accordingly, if it allows that. Although
  I'm not sure it's possible to disable disk buffering/caching
  system-wide in Linux.
  
OK, great, thanks.
  
Now if I could just figure out how to stop the host from swapping
out much of the VMs' qemu-kvm procs when it has almost a GiB of RAM
left.  -_-  swappiness 0 doesn't seem to help there.
  
  Grrr.
  
  I turned swap off to clear it.  A few minutes ago, this host was at
  zero swap:
  
  top - 01:59:10 up 10 days, 15:17,  3 users,  load average: 6.39, 4.26, 3.24
  Tasks: 151 total,   1 running, 150 sleeping,   0 stopped,   0 zombie
  Cpu(s):  6.6%us,  1.0%sy,  0.0%ni, 85.9%id,  6.3%wa,  0.0%hi,  0.2%si,  0.0%st
  Mem:   8128772k total,  6511116k used,  1617656k free,    14800k buffers
  Swap:  8388604k total,   672828k used,  7715776k free,     97536k cached

    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   2504 qemu      20   0 2425m 1.8g  448 S 10.0 23.4   3547:59 qemu-kvm
   2258 qemu      20   0 2425m 1.7g  444 S  2.7 21.7   1288:15 qemu-kvm
  18061 qemu      20   0 2433m 1.8g  428 S  2.3 23.4 401:01.99 qemu-kvm
  10335 qemu      20   0 1864m 861m  456 S  1.0 10.9   2:04.26 qemu-kvm
  [snip]
  
  Why is it doing this?!?  ;'(
  
  
  Please post the contents of /proc/meminfo and /proc/zoneinfo when
  this is happening.
 
 I just noticed that the amount of RAM the VMs had in VIRT added up
 to considerably more than the host's actual RAM; hard_limit is now
 on.  So I may not be able to replicate this.  :)

Or not; even with hard_limit the VIRT value goes to hundreds of MiB
more than the limit.  Is that expected?

-Robin


Re: [kvm] Questions about duplicate memory work

2011-09-27 Thread Robin Lee Powell
On Mon, Sep 26, 2011 at 04:15:37PM +0800, Emmanuel Noobadmin wrote:
 On 9/26/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
  On Mon, Sep 26, 2011 at 01:49:18PM +0800, Emmanuel Noobadmin wrote:
  On 9/25/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
  
   OK, so I've got a Linux host, and a bunch of Linux VMs.
  
   This means that the host *and* all the VMs do their own disk
   caches/buffers and do their own swap as well.
 
  If I'm not wrong, that's why the recommended and current default
  in libvirtd is to create storage devices with no caching to remove
  one layer of duplication.
 
  How do you do that?  I have my VMs using LVs created on the host as
  their disks, but I'm open to other methods if there are significant
  advantages.
 
 It's unrelated to what you're actually using as the disks, whether
 file or block devices like LVs. I think it just makes KVM tell the
 host not to cache I/O done on the storage device. 

Wait, hold on, I think I had it backwards.

It tells the *host* to not cache the device in question, or the
*VMs* to not cache the device in question?

-Robin


Re: [kvm] Questions about duplicate memory work

2011-09-27 Thread Emmanuel Noobadmin
On 9/27/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
 On Mon, Sep 26, 2011 at 04:15:37PM +0800, Emmanuel Noobadmin wrote:
 It's unrelated to what you're actually using as the disks, whether
 file or block devices like LVs. I think it just makes KVM tell the
 host not to cache I/O done on the storage device.

 Wait, hold on, I think I had it backwards.

 It tells the *host* to not cache the device in question, or the
 *VMs* to not cache the device in question?

I'm fairly certain it tells qemu not to cache the device in
question. If you don't want the guests to cache their I/O, then the
guest OS should be configured accordingly, if it allows that. Although
I'm not sure it's possible to disable disk buffering/caching
system-wide in Linux.
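
Concretely, it's the per-disk cache attribute on the driver element in the
libvirt domain XML; a minimal sketch for an LV-backed disk like yours (the
LV path is illustrative):

  <disk type='block' device='disk'>
    <!-- cache='none' opens the backing device with O_DIRECT, bypassing
         the host page cache; the guest's own page cache is untouched -->
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/vg0/guest-root'/>
    <target dev='vda' bus='virtio'/>
  </disk>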


Re: [kvm] Questions about duplicate memory work

2011-09-27 Thread Robin Lee Powell
On Tue, Sep 27, 2011 at 04:41:33PM +0800, Emmanuel Noobadmin wrote:
 On 9/27/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
  On Mon, Sep 26, 2011 at 04:15:37PM +0800, Emmanuel Noobadmin
  wrote:
  It's unrelated to what you're actually using as the disks,
  whether file or block devices like LVs. I think it just makes
  KVM tell the host not to cache I/O done on the storage device.
 
  Wait, hold on, I think I had it backwards.
 
  It tells the *host* to not cache the device in question, or the
  *VMs* to not cache the device in question?
 
 I'm fairly certain it tells qemu not to cache the device in
 question. If you don't want the guests to cache their I/O, then the
 guest OS should be configured accordingly, if it allows that. Although
 I'm not sure it's possible to disable disk buffering/caching
 system-wide in Linux.

OK, great, thanks.

Now if I could just figure out how to stop the host from swapping
out much of the VMs' qemu-kvm procs when it has almost a GiB of RAM
left.  -_-  swappiness 0 doesn't seem to help there.
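
(For anyone trying the same thing, this is the knob I mean; note it's only
a hint to the kernel's reclaim heuristics, not a hard guarantee that qemu
pages stay resident:

  # runtime change
  sysctl vm.swappiness=0
  # equivalently, via procfs:
  echo 0 > /proc/sys/vm/swappiness
  # persistent across reboots: add "vm.swappiness = 0" to /etc/sysctl.conf
)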

-Robin


Re: [kvm] Questions about duplicate memory work

2011-09-27 Thread Robin Lee Powell
On Tue, Sep 27, 2011 at 01:48:43AM -0700, Robin Lee Powell wrote:
 On Tue, Sep 27, 2011 at 04:41:33PM +0800, Emmanuel Noobadmin wrote:
  On 9/27/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
   On Mon, Sep 26, 2011 at 04:15:37PM +0800, Emmanuel Noobadmin
   wrote:
   It's unrelated to what you're actually using as the disks,
   whether file or block devices like LVs. I think it just makes
   KVM tell the host not to cache I/O done on the storage device.
  
   Wait, hold on, I think I had it backwards.
  
   It tells the *host* to not cache the device in question, or the
   *VMs* to not cache the device in question?
  
  I'm fairly certain it tells qemu not to cache the device in
  question. If you don't want the guests to cache their I/O, then the
  guest OS should be configured accordingly, if it allows that. Although
  I'm not sure it's possible to disable disk buffering/caching
  system-wide in Linux.
 
 OK, great, thanks.
 
 Now if I could just figure out how to stop the host from swapping
 out much of the VMs' qemu-kvm procs when it has almost a GiB of RAM
 left.  -_-  swappiness 0 doesn't seem to help there.

Grrr.

I turned swap off to clear it.  A few minutes ago, this host was at
zero swap:

top - 01:59:10 up 10 days, 15:17,  3 users,  load average: 6.39, 4.26, 3.24
Tasks: 151 total,   1 running, 150 sleeping,   0 stopped,   0 zombie
Cpu(s):  6.6%us,  1.0%sy,  0.0%ni, 85.9%id,  6.3%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:   8128772k total,  6511116k used,  1617656k free,    14800k buffers
Swap:  8388604k total,   672828k used,  7715776k free,     97536k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2504 qemu      20   0 2425m 1.8g  448 S 10.0 23.4   3547:59 qemu-kvm
 2258 qemu      20   0 2425m 1.7g  444 S  2.7 21.7   1288:15 qemu-kvm
18061 qemu      20   0 2433m 1.8g  428 S  2.3 23.4 401:01.99 qemu-kvm
10335 qemu      20   0 1864m 861m  456 S  1.0 10.9   2:04.26 qemu-kvm
[snip]

Why is it doing this?!?  ;'(

(I don't know if anyone really has an answer, just wanted to rant)

-Robin


Re: [kvm] Questions about duplicate memory work

2011-09-27 Thread Avi Kivity

On 09/27/2011 12:00 PM, Robin Lee Powell wrote:

On Tue, Sep 27, 2011 at 01:48:43AM -0700, Robin Lee Powell wrote:
  On Tue, Sep 27, 2011 at 04:41:33PM +0800, Emmanuel Noobadmin wrote:
On 9/27/11, Robin Lee Powell rlpow...@digitalkingdom.org wrote:
  On Mon, Sep 26, 2011 at 04:15:37PM +0800, Emmanuel Noobadmin
  wrote:
  It's unrelated to what you're actually using as the disks,
  whether file or block devices like LVs. I think it just makes
  KVM tell the host not to cache I/O done on the storage device.

  Wait, hold on, I think I had it backwards.

  It tells the *host* to not cache the device in question, or the
  *VMs* to not cache the device in question?
  
I'm fairly certain it tells qemu not to cache the device in
question. If you don't want the guests to cache their I/O, then the
guest OS should be configured accordingly, if it allows that. Although
I'm not sure it's possible to disable disk buffering/caching
system-wide in Linux.

  OK, great, thanks.

  Now if I could just figure out how to stop the host from swapping
  out much of the VMs' qemu-kvm procs when it has almost a GiB of RAM
  left.  -_-  swappiness 0 doesn't seem to help there.

Grrr.

I turned swap off to clear it.  A few minutes ago, this host was at
zero swap:

top - 01:59:10 up 10 days, 15:17,  3 users,  load average: 6.39, 4.26, 3.24
Tasks: 151 total,   1 running, 150 sleeping,   0 stopped,   0 zombie
Cpu(s):  6.6%us,  1.0%sy,  0.0%ni, 85.9%id,  6.3%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:   8128772k total,  6511116k used,  1617656k free,    14800k buffers
Swap:  8388604k total,   672828k used,  7715776k free,     97536k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2504 qemu      20   0 2425m 1.8g  448 S 10.0 23.4   3547:59 qemu-kvm
 2258 qemu      20   0 2425m 1.7g  444 S  2.7 21.7   1288:15 qemu-kvm
18061 qemu      20   0 2433m 1.8g  428 S  2.3 23.4 401:01.99 qemu-kvm
10335 qemu      20   0 1864m 861m  456 S  1.0 10.9   2:04.26 qemu-kvm
[snip]

Why is it doing this?!?  ;'(



Please post the contents of /proc/meminfo and /proc/zoneinfo when this 
is happening.


--
error compiling committee.c: too many arguments to function
