Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-12-18 Thread Lukáš Doktor

Hello ppl,

As promised, here is a new version with the modifications you asked for.
It's a complete package based on the newest GIT version.


[Changelog]
- new structure (tests_base.cfg, ...)
- improved log
- get_stat() raises an error when accessing a dead VM
- get_stat() split into 2 functions: _get_stat() returns an int, 
get_stat() returns a log string

- use of session.close() instead of get_command_status('exit;')
- PID of the VM is taken using the -pidfile option (RFC: it would be nice to 
have this in the framework by default)

- fixed a possible infinite loop (i = i + 1)
- a 32bit host supports a 3.1GB guest, a 64bit host has no limitation; 
detection uses the image file name


[Not changed]
- We skip the merge of the serial and parallel init functions, as the result 
would be far more complicated (= more possible errors)
From 3420916facae18f45617e3c25c365eaa59c0374c Mon Sep 17 00:00:00 2001
From: Lukáš Doktor me...@book.localdomain
Date: Fri, 18 Dec 2009 15:56:31 +0100
Subject: [KSM-autotest] KSM overcommit v2 modification
[Changelog]
 - new structure (tests_base.cfg, ...)
 - improved log
 - get_stat() raises an error when accessing a dead VM
 - get_stat() split into 2 functions: _get_stat() returns an int, get_stat() returns a log string
 - PID of the VM is taken using the -pidfile option (RFC: it would be nice to have this in the framework by default)
 - fixed a possible infinite loop (i = i + 1)
 - a 32bit host supports a 3.1GB guest, a 64bit host has no limitation; detection uses the image file name

[Not changed]
- We skip the merge of the serial and parallel init functions, as the result would be far more complicated (= more possible errors)
---
 client/tests/kvm/tests/ksm_overcommit.py |  616 ++
 client/tests/kvm/tests_base.cfg.sample   |   18 +
 client/tests/kvm/unattended/allocator.py |  213 ++
 3 files changed, 847 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/ksm_overcommit.py
 create mode 100644 client/tests/kvm/unattended/allocator.py

diff --git a/client/tests/kvm/tests/ksm_overcommit.py b/client/tests/kvm/tests/ksm_overcommit.py
new file mode 100644
index 000..a726e1c
--- /dev/null
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -0,0 +1,616 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+import kvm_subprocess, kvm_test_utils, kvm_utils
+import kvm_preprocessing
+import random, string, math, os
+
+def run_ksm_overcommit(test, params, env):
+    """
+    Test how KSM (Kernel Shared Memory) acts when more than the physical
+    memory is used. The second part also tests how KVM handles the situation
+    when the host runs out of memory (the expected behaviour is to pause the
+    guest system, wait until some process returns the memory and bring the
+    guest back to life).
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+
+    def parse_meminfo(rowName):
+        """
+        Get data from /proc/meminfo.
+
+        @param rowName: Name of the line in meminfo
+        """
+        for line in open('/proc/meminfo').readlines():
+            if line.startswith(rowName + ":"):
+                name, amt, unit = line.split()
+                return name, amt, unit
+
+    def parse_meminfo_value(rowName):
+        """
+        Convert a meminfo value to int.
+
+        @param rowName: Name of the line in meminfo
+        """
+        name, amt, unit = parse_meminfo(rowName)
+        return amt
+
+    def _get_stat(vm):
+        if vm.is_dead():
+            raise error.TestError("_get_stat: Trying to get information of a "
+                                  "dead VM: %s" % vm.name)
+        try:
+            cmd = "cat /proc/%d/statm" % params.get('pid_' + vm.name)
+            shm = int(os.popen(cmd).readline().split()[2])
+            # statm stores information in pages, recalculate to MB
+            shm = shm * 4 / 1024
+        except:
+            raise error.TestError("_get_stat: Could not fetch shmem info from "
+                                  "VM: %s" % vm.name)
+        return shm
+
+    def get_stat(lvms):
+        """
+        Get statistics in the format:
+        Host: memfree = XXXM; Guests memsh = {XXX,XXX,...}
+
+        @param lvms: List of VMs
+        """
+        if not isinstance(lvms, list):
+            raise error.TestError("get_stat: parameter has to be a proper list")
+
+        try:
+            stat = "Host: memfree = "
+            stat += str(int(parse_meminfo_value("MemFree")) / 1024) + "M; "
+            stat += "swapfree = "
+            stat += str(int(parse_meminfo_value("SwapFree")) / 1024) + "M; "
+        except:
+            raise error.TestFail("Could not fetch free memory info")
+
+        stat += "Guests memsh = {"
+        for vm in lvms:
+            stat += "%dM; " % (_get_stat(vm))
+        stat = stat[0:-2] + "}"
+        return stat
+
+    def tmp_file(file, ext=None, dir='/tmp/'):
+        while True:
+

Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-12-01 Thread Lukáš Doktor

Dne 29.11.2009 17:17, Dor Laor napsal(a):

On 11/26/2009 12:11 PM, Lukáš Doktor wrote:

Hello Dor,

Thank you for your review. I have a few questions about your comments:

--- snip ---

+ stat += "Guests memsh = {"
+ for vm in lvms:
+ if vm.is_dead():
+ logging.info("Trying to get informations of death VM: %s"
+ % vm.name)
+ continue


You can fail the entire test. Afterwards it will be hard to find the
issue.



Well if it's what the community wants, we can change it. We just didn't
want to lose information about the rest of the systems. Perhaps we can
set some DIE flag and after collecting all statistics raise an Error.


I don't think we need to continue testing if something as basic as a VM
died on us.

OK, we are going to change this.





--- snip ---

+ def get_true_pid(vm):
+ pid = vm.process.get_pid()
+ for i in range(1,10):
+ pid = pid + 1


What are you trying to do here? It seems like a nasty hack that might
fail under load.




qemu has a -pidfile option. It works fine.

Oh my, I haven't thought on this. Of course I'm going to use -pidfile 
instead of this silly thing...




Yes, and I'm really sorry for this ugly hack. The qemu command has
changed since the first patch was made. Nowadays vm.pid returns the
PID of the command itself, not of the actual qemu process.
We need the PID of the actual qemu process, which is executed by
the command with PID vm.pid. That's why I first try to find the qemu
process as the PID following vm.pid. I haven't found another solution
yet (in case we don't want to change the qemu command back in the
framework).
We have tested this solution under heavy process load and either the first
or the second part always finds the right value.

--- snip ---

+ if (params['ksm_test_size'] == "paralel"):
+ vmsc = 1
+ overcommit = 1
+ mem = host_mem
+ # 32bit system adjustment
+ if not params['image_name'].endswith("64"):
+ logging.debug("Probably i386 guest architecture, \
+ max allocator mem = 2G")


Better not to rely on the guest name. You can test the percentage of the
guest mem.



What do you mean by percentage of the guest mem? This adjustment is
made because the maximum memory for one process on a 32 bit OS is 2GB.
Testing the 'image_name' proved to be the most reliable method we found.



It's not that important, but it should be a convention of kvm autotest.
If that's acceptable, fine; otherwise, each VM will define it in the
config file.

Yes, kvm-autotest definitely needs a way to decide whether a guest is 32 or 
64 bit. I'll send a separate email to the KVM-autotest mailing list to 
let others express their opinions.




--- snip ---

+ # Guest can have more than 2G but kvm mem + 1MB (allocator itself)
+ # can't
+ if (host_mem > 2048):
+ mem = 2047
+
+
+ if os.popen("uname -i").readline().startswith("i386"):
+ logging.debug("Host is i386 architecture, max guest mem is 2G")


There are bigger 32 bit guests.


What do you mean by this note? We are testing whether the host machine is 32
bit. If so, the maximum process allocation is 2GB (similar to the 32
bit guest case), but this time the whole qemu process (2GB qemu machine + 64
MB qemu overhead) can't exceed 2GB.
Still, the maximum memory used in the test is the same (as we increase the VM
count - host_mem = guest_mem * vm_count; guest_mem is decreased,
vm_count is increased)


i386 guests with PAE mode (4 additional address bits) can have up to 16G ram
in theory.

OK, so we should first check whether PAE is on and separate into 3 groups 
(64bit-unlimited, PAE-16G, 32bit-2G).
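A hedged sketch of that three-way split on a Linux host (the helper name and
the return values are made up for illustration):

```python
import os


def host_mem_limit_group():
    # Classify the host into the three groups discussed above:
    # 'unlimited' (64bit), '16G' (i386 with PAE) or '2G' (plain i386).
    arch = os.uname()[4]
    if arch == "x86_64":
        return "unlimited"
    # On 32bit hosts, look for the 'pae' CPU flag in /proc/cpuinfo.
    for line in open("/proc/cpuinfo"):
        if line.startswith("flags") and " pae" in line:
            return "16G"
    return "2G"
```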




--- snip ---

+
+ # Copy the allocator.c into guests


.py


yes indeed.

--- snip ---

+ # Let ksmd work (until shared mem reaches the expected value)
+ shm = 0
+ i = 0
+ cmd = "cat /proc/%d/statm" % get_true_pid(vm)
+ while shm < ksm_size:
+ if i > 64:
+ logging.info(get_stat(lvms))
+ raise error.TestError("SHM didn't merge the memory until \
+ the DL on guest: %s" % (vm.name))
+ logging.debug("Sleep(%d)" % (ksm_size / 200 * perf_ratio))
+ time.sleep(ksm_size / 200 * perf_ratio)
+ try:
+ shm = int(os.popen(cmd).readline().split()[2])
+ shm = shm * 4 / 1024
+ i = i + 1


Either you have nice statistic calculation function or not.
I vote for the first case.



Yes, we are using the statistics function for the output. But in this
case we just need to know the shm value, not to log anything.
If this is a big problem for others too, we can split the statistics
function into 2:
int = _get_stat(vm) - returns the shm value
string = get_stat(vm) - uses _get_stat and creates a nice log output

--- snip ---

+ """ Check if memory in max loading guest is alright """
+ logging.info("Starting phase 3b")
+
+ """ Kill rest of machine """


We should have a function for it for all kvm autotest



You mean lsessions[i].close() instead of (status, data) =
lsessions[i].get_command_status_output("exit;", 20)?
Yes, it would be better.


+ for i in range(last_vm+1, vmsc):
+ (status, data) = lsessions[i].get_command_status_output("exit;", 20)
+ 

Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-11-29 Thread Dor Laor

On 11/26/2009 12:11 PM, Lukáš Doktor wrote:

Hello Dor,

Thank you for your review. I have a few questions about your comments:

--- snip ---

+ stat += "Guests memsh = {"
+ for vm in lvms:
+ if vm.is_dead():
+ logging.info("Trying to get informations of death VM: %s"
+ % vm.name)
+ continue


You can fail the entire test. Afterwards it will be hard to find the
issue.



Well if it's what the community wants, we can change it. We just didn't
want to lose information about the rest of the systems. Perhaps we can
set some DIE flag and after collecting all statistics raise an Error.


I don't think we need to continue testing if something as basic as a VM 
died on us.




--- snip ---

+ def get_true_pid(vm):
+ pid = vm.process.get_pid()
+ for i in range(1,10):
+ pid = pid + 1


What are you trying to do here? It seems like a nasty hack that might
fail under load.




qemu has a -pidfile option. It works fine.



Yes, and I'm really sorry for this ugly hack. The qemu command has
changed since the first patch was made. Nowadays vm.pid returns the
PID of the command itself, not of the actual qemu process.
We need the PID of the actual qemu process, which is executed by
the command with PID vm.pid. That's why I first try to find the qemu
process as the PID following vm.pid. I haven't found another solution
yet (in case we don't want to change the qemu command back in the
framework).
We have tested this solution under heavy process load and either the first
or the second part always finds the right value.

--- snip ---

+ if (params['ksm_test_size'] == "paralel"):
+ vmsc = 1
+ overcommit = 1
+ mem = host_mem
+ # 32bit system adjustment
+ if not params['image_name'].endswith("64"):
+ logging.debug("Probably i386 guest architecture, \
+ max allocator mem = 2G")


Better not to rely on the guest name. You can test the percentage of the
guest mem.



What do you mean by percentage of the guest mem? This adjustment is
made because the maximum memory for one process on a 32 bit OS is 2GB.
Testing the 'image_name' proved to be the most reliable method we found.



It's not that important, but it should be a convention of kvm autotest.
If that's acceptable, fine; otherwise, each VM will define it in the 
config file.




--- snip ---

+ # Guest can have more than 2G but kvm mem + 1MB (allocator itself)
+ # can't
+ if (host_mem > 2048):
+ mem = 2047
+
+
+ if os.popen("uname -i").readline().startswith("i386"):
+ logging.debug("Host is i386 architecture, max guest mem is 2G")


There are bigger 32 bit guests.


What do you mean by this note? We are testing whether the host machine is 32
bit. If so, the maximum process allocation is 2GB (similar to the 32
bit guest case), but this time the whole qemu process (2GB qemu machine + 64
MB qemu overhead) can't exceed 2GB.
Still, the maximum memory used in the test is the same (as we increase the VM
count - host_mem = guest_mem * vm_count; guest_mem is decreased,
vm_count is increased)


i386 guests with PAE mode (4 additional address bits) can have up to 16G ram 
in theory.




--- snip ---

+
+ # Copy the allocator.c into guests


.py


yes indeed.

--- snip ---

+ # Let ksmd work (until shared mem reaches the expected value)
+ shm = 0
+ i = 0
+ cmd = "cat /proc/%d/statm" % get_true_pid(vm)
+ while shm < ksm_size:
+ if i > 64:
+ logging.info(get_stat(lvms))
+ raise error.TestError("SHM didn't merge the memory until \
+ the DL on guest: %s" % (vm.name))
+ logging.debug("Sleep(%d)" % (ksm_size / 200 * perf_ratio))
+ time.sleep(ksm_size / 200 * perf_ratio)
+ try:
+ shm = int(os.popen(cmd).readline().split()[2])
+ shm = shm * 4 / 1024
+ i = i + 1


Either you have nice statistic calculation function or not.
I vote for the first case.



Yes, we are using the statistics function for the output. But in this
case we just need to know the shm value, not to log anything.
If this is a big problem for others too, we can split the statistics
function into 2:
int = _get_stat(vm) - returns the shm value
string = get_stat(vm) - uses _get_stat and creates a nice log output

--- snip ---

+ """ Check if memory in max loading guest is alright """
+ logging.info("Starting phase 3b")
+
+ """ Kill rest of machine """


We should have a function for it for all kvm autotest



You mean lsessions[i].close() instead of (status, data) =
lsessions[i].get_command_status_output("exit;", 20)?
Yes, it would be better.


+ for i in range(last_vm+1, vmsc):
+ (status, data) = lsessions[i].get_command_status_output("exit;", 20)
+ if i == (vmsc-1):
+ logging.info(get_stat([lvms[i]]))
+ lvms[i].destroy(gracefully = False)


--- snip ---

+ def phase_paralel():
+ """ Paralel page spliting """
+ logging.info("Phase 1: Paralel page spliting")
+ # We have to wait until allocator is finished (it waits 5 seconds to
+ # clean the socket)
+


The whole function is very similar to phase_separate_first_guest, please
refactor them.


Yes, those functions are a bit similar. On the other hand there are 

Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-11-26 Thread Lukáš Doktor

Hello Dor,

Thank you for your review. I have a few questions about your comments:

--- snip ---

+ stat += "Guests memsh = {"
+ for vm in lvms:
+ if vm.is_dead():
+ logging.info("Trying to get informations of death VM: %s"
+ % vm.name)
+ continue


You can fail the entire test. Afterwards it will be hard to find the issue.



Well if it's what the community wants, we can change it. We just didn't 
want to lose information about the rest of the systems. Perhaps we can 
set some DIE flag and after collecting all statistics raise an Error.


--- snip ---

+ def get_true_pid(vm):
+ pid = vm.process.get_pid()
+ for i in range(1,10):
+ pid = pid + 1


What are you trying to do here? It seems like a nasty hack that might
fail under load.



Yes, and I'm really sorry for this ugly hack. The qemu command has 
changed since the first patch was made. Nowadays vm.pid returns the 
PID of the command itself, not of the actual qemu process.
We need the PID of the actual qemu process, which is executed by 
the command with PID vm.pid. That's why I first try to find the qemu 
process as the PID following vm.pid. I haven't found another solution 
yet (in case we don't want to change the qemu command back in the 
framework).
We have tested this solution under heavy process load and either the first 
or the second part always finds the right value.


--- snip ---

+ if (params['ksm_test_size'] == "paralel"):
+ vmsc = 1
+ overcommit = 1
+ mem = host_mem
+ # 32bit system adjustment
+ if not params['image_name'].endswith("64"):
+ logging.debug("Probably i386 guest architecture, \
+ max allocator mem = 2G")


Better not to rely on the guest name. You can test the percentage of the
guest mem.



What do you mean by percentage of the guest mem? This adjustment is 
made because the maximum memory for one process on a 32 bit OS is 2GB.

Testing the 'image_name' proved to be the most reliable method we found.

--- snip ---

+ # Guest can have more than 2G but kvm mem + 1MB (allocator itself)
+ # can't
+ if (host_mem > 2048):
+ mem = 2047
+
+
+ if os.popen("uname -i").readline().startswith("i386"):
+ logging.debug("Host is i386 architecture, max guest mem is 2G")


There are bigger 32 bit guests.

What do you mean by this note? We are testing whether the host machine is 32 
bit. If so, the maximum process allocation is 2GB (similar to the 32 
bit guest case), but this time the whole qemu process (2GB qemu machine + 64 
MB qemu overhead) can't exceed 2GB.
Still, the maximum memory used in the test is the same (as we increase the VM 
count - host_mem = guest_mem * vm_count; guest_mem is decreased, 
vm_count is increased)


--- snip ---

+
+ # Copy the allocator.c into guests


.py


yes indeed.

--- snip ---

+ # Let ksmd work (until shared mem reaches the expected value)
+ shm = 0
+ i = 0
+ cmd = "cat /proc/%d/statm" % get_true_pid(vm)
+ while shm < ksm_size:
+ if i > 64:
+ logging.info(get_stat(lvms))
+ raise error.TestError("SHM didn't merge the memory until \
+ the DL on guest: %s" % (vm.name))
+ logging.debug("Sleep(%d)" % (ksm_size / 200 * perf_ratio))
+ time.sleep(ksm_size / 200 * perf_ratio)
+ try:
+ shm = int(os.popen(cmd).readline().split()[2])
+ shm = shm * 4 / 1024
+ i = i + 1


Either you have nice statistic calculation function or not.
I vote for the first case.



Yes, we are using the statistics function for the output. But in this 
case we just need to know the shm value, not to log anything.
If this is a big problem for others too, we can split the statistics 
function into 2:

int = _get_stat(vm) - returns the shm value
string = get_stat(vm) - uses _get_stat and creates a nice log output

--- snip ---

+ """ Check if memory in max loading guest is alright """
+ logging.info("Starting phase 3b")
+
+ """ Kill rest of machine """


We should have a function for it for all kvm autotest



You mean lsessions[i].close() instead of (status, data) = 
lsessions[i].get_command_status_output("exit;", 20)?

Yes, it would be better.


+ for i in range(last_vm+1, vmsc):
+ (status, data) = lsessions[i].get_command_status_output("exit;", 20)
+ if i == (vmsc-1):
+ logging.info(get_stat([lvms[i]]))
+ lvms[i].destroy(gracefully = False)


--- snip ---

+ def phase_paralel():
+ """ Paralel page spliting """
+ logging.info("Phase 1: Paralel page spliting")
+ # We have to wait until allocator is finished (it waits 5 seconds to
+ # clean the socket)
+


The whole function is very similar to phase_separate_first_guest, please
refactor them.

Yes, those functions are a bit similar. On the other hand there are some 
differences. Instead of creating a more complex function we agreed to 
split them for better readability of the code.


--- snip ---

+ while shm < ksm_size:
+ if i > 64:
+ logging.info(get_stat(lvms))
+ raise error.TestError("SHM didn't merge the memory until DL")
+ logging.debug("Sleep(%d)" % (ksm_size / 200 * perf_ratio))
+ time.sleep(ksm_size / 200 * perf_ratio)
+ try:
+ shm = 

Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-11-22 Thread Dor Laor

On 11/17/2009 04:49 PM, Jiri Zupka wrote:

Hi,
   We found a little mistake in the ending of allocator.py.
Because I sent this patch today, I resend the whole repaired patch again.



It sure is a big improvement over the previous version.
There is still a lot of refactoring to be done to make it more readable.
Comments embedded.


- Original Message -
From: Jiri Zupkajzu...@redhat.com
To: autotestautot...@test.kernel.org, kvmkvm@vger.kernel.org
Cc:u...@redhat.com
Sent: Tuesday, November 17, 2009 12:52:28 AM GMT +01:00 Amsterdam / Berlin / 
Bern / Rome / Stockholm / Vienna
Subject: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

Hi,
   based on your requirements we have created new version
of KSM-overcommit patch (submitted in September).

Description:
   It tests KSM (kernel shared memory) with overcommit of memory.

Changelog:
   1) Based only on python (removed C code)
   2) Added a new test (check last 96B)
   3) Separated the test into (serial, parallel, both)
   4) Improved log and documentation
   5) Added a perf constant to change the time limit for waiting (slow
  computer problem)

Functionality:
   The KSM test starts guests and connects to them over ssh.
   It copies allocator.py to the guests and runs it.
   The host can run any python command through the allocator.py loop on the
   client side.

   Start run_ksm_overcommit.
   Define host and guest reserve variables (host_reserve, guest_reserve).
   Calculate the number of virtual machines and their memory based on the
   variables host_mem and overcommit.
   Check KSM status.
   Create and start the virtual guests.
   Test:
a] serial
 1) initialize, merge all mem to a single page
 2) separate the mem of the first guest
 3) separate the mem of the remaining guests up to fill all mem
 4) kill all guests except for the last
 5) check if the mem of the last guest is ok
 6) kill the guest
b] parallel
 1) initialize, merge all mem to a single page
 2) separate the mem of the guest
 3) verify the guest mem
 4) merge the mem to one block
 5) verify the guests' mem
 6) separate the mem of the guests by 96B
 7) check if the mem is all right
 8) kill the guest
   allocator.py (client side script)
 After start it waits for commands which it executes on the client side.
 The mem_fill class implements the commands to fill and check mem and
 return errors to the host.

We need the client side script because we need to generate lots of GB of
special data.

Future plans:
   We want to add to the log information about the time spent in each task.
   We want to use the information from the log to automatically compute the
   perf constant. And add new tests.
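The allocator.py command loop described above could be sketched like this
(the two transport callables are placeholders for the real socket handling,
which is not shown in this excerpt):

```python
def allocator_loop(read_command, send_status):
    # Receive python statements from the host, execute them on the guest
    # side, and report PASS/FAIL back over the (abstracted) channel.
    while True:
        command = read_command()
        if command == "exit":
            break
        try:
            exec(command)
            send_status("PASS")
        except Exception as exc:
            send_status("FAIL: %s" % exc)
```

The host drives the guest by writing statements such as memory fill/check
calls into this loop and reading the status line back.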










___
Autotest mailing list
autot...@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest


ksm_overcommit.patch


diff --git a/client/tests/kvm/kvm_tests.cfg.sample 
b/client/tests/kvm/kvm_tests.cfg.sample
index ac9ef66..90f62bb 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -118,6 +118,23 @@ variants:
  test_name = npb
  test_control_file = npb.control

+- ksm_overcommit:
+# Don't preprocess any vms as we need to change its params
+vms = ''
+image_snapshot = yes
+kill_vm_gracefully = no
+type = ksm_overcommit
+ksm_swap = yes   # yes | no
+no hugepages
+# Overcommit of host memory
+ksm_overcommit_ratio = 3
+# Maximum number of parallel machine runs
+ksm_paralel_ratio = 4
+variants:
+- serial
+ksm_test_size = serial
+- paralel
+ksm_test_size = paralel

  - linux_s3: install setup unattended_install
  type = linux_s3
diff --git a/client/tests/kvm/tests/ksm_overcommit.py 
b/client/tests/kvm/tests/ksm_overcommit.py
new file mode 100644
index 000..408e711
--- /dev/null
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -0,0 +1,605 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+import kvm_subprocess, kvm_test_utils, kvm_utils
+import kvm_preprocessing
+import random, string, math, os
+
+def run_ksm_overcommit(test, params, env):
+    """
+    Test how KSM (Kernel Shared Memory) acts when more than the physical
+    memory is used. The second part also tests how KVM handles the situation
+    when the host runs out of memory (the expected behaviour is to pause the
+    guest system, wait until some process returns the memory and bring the
+    guest back to life).
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+
+    def parse_meminfo(rowName):
+        """
+        Get data from /proc/meminfo.
+
+        @param rowName: Name of the line in meminfo
+        """
+        for line in open('/proc/meminfo').readlines():
+            if line.startswith(rowName + ":"):
+                name, amt, unit = line.split()
+                return name, amt, unit
+
+    def parse_meminfo_value(rowName

Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-11-17 Thread Jiri Zupka
Hi,
  We found a little mistake in the ending of allocator.py.
Because I sent this patch today, I resend the whole repaired patch again.


- Original Message -
From: Jiri Zupka jzu...@redhat.com
To: autotest autot...@test.kernel.org, kvm kvm@vger.kernel.org
Cc: u...@redhat.com
Sent: Tuesday, November 17, 2009 12:52:28 AM GMT +01:00 Amsterdam / Berlin / 
Bern / Rome / Stockholm / Vienna
Subject: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

Hi,  
  based on your requirements we have created a new version
of the KSM-overcommit patch (submitted in September).

Description:
  It tests KSM (kernel shared memory) with overcommit of memory.

Changelog:
  1) Based only on python (removed C code)
  2) Added a new test (check last 96B)
  3) Separated the test into (serial, parallel, both)
  4) Improved log and documentation
  5) Added a perf constant to change the time limit for waiting (slow
 computer problem)

Functionality:
  The KSM test starts guests and connects to them over ssh.
  It copies allocator.py to the guests and runs it.
  The host can run any python command through the allocator.py loop on the
  client side.

  Start run_ksm_overcommit.
  Define host and guest reserve variables (host_reserve, guest_reserve).
  Calculate the number of virtual machines and their memory based on the
  variables host_mem and overcommit.
  Check KSM status.
  Create and start the virtual guests.
  Test:
   a] serial
1) initialize, merge all mem to a single page
2) separate the mem of the first guest
3) separate the mem of the remaining guests up to fill all mem
4) kill all guests except for the last
5) check if the mem of the last guest is ok
6) kill the guest
   b] parallel
1) initialize, merge all mem to a single page
2) separate the mem of the guest
3) verify the guest mem
4) merge the mem to one block
5) verify the guests' mem
6) separate the mem of the guests by 96B
7) check if the mem is all right
8) kill the guest
  allocator.py (client side script)
After start it waits for commands which it executes on the client side.
The mem_fill class implements the commands to fill and check mem and
return errors to the host.

We need the client side script because we need to generate lots of GB of
special data.

Future plans:
  We want to add to the log information about the time spent in each task.
  We want to use the information from the log to automatically compute the
  perf constant. And add new tests.
  






  


diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
index ac9ef66..90f62bb 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -118,6 +118,23 @@ variants:
 test_name = npb
 test_control_file = npb.control
 
+- ksm_overcommit:
+# Don't preprocess any vms as we need to change its params
+vms = ''
+image_snapshot = yes
+kill_vm_gracefully = no
+type = ksm_overcommit
+ksm_swap = yes   # yes | no
+no hugepages
+# Overcommit of host memory
+ksm_overcommit_ratio = 3
+# Maximum number of parallel machine runs
+ksm_paralel_ratio = 4
+variants:
+- serial
+ksm_test_size = serial
+- paralel
+ksm_test_size = paralel
 
 - linux_s3: install setup unattended_install
 type = linux_s3
diff --git a/client/tests/kvm/tests/ksm_overcommit.py b/client/tests/kvm/tests/ksm_overcommit.py
new file mode 100644
index 000..408e711
--- /dev/null
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -0,0 +1,605 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+import kvm_subprocess, kvm_test_utils, kvm_utils
+import kvm_preprocessing
+import random, string, math, os
+
+def run_ksm_overcommit(test, params, env):
+    """
+    Test how KSM (Kernel Shared Memory) acts when more than the physical
+    memory is used. The second part also tests how KVM handles the situation
+    when the host runs out of memory (the expected behaviour is to pause the
+    guest system, wait until some process returns the memory and bring the
+    guest back to life).
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+
+    def parse_meminfo(rowName):
+        """
+        Get data from /proc/meminfo.
+
+        @param rowName: Name of the line in meminfo
+        """
+        for line in open('/proc/meminfo').readlines():
+            if line.startswith(rowName + ":"):
+                name, amt, unit = line.split()
+                return name, amt, unit
+
+    def parse_meminfo_value(rowName):
+        """
+        Convert a meminfo value to int.
+
+        @param rowName: Name of the line in meminfo
+        """
+        name, amt, unit = parse_meminfo(rowName)
+        return amt