[PATCH] KVM test: use new functions in cdrom_test

2012-05-30 Thread Lukáš Doktor
Use get_block and other framework functions in the cdrom test. Also, don't
fail the whole test when tray-status reporting is not supported by qemu,
and apply other cleanups.

Signed-off-by: Lukáš Doktor ldok...@redhat.com
---
 client/tests/kvm/tests/cdrom.py |  118 ---
 client/virt/subtests.cfg.sample |2 +
 2 files changed, 50 insertions(+), 70 deletions(-)

diff --git a/client/tests/kvm/tests/cdrom.py b/client/tests/kvm/tests/cdrom.py
index 089150b..4390796 100644
--- a/client/tests/kvm/tests/cdrom.py
+++ b/client/tests/kvm/tests/cdrom.py
@@ -21,7 +21,7 @@ def run_cdrom(test, params, env):
     3) * If cdrom_test_autounlock is set, verifies that device is unlocked
        300s after boot
     4) Eject cdrom using monitor and change with another iso several times.
-    5) Eject cdrom in guest and check tray status reporting.
+    5) * If cdrom_test_tray_status = yes, tests tray reporting.
     6) Try to format cdrom and check the return string.
     7) Mount cdrom device.
     8) Copy file from cdrom and compare files using diff.
@@ -35,6 +35,10 @@ def run_cdrom(test, params, env):
                 eject CDROM directly after insert
     @param cfg: cdrom_test_autounlock - Test whether guest OS unlocks cdrom
                 after boot (300s after VM is booted)
+    @param cfg: cdrom_test_tray_status - Test tray reporting (eject and insert
+                CD a couple of times in the guest).
+
+    @warning: Check dmesg for block device failures
     """
     def master_cdroms(params):
         """ Creates 'new' cdrom with one file on it """
@@ -43,7 +47,7 @@ def run_cdrom(test, params, env):
         cdrom_cd1 = params.get("cdrom_cd1")
         if not os.path.isabs(cdrom_cd1):
             cdrom_cd1 = os.path.join(test.bindir, cdrom_cd1)
-        cdrom_dir = os.path.realpath(os.path.dirname(cdrom_cd1))
+        cdrom_dir = os.path.dirname(cdrom_cd1)
         utils.run("dd if=/dev/urandom of=orig bs=10M count=1")
         utils.run("dd if=/dev/urandom of=new bs=10M count=1")
         utils.run("mkisofs -o %s/orig.iso orig" % cdrom_dir)
@@ -55,57 +59,27 @@ def run_cdrom(test, params, env):
         error.context("cleaning up temp cdrom images")
         os.remove("%s/new.iso" % cdrom_dir)
 
-    def get_block_info(re_device='[^\n][^:]+'):
-        """ Gets device string and file from kvm-monitor """
-        blocks = vm.monitor.info("block")
-        devices = []
-        files = []
-        if isinstance(blocks, str):
-            devices = re.findall('(%s): .*' % re_device, blocks)
-            if devices:
-                for dev in devices:
-                    cdfile = re.findall('%s: .*file=(\S*) ' % dev, blocks)
-                    if cdfile:
-                        cdfile = os.path.realpath(cdfile[0])
-                    else:
-                        cdfile = None
-                    files.append(cdfile)
-        else:
-            for block in blocks:
-                if re.match(re_device, block['device']):
-                    devices.append(block['device'])
-                    try:
-                        cdfile = block['inserted']['file']
-                        if cdfile:
-                            cdfile = os.path.realpath(cdfile)
-                    except KeyError:
-                        cdfile = None
-                    files.append(cdfile)
-        return (devices, files)
-
-    def get_cdrom_info(device):
+    def get_cdrom_file(device):
         """
         @param device: qemu monitor device
         @return: file associated with $device device
         """
-        (_, cdfile) = get_block_info(device)
-        logging.debug("Device name: %s, ISO: %s", device, cdfile[0])
-        return cdfile[0]
-
-    def check_cdrom_locked(cdrom):
-        """ Checks whether the cdrom is locked """
         blocks = vm.monitor.info("block")
+        cdfile = None
         if isinstance(blocks, str):
-            lock_str = "locked=1"
-            for block in blocks.splitlines():
-                if cdrom in block and lock_str in block:
-                    return True
+            cdfile = re.findall('%s: .*file=(\S*) ' % device, blocks)
+            if not cdfile:
+                return None
+            else:
+                cdfile = cdfile[0]
         else:
             for block in blocks:
-                if ('inserted' in block.keys() and
-                    block['inserted']['file'] == cdrom):
-                    return block['locked']
-        return False
+                if block['device'] == device:
+                    try:
+                        cdfile = block['inserted']['file']
+                    except KeyError:
+                        continue
+        return cdfile
 
     def check_cdrom_tray(cdrom):
         """ Checks whether the tray is opened """
@@ -121,7 +95,7 @@ def run_cdrom(test, params, env):
         for block in blocks:
             if block['device'] == cdrom and 'tray_open' in block.keys
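
For reference, a minimal sketch of the graceful-degradation idea behind this
patch (only the QMP branch is shown; `vm` comes from the enclosing test, as in
the diff above):

    def check_cdrom_tray(cdrom):
        """ Checks whether the tray is opened; None when qemu can't tell """
        blocks = vm.monitor.info("block")
        if not isinstance(blocks, str):
            for block in blocks:
                if block['device'] == cdrom and 'tray_open' in block.keys():
                    return block['tray_open']
        # tray status reporting not supported by this qemu -> warn, don't fail
        return None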

Re: [PATCH] KVM test: use new functions in cdrom_test

2012-05-30 Thread Lukáš Doktor

I forgot to add the pull request link:
https://github.com/autotest/autotest/pull/368


[PATCH 1/3] KVM test: cdrom_test bugfixes

2012-05-26 Thread Lukáš Doktor
* fix more issues with symlinks and abs-paths
* detect the right cdrom device
* fix an issue with locked cdrom (workaround not needed)
* improve comments and code-style (pylint)

Signed-off-by: Lukáš Doktor ldok...@redhat.com
---
 client/tests/kvm/tests/cdrom.py |  151 +++
 client/virt/subtests.cfg.sample |6 +-
 2 files changed, 79 insertions(+), 78 deletions(-)

diff --git a/client/tests/kvm/tests/cdrom.py b/client/tests/kvm/tests/cdrom.py
index 82aaa34..089150b 100644
--- a/client/tests/kvm/tests/cdrom.py
+++ b/client/tests/kvm/tests/cdrom.py
@@ -18,22 +18,31 @@ def run_cdrom(test, params, env):
     """
     1) Boot up a VM with one iso.
     2) Check if VM identifies correctly the iso file.
-    3) Eject cdrom using monitor and change with another iso several times.
-    4) Eject cdrom in guest and check tray status reporting.
-    5) Try to format cdrom and check the return string.
-    6) Mount cdrom device.
-    7) Copy file from cdrom and compare files using diff.
-    8) Umount and mount several times.
+    3) * If cdrom_test_autounlock is set, verifies that device is unlocked
+       300s after boot
+    4) Eject cdrom using monitor and change with another iso several times.
+    5) Eject cdrom in guest and check tray status reporting.
+    6) Try to format cdrom and check the return string.
+    7) Mount cdrom device.
+    8) Copy file from cdrom and compare files using diff.
+    9) Umount and mount several times.
 
     @param test: kvm test object
     @param params: Dictionary with the test parameters
     @param env: Dictionary with test environment.
+
+    @param cfg: workaround_eject_time - Some versions of qemu are unable to
+                eject CDROM directly after insert
+    @param cfg: cdrom_test_autounlock - Test whether guest OS unlocks cdrom
+                after boot (300s after VM is booted)
     """
     def master_cdroms(params):
         """ Creates 'new' cdrom with one file on it """
         error.context("creating test cdrom")
         os.chdir(test.tmpdir)
         cdrom_cd1 = params.get("cdrom_cd1")
+        if not os.path.isabs(cdrom_cd1):
+            cdrom_cd1 = os.path.join(test.bindir, cdrom_cd1)
         cdrom_dir = os.path.realpath(os.path.dirname(cdrom_cd1))
         utils.run("dd if=/dev/urandom of=orig bs=10M count=1")
         utils.run("dd if=/dev/urandom of=new bs=10M count=1")
@@ -41,44 +50,47 @@ def run_cdrom(test, params, env):
         utils.run("mkisofs -o %s/new.iso new" % cdrom_dir)
         return "%s/new.iso" % cdrom_dir
 
-
     def cleanup_cdroms(cdrom_dir):
         """ Removes created cdrom """
         error.context("cleaning up temp cdrom images")
         os.remove("%s/new.iso" % cdrom_dir)
 
-
-    def get_cdrom_info():
+    def get_block_info(re_device='[^\n][^:]+'):
         """ Gets device string and file from kvm-monitor """
         blocks = vm.monitor.info("block")
-        (device, file) = (None, None)
+        devices = []
+        files = []
         if isinstance(blocks, str):
-            try:
-                device = re.findall("(\w+\d+-cd\d+): .*", blocks)[0]
-            except IndexError:
-                device = None
-            try:
-                file = re.findall("\w+\d+-cd\d+: .*file=(\S*) ", blocks)[0]
-                file = os.path.realpath(file)
-            except IndexError:
-                file = None
+            devices = re.findall('(%s): .*' % re_device, blocks)
+            if devices:
+                for dev in devices:
+                    cdfile = re.findall('%s: .*file=(\S*) ' % dev, blocks)
+                    if cdfile:
+                        cdfile = os.path.realpath(cdfile[0])
+                    else:
+                        cdfile = None
+                    files.append(cdfile)
         else:
             for block in blocks:
-                d = block['device']
-                try:
-                    device = re.findall("(\w+\d+-cd\d+)", d)[0]
-                except IndexError:
-                    device = None
-                    continue
-                try:
-                    file = block['inserted']['file']
-                    file = os.path.realpath(file)
-                except KeyError:
-                    file = None
-                break
-        logging.debug("Device name: %s, ISO: %s", device, file)
-        return (device, file)
-
+                if re.match(re_device, block['device']):
+                    devices.append(block['device'])
+                    try:
+                        cdfile = block['inserted']['file']
+                        if cdfile:
+                            cdfile = os.path.realpath(cdfile)
+                    except KeyError:
+                        cdfile = None
+                    files.append(cdfile)
+        return (devices, files)
+
+    def get_cdrom_info(device):
+        """
+        @param device: qemu monitor device
+        @return: file associated with $device device
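
A short usage sketch of the new helper (names taken from the patch above; the
two returned lists are parallel):

    # files[i] is the ISO (or None) currently inserted in devices[i]
    (devices, files) = get_block_info()
    for device, iso in zip(devices, files):
        logging.debug("Device name: %s, ISO: %s", device, iso)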

[PATCH 2/3] virt.kvm_vm: Fix virtio_scsi cdrom in qemu_cmd

2012-05-26 Thread Lukáš Doktor
Fixes an incorrect bus name for virtio_scsi cdroms.

Signed-off-by: Lukáš Doktor ldok...@redhat.com
---
 client/virt/kvm_vm.py |   18 ++
 1 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/client/virt/kvm_vm.py b/client/virt/kvm_vm.py
index 19d016f..6bc1ae6 100644
--- a/client/virt/kvm_vm.py
+++ b/client/virt/kvm_vm.py
@@ -272,7 +272,7 @@ class VM(virt_vm.BaseVM):
         def add_smp(help, smp):
             return " -smp %s" % smp
 
-        def add_cdrom(help, filename, index=None, format=None):
+        def add_cdrom(help, filename, index=None, format=None, bus=None):
             if has_option(help, "drive"):
                 name = None;
                 dev = "";
@@ -290,8 +290,9 @@ class VM(virt_vm.BaseVM):
                 if format is not None and format.startswith("scsi-"):
                     # handles scsi-{hd, cd, disk, block, generic} targets
                     name = "virtio-scsi-cd%s" % index
-                    dev += (" -device %s,drive=%s,bus=virtio_scsi_pci.0" %
+                    dev += (" -device %s,drive=%s" %
                             (format, name))
+                    dev += _add_option("bus", "virtio_scsi_pci%d.0" % bus)
                     format = "none"
                     index = None
                 cmd = " -drive file='%s',media=cdrom" % filename
@@ -853,18 +854,11 @@ class VM(virt_vm.BaseVM):
             cd_format = params.get("cd_format", "")
             cdrom_params = params.object_params(cdrom)
             iso = cdrom_params.get("cdrom")
+            bus = None
             if cd_format == "ahci" and not have_ahci:
                 qemu_cmd += " -device ahci,id=ahci"
                 have_ahci = True
-            if cd_format.startswith("scsi-"):
-                bus = cdrom_params.get("drive_bus")
-                if bus and bus not in virtio_scsi_pcis:
-                    qemu_cmd += " -device virtio-scsi,id=%s" % bus
-                    virtio_scsi_pcis.append(bus)
-                elif not virtio_scsi_pcis:
-                    qemu_cmd += " -device virtio-scsi,id=virtio_scsi_pci0"
-                    virtio_scsi_pcis.append("virtio_scsi_pci0")
-            if cd_format.startswith("scsi-"):
+            if cd_format and cd_format.startswith("scsi-"):
                 try:
                     bus = int(cdrom_params.get("drive_bus", 0))
                 except ValueError:
@@ -876,7 +870,7 @@ class VM(virt_vm.BaseVM):
             if iso:
                 qemu_cmd += add_cdrom(help, virt_utils.get_path(root_dir, iso),
                                       cdrom_params.get("drive_index"),
-                                      cd_format)
+                                      cd_format, bus)
 
             # We may want to add {floppy_otps} parameter for -fda
             # {fat:floppy:}/path/. However vvfat is not usually recommended.
-- 
1.7.7.6
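
The hunk above relies on an _add_option() helper that is not part of this
diff; a minimal sketch of what the calls assume (the actual helper in
kvm_vm.py may differ):

def _add_option(option, value):
    """ Emit ",option=value" when a value is set, nothing when it is None """
    if value is None:
        return ""
    return ",%s=%s" % (option, value)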



[PATCH 3/3] virt.kvm_vm: Fix usb2 cdrom in qemu_cmd

2012-05-26 Thread Lukáš Doktor
Fixes missing bus and port for usb2 cdroms.

Signed-off-by: Lukáš Doktor ldok...@redhat.com
---
 client/virt/guest-hw.cfg.sample |2 ++
 client/virt/kvm_vm.py   |   12 +---
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/client/virt/guest-hw.cfg.sample b/client/virt/guest-hw.cfg.sample
index 655ac9b..d6a5ae2 100644
--- a/client/virt/guest-hw.cfg.sample
+++ b/client/virt/guest-hw.cfg.sample
@@ -75,6 +75,8 @@ variants:
             usb_type_default-ehci = usb-ehci
         - usb_cdrom:
             cd_format=usb2
+            usbs += " default-ehci-cd"
+            usb_type_default-ehci-cd = usb-ehci
         - xenblk:
             # placeholder
 
diff --git a/client/virt/kvm_vm.py b/client/virt/kvm_vm.py
index 6bc1ae6..fcd3233 100644
--- a/client/virt/kvm_vm.py
+++ b/client/virt/kvm_vm.py
@@ -272,7 +272,8 @@ class VM(virt_vm.BaseVM):
         def add_smp(help, smp):
             return " -smp %s" % smp
 
-        def add_cdrom(help, filename, index=None, format=None, bus=None):
+        def add_cdrom(help, filename, index=None, format=None, bus=None,
+                      port=None):
             if has_option(help, "drive"):
                 name = None;
                 dev = "";
@@ -283,8 +284,10 @@ class VM(virt_vm.BaseVM):
                     index = None
                 if format == "usb2":
                     name = "usb2.%s" % index
-                    dev += " -device usb-storage,bus=ehci.0,drive=%s" % name
-                    dev += ",port=%d" % (int(index) + 1)
+                    dev += " -device usb-storage"
+                    dev += _add_option("bus", bus)
+                    dev += _add_option("port", port)
+                    dev += _add_option("drive", name)
                     format = "none"
                     index = None
                 if format is not None and format.startswith("scsi-"):
@@ -855,6 +858,9 @@ class VM(virt_vm.BaseVM):
             cdrom_params = params.object_params(cdrom)
             iso = cdrom_params.get("cdrom")
             bus = None
+            port = None
+            if cd_format == "usb2":
+                bus, port = get_free_usb_port(image_name, "ehci")
             if cd_format == "ahci" and not have_ahci:
                 qemu_cmd += " -device ahci,id=ahci"
                 have_ahci = True
-- 
1.7.7.6
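
With an _add_option() helper as sketched under patch 2/3, the new usb2 branch
builds its device string roughly like this (bus/port values assumed for
illustration):

dev = " -device usb-storage"
dev += _add_option("bus", "ehci0.0")    # e.g. from get_free_usb_port()
dev += _add_option("port", 1)
dev += _add_option("drive", "usb2.0")
# -> " -device usb-storage,bus=ehci0.0,port=1,drive=usb2.0"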



Re: [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order

2012-03-12 Thread Lukáš Doktor

Hi,

it caused problems, so I had to modify it a bit. It's already fixed and
applied upstream.


Regards,
Lukáš

On 12.3.2012 04:34, lei yang wrote:

However, you did the opposite thing, or you did it twice

commit 6e4b5cffe999714357116884fcc4eb27fae41260
Author: Lucas Meneghel Rodrigues l...@redhat.com
Date:   Wed Feb 29 18:47:14 2012 -0300

    Revert "tests.cfg.sample: change import order"

 This reverts commit e64b17d7a15602db0cd26ec55ccc902010985d0c,
 as it's causing problems with the test execution order.

 Signed-off-by: Lucas Meneghel Rodrigues

diff --git a/client/tests/kvm/tests-shared.cfg.sample b/client/tests/kvm/tests-shared.cfg.sample
index bda982d..c6304b3 100644
--- a/client/tests/kvm/tests-shared.cfg.sample
+++ b/client/tests/kvm/tests-shared.cfg.sample
@@ -5,11 +5,11 @@

  # Include the base config files.
  include base.cfg
+include subtests.cfg
  include guest-os.cfg
  include guest-hw.cfg
  include cdkeys.cfg
  include virtio-win.cfg
-include subtests.cfg

  # Virtualization type (kvm or libvirt)
  vm_type = kvm


Lei


On Tue, Feb 28, 2012 at 2:42 AM, Lukas Doktor ldok...@redhat.com wrote:

Currently subtests.cfg is processed first and then all the other configs. My
test needs to override the smp parameter in some variant, which is currently
impossible.

In words, the current order means: we define subtests variants, then we
specify base, guest and other details. In the end we limit what
we want to execute.

My proposed order enables forcing base/guest params in subtest variants.

In words, this means: we specify base, guest system, cdkeys, etc. and in
the end we define subtests with various variants. Then we limit what
we actually want to execute, but now a subtest can force various base/guest
settings.

Signed-off-by: Lukas Doktorldok...@redhat.com
---
  client/tests/kvm/tests-shared.cfg.sample |2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/client/tests/kvm/tests-shared.cfg.sample b/client/tests/kvm/tests-shared.cfg.sample
index c6304b3..bda982d 100644
--- a/client/tests/kvm/tests-shared.cfg.sample
+++ b/client/tests/kvm/tests-shared.cfg.sample
@@ -5,11 +5,11 @@

  # Include the base config files.
  include base.cfg
-include subtests.cfg
  include guest-os.cfg
  include guest-hw.cfg
  include cdkeys.cfg
  include virtio-win.cfg
+include subtests.cfg

  # Virtualization type (kvm or libvirt)
  vm_type = kvm
--
1.7.7.6



Re: [PATCH 3/4] [KVM-autotest] tests.cfg.sample: change import order

2012-03-12 Thread Lukáš Doktor
Yes, as I mentioned before, I wanted to put subtests.cfg after the other
imports, but it caused a lot of trouble. So I found another solution
without changing the import order.


On 12.3.2012 08:49, lei yang wrote:

On Mon, Mar 12, 2012 at 3:15 PM, Lukáš Doktor ldok...@redhat.com wrote:

Hi,

it caused problems so I had to modify it a bit. It's already fixed and
applied in upstream.



You mean you want to put include subtests.cfg at the top or at the end?
From your idea you seem to want it at the end, to change some
parameters easily.

After I pulled the tree,

I got something like (git show 6e4b5cffe999714357116884fcc4eb27fae41260)

include base.cfg
include subtests.cfg
include guest-os.cfg
include guest-hw.cfg
include cdkeys.cfg
include virtio-win.cfg

but I thought you may want it to be like

include base.cfg
include guest-os.cfg
include guest-hw.cfg
include cdkeys.cfg
include virtio-win.cfg
include subtests.cfg

or am I wrong?


Re: [PATCH 2/2] adds cgroup tests on KVM guests with first test

2011-11-03 Thread Lukáš Doktor

On 3.11.2011 07:04, Suqin wrote:

On 09/23/2011 12:29 AM, Lukas Doktor wrote:

basic structure:
  * similar to the general client/tests/cgroup/ test (imports from the
    cgroup_common.py)
  * uses classes for better handling
  * improved logging and error handling
  * checks/repairs the guests after each subtest
  * subtest mapping is specified in the test dictionary in cgroup.py
  * allows specifying tests/repetitions in tests_base.cfg
    (cgroup_tests = "re1[:loops] re2[:loops] ...")

TestBlkioBandwidthWeight{Read,Write}:
  * Two similar tests for blkio.weight functionality inside the guest using
    direct io and the virtio_blk driver
  * Function:
  1) On 2 VMs adds a small (10MB) virtio_blk disk
  2) Assigns each to a different cgroup and sets blkio.weight 100/1000
  3) Runs dd with flag=direct (read/write) from the virtio_blk disk
     repeatedly
  4) After 1 minute checks the results. If the ratio is better than 1:3,
     the test passes (see the sketch below)
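
A sketch of the ratio check from step 4 (variable names assumed; both
throughputs measured in the same units, error module as used by the test):

# weights 100 vs 1000 should skew the dd throughput at least 1:3
ratio = float(throughput_1000) / float(throughput_100)
if ratio < 3:
    raise error.TestFail("blkio.weight not enforced, ratio only 1:%.1f"
                         % ratio)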

Signed-off-by: Lukas Doktor ldok...@redhat.com
---
  client/tests/kvm/subtests.cfg.sample |    7 +
  client/tests/kvm/tests/cgroup.py     |  316 ++

  2 files changed, 323 insertions(+), 0 deletions(-)
  create mode 100644 client/tests/cgroup/__init__.py
  create mode 100644 client/tests/kvm/tests/cgroup.py

diff --git a/client/tests/cgroup/__init__.py b/client/tests/cgroup/__init__.py
new file mode 100644
index 000..e69de29
diff --git a/client/tests/kvm/subtests.cfg.sample b/client/tests/kvm/subtests.cfg.sample
index 74e550b..79e0656 100644
--- a/client/tests/kvm/subtests.cfg.sample
+++ b/client/tests/kvm/subtests.cfg.sample
@@ -848,6 +848,13 @@ variants:
         only Linux
         type = iofuzz
 
+    - cgroup:
+        type = cgroup
+        # cgroup_tests = "re1[:loops] re2[:loops] ..."
+        cgroup_tests = ".*:1"
+        vms += " vm2"
+        extra_params += " -snapshot"


You run blkio with snapshot? Sometimes we need to cgroup real guests, not
snapshots.

The actual tested disks are added inside the test with the additional
parameter snapshot=off. I'm using snapshot on the main disk only because
the VM dies quite often (usually during the cleanup part).





+
     - virtio_console: install setup image_copy unattended_install.cdrom
         only Linux
         vms = ''
diff --git a/client/tests/kvm/tests/cgroup.py b/client/tests/kvm/tests/cgroup.py
new file mode 100644
index 000..4d0ec43
--- /dev/null
+++ b/client/tests/kvm/tests/cgroup.py
@@ -0,0 +1,316 @@
+"""
+cgroup autotest test (on KVM guest)
+@author: Lukas Doktor ldok...@redhat.com
+@copyright: 2011 Red Hat, Inc.
+"""
+import logging, re, sys, tempfile, time, traceback
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.tests.cgroup.cgroup_common import Cgroup, CgroupModules
+
+
+def run_cgroup(test, params, env):
+    """
+    Tests the cgroup functions on KVM guests.
+     * Uses variable tests (marked by TODO comment) to map the subtests
+    """
+    vms = None
+    tests = None
+
+    # Tests
+    class _TestBlkioBandwidth:
+        """
+        BlkioBandwidth dummy test
+         * Use it as a base class to an actual test!
+         * self.dd_cmd and attr '_set_properties' have to be implemented
+         * It prepares 2 vms and runs self.dd_cmd to simultaneously stress the
+           machines. After 1 minute it kills the dd and gathers the throughput
+           information.
+        """
+        def __init__(self, vms, modules):
+            """
+            Initialization
+            @param vms: list of vms
+            @param modules: initialized cgroup module class
+            """
+            self.vms = vms          # Virt machines
+            self.modules = modules  # cgroup module handler
+            self.blkio = Cgroup('blkio', '')    # cgroup blkio handler
+            self.files = []     # Temporary files (files of virt disks)
+            self.devices = []   # Temporary virt devices (PCI drive 1 per vm)
+            self.dd_cmd = None  # DD command used to test the throughput
+
+        def cleanup(self):
+            """
+            Cleanup
+            """
+            err = ""
+            try:
+                for i in range(2):
+                    vms[i].monitor.cmd("pci_del %s" % self.devices[i])
+                    self.files[i].close()
+            except Exception, inst:
+                err += "\nCan't remove PCI drive: %s" % inst
+            try:
+                del(self.blkio)
+            except Exception, inst:
+                err += "\nCan't remove Cgroup: %s" % inst
+
+            if err:
+                logging.error("Some parts of cleanup failed:%s", err)
+                raise error.TestError("Some parts of cleanup failed:%s" % err)
+
+        def init(self):
+            """
+            Initialization
+             * assigns vm1 and vm2 into cgroups and sets the properties
+             * creates a new virtio device and adds it into vms
+            """
+            if

Re: [PATCH 1/4] [kvm-autotest] cgroup-kvm: add_*_drive / rm_drive

2011-10-10 Thread Lukáš Doktor
I thought about that. But pci_add is not very stable and it's not
supported in QMP (as far as I read), with a note that this way is buggy
and should be rewritten completely. So I placed it here to let it
mature, and then I can move it into utils.


Regards,
Lukáš

On 10.10.2011 12:26, Jiri Zupka wrote:

This is a useful function. It could live in kvm utils.

- Original Message -

* functions for adding and removing a drive to/from a vm using a host file
or a host scsi_debug device.

Signed-off-by: Lukas Doktor ldok...@redhat.com
---
  client/tests/kvm/tests/cgroup.py |  125
  -
  1 files changed, 108 insertions(+), 17 deletions(-)

diff --git a/client/tests/kvm/tests/cgroup.py b/client/tests/kvm/tests/cgroup.py
index b9a10ea..d6418b5 100644
--- a/client/tests/kvm/tests/cgroup.py
+++ b/client/tests/kvm/tests/cgroup.py
@@ -17,6 +17,108 @@ def run_cgroup(test, params, env):
     vms = None
     tests = None
 
+    # Func
+    def get_device_driver():
+        """
+        Discovers the used block device driver {ide, scsi, virtio_blk}
+        @return: Used block device driver {ide, scsi, virtio}
+        """
+        if test.tagged_testname.count('virtio_blk'):
+            return "virtio"
+        elif test.tagged_testname.count('scsi'):
+            return "scsi"
+        else:
+            return "ide"
+
+
+    def add_file_drive(vm, driver=get_device_driver(), host_file=None):
+        """
+        Hot-add a drive based on file to a vm
+        @param vm: Desired VM
+        @param driver: which driver should be used (default: same as in test)
+        @param host_file: Which file on host is the image (default: create new)
+        @return: Tuple(ret_file, device)
+                 ret_file: created file handler (None if not created)
+                 device: PCI id of the virtual disk
+        """
+        if not host_file:
+            host_file = tempfile.NamedTemporaryFile(prefix="cgroup-disk-",
+                                                    suffix=".iso")
+            utils.system("dd if=/dev/zero of=%s bs=1M count=8 &>/dev/null"
+                         % (host_file.name))
+            ret_file = host_file
+        else:
+            ret_file = None
+
+        out = vm.monitor.cmd("pci_add auto storage file=%s,if=%s,snapshot=off,"
+                             "cache=off" % (host_file.name, driver))
+        dev = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), function \d+',
+                        out)
+        if not dev:
+            raise error.TestFail("Can't add device(%s, %s, %s): %s"
+                                 % (vm, host_file.name, driver, out))
+        device = "%s:%s:%s" % dev.groups()
+        return (ret_file, device)
+
+
+    def add_scsi_drive(vm, driver=get_device_driver(), host_file=None):
+        """
+        Hot-add a drive based on scsi_debug device to a vm
+        @param vm: Desired VM
+        @param driver: which driver should be used (default: same as in test)
+        @param host_file: Which dev on host is the image (default: create new)
+        @return: Tuple(ret_file, device)
+                 ret_file: string of the created dev (None if not created)
+                 device: PCI id of the virtual disk
+        """
+        if not host_file:
+            if utils.system_output("lsmod | grep scsi_debug -c") == "0":
+                utils.system("modprobe scsi_debug dev_size_mb=8 add_host=0")
+            utils.system("echo 1 > /sys/bus/pseudo/drivers/scsi_debug/add_host")
+            host_file = utils.system_output("ls /dev/sd* | tail -n 1")
+            # Enable idling in scsi_debug drive
+            utils.system("echo 1 > /sys/block/%s/queue/rotational" % host_file)
+            ret_file = host_file
+        else:
+            # Don't remove this device during cleanup
+            # Reenable idling in scsi_debug drive (in case it's not)
+            utils.system("echo 1 > /sys/block/%s/queue/rotational" % host_file)
+            ret_file = None
+
+        out = vm.monitor.cmd("pci_add auto storage file=%s,if=%s,snapshot=off,"
+                             "cache=off" % (host_file, driver))
+        dev = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), function \d+',
+                        out)
+        if not dev:
+            raise error.TestFail("Can't add device(%s, %s, %s): %s"
+                                 % (vm, host_file, driver, out))
+        device = "%s:%s:%s" % dev.groups()
+        return (ret_file, device)
+
+
+    def rm_drive(vm, host_file, device):
+        """
+        Remove drive from vm and device on disk
+        ! beware to remove scsi devices in reverse order !
+        """
+        vm.monitor.cmd("pci_del %s" % device)
+
+        if isinstance(host_file, file):     # file
+            host_file.close()
+        elif isinstance(host_file, str):    # scsi device
+            utils.system("echo -1 > /sys/bus/pseudo/drivers/scsi_debug/add_host")
+        else:   # custom file, do nothing
+
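
A usage sketch of the helpers above, per their docstrings (the test body is
elided):

    (host_file, device) = add_scsi_drive(vm)    # hot-add a scsi_debug disk
    try:
        pass    # ... exercise the new drive from the guest ...
    finally:
        rm_drive(vm, host_file, device)  # scsi devices: remove in reverse order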

Re: [kvm-autotest] cgroup-kvm: Four new BlkioThrottle tests

2011-10-07 Thread Lukáš Doktor

On 7.10.2011 20:24, Lukas Doktor wrote:

This is a patchset with four new tests for KVM-specific cgroup testing. I also
made some modifications to the (general) cgroup_common library which make
cgroup testing more readable and safer to execute. Please find the details in
each patch.

Also please beware of qemu-kvm bugs which occurred for me (qemu-kvm 0.15.0
F17), which led to qemu SEGFAULTs or even to dysfunction (qemu-kvm 0.14 F15).
I'll file them in Bugzilla on Monday.

This was also sent as a github pull request, so if you feel like commenting on
the pull request, be my guest:
https://github.com/autotest/autotest/pull/33

Best regards,
Lukáš

Already one minor change; please follow the patches on github...

diff --git a/client/tests/kvm/tests/cgroup.py b/client/tests/kvm/tests/cgroup.py
index 7f00a6b..7407e29 100644
--- a/client/tests/kvm/tests/cgroup.py
+++ b/client/tests/kvm/tests/cgroup.py
@@ -409,7 +409,13 @@ def run_cgroup(test, params, env):
                 raise error.TestError("Corrupt class, aren't you trying to run"
                                       " parent _TestBlkioThrottle() function?")
 
-            (self.files, self.devices) = add_scsi_drive(self.vm)
+            if get_device_driver() == "ide":
+                logging.warn("The main disk for this VM is ide which doesn't"
+                             " support hot-plug. Using virtio_blk instead")
+                (self.files, self.devices) = add_scsi_drive(self.vm,
+                                                            driver="virtio")
+            else:
+                (self.files, self.devices) = add_scsi_drive(self.vm)
             try:
                 dev = utils.system_output("ls -l %s" % self.files).split()[4:6]
                 dev[0] = dev[0][:-1]    # Remove trailing ','



Re: [PATCH] virt.virt_env_process: Abstract screenshot production

2011-09-26 Thread Lukáš Doktor

Hi,

vm.screendump() doesn't have the parameter 'debug'.

So you should either add a debug parameter to kvm_vm.py or remove this
parameter (and perhaps add debug=False into kvm_vm.py).
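
A minimal sketch of the first option, assuming VM.screendump() simply
forwards to the monitor (the real method in kvm_vm.py may do more):

    def screendump(self, filename, debug=True):
        if self.monitor:
            return self.monitor.screendump(filename=filename, debug=debug)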


Regards,
Lukáš


On 24.9.2011 01:27, Lucas Meneghel Rodrigues wrote:

In order to ease work with other virtualization types,
make virt_env_process to call vm.screendump() instead
of vm.monitor.screendump().

Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
  client/virt/virt_env_process.py |6 +++---
  1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/client/virt/virt_env_process.py b/client/virt/virt_env_process.py
index 789fa01..a2e 100644
--- a/client/virt/virt_env_process.py
+++ b/client/virt/virt_env_process.py
@@ -110,7 +110,7 @@ def preprocess_vm(test, params, env, name):
     scrdump_filename = os.path.join(test.debugdir, "pre_%s.ppm" % name)
     try:
         if vm.monitor and params.get("take_regular_screendumps") == "yes":
-            vm.monitor.screendump(scrdump_filename, debug=False)
+            vm.screendump(scrdump_filename, debug=False)
     except kvm_monitor.MonitorError, e:
         logging.warn(e)
 
@@ -151,7 +151,7 @@ def postprocess_vm(test, params, env, name):
     scrdump_filename = os.path.join(test.debugdir, "post_%s.ppm" % name)
     try:
         if vm.monitor and params.get("take_regular_screenshots") == "yes":
-            vm.monitor.screendump(scrdump_filename, debug=False)
+            vm.screendump(scrdump_filename, debug=False)
     except kvm_monitor.MonitorError, e:
         logging.warn(e)
 
@@ -460,7 +460,7 @@ def _take_screendumps(test, params, env):
             if not vm.is_alive():
                 continue
             try:
-                vm.monitor.screendump(filename=temp_filename, debug=False)
+                vm.screendump(filename=temp_filename, debug=False)
             except kvm_monitor.MonitorError, e:
                 logging.warn(e)
                 continue




Re: [PATCH] virt.virt_env_process: Abstract screenshot production

2011-09-26 Thread Lukáš Doktor

On 26.9.2011 15:10, Lucas Meneghel Rodrigues wrote:

On 09/26/2011 09:27 AM, Lukáš Doktor wrote:

Hi,

vm.screendump() doesn't have parameter 'debug'.


My fault, the screendump method on both qmp and human monitors does 
take this parameter, and since the implementation on virt_env_process 
was using the monitor method directly, I forgot to add the param to 
screendump.


It's fixed now: debug=True by default; the only place where it is
False is the screendump thread (to avoid polluting the logs).


https://github.com/autotest/autotest/commit/49b1d9b65ab0061aaf631c19620987bc59592af6 


We used the same fix ;-)

Acked-by: Lukáš Doktor ldok...@redhat.com






Re: [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests

2011-09-23 Thread Lukáš Doktor

On 23.9.2011 15:36, Lucas Meneghel Rodrigues wrote:

On 09/22/2011 01:29 PM, Lukas Doktor wrote:

Hi guys,

Do you remember the discussion about cgroup testing in autotest vs.
LTP? I hope there won't be any doubts about this one, as the cgroup test
(+ first 2 subtests) is strictly focused on cgroup features
enforced on KVM guest systems. Also, more subtests will follow if you
approve the test structure (blkio_throttle, memory, cpus...).


Yes, absolutely.



No matter whether we drop or keep the general 'cgroup' test. The 
'cgroup_common.py' library can be imported either from 
'client/tests/cgroup/' directory or directly from 
'client/tests/kvm/tests/' directory.


I don't think we really need to drop the test. It's useful anyway,
even though there are LTP tests that sort of cover it.

Well, I have some other ones in a queue. My focus is now on the KVM-specific
tests, but I might send a couple more general cgroup tests later...






The modifications of the 'cgroup_common.py' library are backward
compatible with the general cgroup test.


See the commits for details.


Now that we moved to github, I'd like to go with the following model 
of contribution:


1) You create a user on github if you don't have one
2) Create a public autotest fork
3) Commit the changes to a topic branch appropriately named
4) Make a pull request to autotest:master
5) You still send the patches to the mailing list normally, but
mention the pull request URL in the message.


That's it, we are still trying out things, so if this doesn't work 
out, we'll update the process. Is it possible that you do that and 
rebase your patches?


Oh, and since patchwork is still out due to DNS outage, could you guys 
re-spin your client-server patches using the same process I mentioned? 
Thank you!


Lucas


Hi Lucas,

pull request sent:
https://github.com/autotest/autotest/pull/6

I'll remind Jiří to do the same with the client-server patches...

Cheers,
Lukáš


Re: [AUTOTEST][PATCH][KVM] Add test for problem with killing guest when network is under load.

2011-08-29 Thread Lukáš Doktor

Thanks, this patch works well.

Acked-by: Lukas Doktor ldok...@redhat.com

On 26.8.2011 10:31, Jiří Župka wrote:

This patch contains two tests.
1) Try to kill the guest when the guest network is under load.
2) Try to kill the guest after multiple additions and removals of network drivers.

Signed-off-by: Jiří Župka jzu...@redhat.com
---
  client/tests/kvm/tests_base.cfg.sample|   18 
  client/virt/tests/netstress_kill_guest.py |  147 +
  2 files changed, 165 insertions(+), 0 deletions(-)
  create mode 100644 client/virt/tests/netstress_kill_guest.py

diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index ec1b48d..a7ff29f 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -845,6 +845,24 @@ variants:
         restart_vm = yes
         kill_vm_on_error = yes
 
+    - netstress_kill_guest: install setup unattended_install.cdrom
+        only Linux
+        type = netstress_kill_guest
+        image_snapshot = yes
+        nic_mode = tap
+        # There should be enough vms for build topology.
+        variants:
+            -driver:
+                mode = driver
+            -load:
+                mode = load
+                netperf_files = netperf-2.4.5.tar.bz2 wait_before_data.patch
+                packet_size = 1500
+                setup_cmd = "cd %s && tar xvfj netperf-2.4.5.tar.bz2 && cd netperf-2.4.5 && patch -p0 < ../wait_before_data.patch && ./configure && make"
+                clean_cmd = " while killall -9 netserver; do True test; done;"
+                netserver_cmd = " %s/netperf-2.4.5/src/netserver"
+                netperf_cmd = "%s/netperf-2.4.5/src/netperf -t %s -H %s -l 60 -- -m %s"
+
     - set_link: install setup image_copy unattended_install.cdrom
         type = set_link
         test_timeout = 1000
diff --git a/client/virt/tests/netstress_kill_guest.py b/client/virt/tests/netstress_kill_guest.py
new file mode 100644
index 000..7452e09
--- /dev/null
+++ b/client/virt/tests/netstress_kill_guest.py
@@ -0,0 +1,147 @@
+import logging, os, signal, re, time
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.virt import aexpect, virt_utils
+
+
+def run_netstress_kill_guest(test, params, env):
+    """
+    Try to stop a network interface in the VM while another VM tries to
+    communicate with it.
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    def get_corespond_ip(ip):
+        """
+        Get the local ip address which is used to contact ip.
+
+        @param ip: Remote ip
+        @return: Local corresponding IP.
+        """
+        result = utils.run("ip route get %s" % (ip)).stdout
+        ip = re.search("src (.+)", result)
+        if ip is not None:
+            ip = ip.groups()[0]
+        return ip
+
+
+    def get_ethernet_driver(session):
+        """
+        Get the driver of the network cards.
+
+        @param session: session to machine
+        """
+        modules = []
+        out = session.cmd("ls -l /sys/class/net/*/device/driver/module")
+        for module in out.split("\n"):
+            modules.append(module.split("/")[-1])
+        modules.remove("")
+        return set(modules)
+
+
+    def kill_and_check(vm):
+        vm_pid = vm.get_pid()
+        vm.destroy(gracefully=False)
+        time.sleep(2)
+        try:
+            os.kill(vm_pid, 0)
+            logging.error("VM is not dead.")
+            raise error.TestFail("Problem with killing guest.")
+        except OSError:
+            logging.info("VM is dead.")
+
+
+    def netload_kill_problem(session_serial):
+        netperf_dir = os.path.join(os.environ['AUTODIR'], "tests/netperf2")
+        setup_cmd = params.get("setup_cmd")
+        clean_cmd = params.get("clean_cmd")
+        firewall_flush = "iptables -F"
+
+        try:
+            utils.run(firewall_flush)
+        except:
+            logging.warning("Could not flush firewall rules on host")
+
+        try:
+            session_serial.cmd(firewall_flush)
+        except aexpect.ShellError:
+            logging.warning("Could not flush firewall rules on guest")
+
+        for i in params.get("netperf_files").split():
+            vm.copy_files_to(os.path.join(netperf_dir, i), "/tmp")
+
+        guest_ip = vm.get_address(0)
+        server_ip = get_corespond_ip(guest_ip)
+
+        logging.info("Setup and run netperf on host and guest")
+        session_serial.cmd(setup_cmd % "/tmp", timeout=200)
+        utils.run(setup_cmd % netperf_dir)
+
+        try:
+            session_serial.cmd(clean_cmd)
+        except:
+            pass
+        session_serial.cmd(params.get("netserver_cmd") % "/tmp")
+
+        utils.run(clean_cmd, ignore_status=True)
+        utils.run(params.get("netserver_cmd") % netperf_dir)
+
+        server_netperf_cmd = params.get("netperf_cmd") % (netperf_dir,
+                                                          "TCP_STREAM",
+                                                          guest_ip,

Re: [AUTOTEST][KVM][PATCH] Add test for testing of killing guest when network is under usage.

2011-08-24 Thread Lukáš Doktor

Hi Jiří,

Do you have any further plans with this test? I'm not convinced that
netperf is necessary merely as a stress generator. You could use netcat or a
simple python udp send/recv (flood attack ;-)).


On 17.8.2011 16:17, Jiří Župka wrote:

This patch contains two tests.
1) Try to kill the guest when the guest network is under load.
2) Try to kill the guest after multiple additions and removals of network drivers.

Signed-off-by: Jiří Župka jzu...@redhat.com
---
  client/tests/kvm/tests_base.cfg.sample|   23 +
  client/virt/tests/netstress_kill_guest.py |  146 +
  2 files changed, 169 insertions(+), 0 deletions(-)
  create mode 100644 client/virt/tests/netstress_kill_guest.py

diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index ec1b48d..2c88088 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -845,6 +845,29 @@ variants:
         restart_vm = yes
         kill_vm_on_error = yes
 
+    - netstress_kill_guest: install setup unattended_install.cdrom
+        only Linux
+        type = netstress_kill_guest
+        image_snapshot = yes
+        nic_mode = tap
+        # There should be enough vms for build topology.
+        variants:
+            -driver:
+                mode = driver
+            -load:
+                mode = load
+                netperf_files = netperf-2.4.5.tar.bz2 wait_before_data.patch
+                packet_size = 1500
+                setup_cmd = "cd %s && tar xvfj netperf-2.4.5.tar.bz2 && cd netperf-2.4.5 && patch -p0 < ../wait_before_data.patch && ./configure && make"
+                clean_cmd = " while killall -9 netserver; do True test; done;"
+                netserver_cmd = " %s/netperf-2.4.5/src/netserver"
+                netperf_cmd = "%s/netperf-2.4.5/src/netperf -t %s -H %s -l 60 -- -m %s"
+                variants:
+                    - vhost:
+                        netdev_extra_params = "vhost=on"



You might add a modprobe vhost-net command, as vhost-net might not be
loaded by default.
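
E.g. a one-liner in the test setup (a sketch; error handling left to taste):

utils.system("modprobe vhost-net", ignore_status=True)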



+                    - vhost-no:
+                        netdev_extra_params = ""
+
     - set_link: install setup image_copy unattended_install.cdrom
         type = set_link
         test_timeout = 1000
diff --git a/client/virt/tests/netstress_kill_guest.py b/client/virt/tests/netstress_kill_guest.py
new file mode 100644
index 000..7daec95
--- /dev/null
+++ b/client/virt/tests/netstress_kill_guest.py
@@ -0,0 +1,146 @@
+import logging, os, signal, re, time
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.virt import aexpect, virt_utils
+
+
+def run_netstress_kill_guest(test, params, env):
+    """
+    Try to stop a network interface in the VM while another VM tries to
+    communicate with it.
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    def get_corespond_ip(ip):
+        """
+        Get the local ip address which is used to contact ip.
+
+        @param ip: Remote ip
+        @return: Local corresponding IP.
+        """
+        result = utils.run("ip route get %s" % (ip)).stdout
+        ip = re.search("src (.+)", result)
+        if ip is not None:
+            ip = ip.groups()[0]
+        return ip
+
+
+    def get_ethernet_driver(session):
+        """
+        Get the driver of the network cards.
+
+        @param session: session to machine
+        """
+        modules = []
+        out = session.cmd("ls -l /sys/class/net/*/device/driver/module")
+        for module in out.split("\n"):
+            modules.append(module.split("/")[-1])
+        modules.remove("")
+        return set(modules)
+
+
+    def kill_and_check(vm):
+        vm_pid = vm.get_pid()
+        vm.destroy(gracefully=False)
+        time.sleep(2)
+        try:
+            os.kill(vm_pid, 0)
+            logging.error("VM is not dead.")
+            raise error.TestFail("Problem with killing guest.")
+        except OSError:
+            logging.info("VM is dead.")
+
+
+    def netload_kill_problem(session_serial):


I think you should clean up this function. I believe it would be better and
more readable if you first get all the params/variables, then prepare
the host/guests, and after all of this start the test. See the comments
further...



+        netperf_dir = os.path.join(os.environ['AUTODIR'], "tests/netperf2")
+        setup_cmd = params.get("setup_cmd")
+        clean_cmd = params.get("clean_cmd")
+
+        firewall_flush = "iptables -F"
+        session_serial.cmd_output(firewall_flush)
+        try:
+            utils.run("iptables -F")
You have the firewall_flush command string; why not use it here too? Also,
you should either warn everywhere or not at all... (you log the failure
when flushing the guest but not here).



+        except:
+            pass
+
+        for i in params.get("netperf_files").split():
+            vm.copy_files_to(os.path.join(netperf_dir, i), "/tmp")
+
+        try:
+            session_serial.cmd(firewall_flush)
+

Re: [PATCH 1/4] [NEW] cgroup test * general smoke_test + module dependend subtests (memory test included) * library for future use in other tests (kvm)

2011-08-21 Thread Lukáš Doktor

#SNIP


+        pwd = item.mk_cgroup()
+        if pwd == None:
+            logging.error("test_memory: Can't create cgroup")
+            return -1
+
+        logging.debug("test_memory: Memory filling test")
+
+        f = open('/proc/meminfo','r')


Not a clean way to do this... It is better to use a regular expression.
But this is absolutely not important.



OK; anyway, you're trying to get a get_mem_usage() function into utils.
I'll use it then.

+        mem = f.readline()
+        while not mem.startswith("MemFree"):
+            mem = f.readline()
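
A regex-based sketch of the suggestion (MemFree in /proc/meminfo is reported
in kB):

import re
meminfo = open('/proc/meminfo', 'r').read()
mem_free = int(re.search(r'MemFree:\s*(\d+)', meminfo).group(1))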


#SNIP


+            logging.error("cg.smoke_test[%d]: Can't remove cgroup directory",
+                          part)
+            return -1
+
+        # Finish the process
+        part += 1
+        ps.stdin.write('\n')
+        time.sleep(2)


There should be a bigger timeout. This sometimes causes problems:
the process ends the correct way, but not within the timeout.


OK. Lucas, can you please change it in the patchset (if you intend to accept
it)? 10 seconds seems to be a safer deadline, thanks.



+        if (ps.poll() == None):
+            logging.error("cg.smoke_test[%d]: Process is not finished", part)
+            return -1
+
+        return 0
+
+


#SNIP

Thank you, Jiří.

kind regards,
Lukáš


Re: [Autotest] [PATCH] KVM-test: Add hdparm subtest

2011-08-03 Thread Lukáš Doktor

Reviewed, looks sane.

Lukáš

On 2.8.2011 05:39, Amos Kong wrote:

This test uses 'hdparm' to set the disk device to low/high
performance status and compares the reading speed.
The emulated device should pass all the tests.

Signed-off-by: Feng Yangfy...@redhat.com
Signed-off-by: Amos Kongak...@redhat.com
---
  client/tests/kvm/tests/hdparm.py   |   84 
  client/tests/kvm/tests_base.cfg.sample |   13 +
  2 files changed, 97 insertions(+), 0 deletions(-)
  create mode 100644 client/tests/kvm/tests/hdparm.py

diff --git a/client/tests/kvm/tests/hdparm.py b/client/tests/kvm/tests/hdparm.py
new file mode 100644
index 000..79ce5db
--- /dev/null
+++ b/client/tests/kvm/tests/hdparm.py
@@ -0,0 +1,84 @@
+import re, logging
+from autotest_lib.client.common_lib import error
+
+
+@error.context_aware
+def run_hdparm(test, params, env):
+    """
+    Test hdparm setting on linux guest os, this case will:
+    1) Set/record parameters value of hard disk to low performance status.
+    2) Perform device/cache read timings then record the results.
+    3) Set/record parameters value of hard disk to high performance status.
+    4) Perform device/cache read timings then compare two results.
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+
+    def check_setting_result(set_cmd, timeout):
+        params = re.findall("(-[a-zA-Z])([0-9]*)", set_cmd)
+        disk = re.findall("(\/+[a-z]*\/[a-z]*$)", set_cmd)[0]
+        for (param, value) in params:
+            cmd = "hdparm %s %s" % (param, disk)
+            (s, output) = session.cmd_status_output(cmd, timeout)
+            if s != 0:
+                raise error.TestError("Fail to get %s parameter value\n"
+                                      "Output is: %s" % (param, output))
+            if value not in output:
+                raise error.TestFail("Fail to set %s parameter to value: %s"
+                                     % (param, value))
+
+    def perform_read_timing(disk, timeout, num=5):
+        results = 0
+        for i in range(num):
+            cmd = params.get("device_cache_read_cmd") % disk
+            (s, output) = session.cmd_status_output(cmd, timeout)
+            if s != 0:
+                raise error.TestFail("Fail to perform device/cache read"
+                                     " timings \nOutput is: %s\n" % output)
+            logging.info("Output of device/cache read timing check (%s of %s):"
+                         " %s" % (i + 1, num, output))
+            (result, post) = re.findall("= *([0-9]*.+[0-9]*) ([a-zA-Z]*)",
+                                        output)[1]
+            if post == "kB":
+                result = float(result) / 1024.0
+            results += float(result)
+        return results / num
+
+    vm = env.get_vm(params["main_vm"])
+    vm.create()
+    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
+    try:
+        timeout = float(params.get("cmd_timeout", 60))
+        cmd = params.get("get_disk_cmd")
+        (s, output) = session.cmd_status_output(cmd)
+        disk = output.strip()
+
+        error.context("Setting hard disk to lower performance")
+        cmd = params.get("low_status_cmd") % disk
+        session.cmd(cmd, timeout)
+
+        error.context("Checking hard disk keyval under lower performance")
+        check_setting_result(cmd, timeout)
+        low_result = perform_read_timing(disk, timeout)
+        logging.info("Buffered disk read speed under low performance"
+                     " configuration: %s" % low_result)
+
+        error.context("Setting hard disk to higher performance")
+        cmd = params.get("high_status_cmd") % disk
+        session.cmd(cmd, timeout)
+
+        error.context("Checking hard disk keyval under higher performance")
+        check_setting_result(cmd, timeout)
+        high_result = perform_read_timing(disk, timeout)
+        logging.info("Buffered disk read speed under high performance"
+                     " configuration: %s" % high_result)
+
+        if not float(high_result) > float(low_result):
+            raise error.TestFail("High performance setting does not"
+                                 " increase read speed\n")
+        logging.debug("High performance setting increased read speed!")
+
+    finally:
+        if session:
+            session.close()
diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index d597b52..5491630 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -1115,6 +1115,19 @@ variants:
         image_snapshot = yes
         only Linux
 
+    - hdparm:
+        type = hdparm
+        get_disk_cmd = \ls /dev/[vhs]da
+        low_status_cmd = hdparm -a64 -d0 -u0 %s
+        device_cache_read_cmd = hdparm -tT %s
+        high_status_cmd = hdparm -a256 -d1 -u1 %s
+        cmd_timeout = 540
+        only Linux
+        virtio_blk:
+

Re: [KVM-AUTOTEST][PATCH 2/2][virtio-console] Fix compatibility with python 2.4.

2010-11-25 Thread Lukáš Doktor

Dne 23.11.2010 03:30, Amos Kong napsal(a):

- Jiří Župka jzu...@redhat.com wrote:


---


After applying this patch of yours, virtio_console still could not work with
older python.

Some things are not fixed, such as:
   return True if self.failed > 0 else False
   "PASS" if result[0] else "FAIL"
   ...

I'm testing with 'Python 2.4.3'


Hi,

this fixes only the GUEST (virtio_guest.py) side of the virtio_console 
test. (tested with python 2.4.3 and 2.4.6)


It's possible to fix the host side too, but distributions which support
only python 2.4 usually ship older versions of KVM without the
'-device' option we use on the HOST side of the test for creating the
devices. This change would only make the code less readable with minimal
benefit.


Anyway, if there's a real demand, we can fix the HOST side too.

Cheers,
Lukáš



  def close(self, file):
@@ -339,7 +341,7 @@ class VirtioGuest:
  if descriptor != None:



--
1.7.3.2



Re: [PATCH] KVM test: virtio_console test v2

2010-09-02 Thread Lukáš Doktor

On 2.9.2010 04:58, Lucas Meneghel Rodrigues wrote:

From: Lukáš Doktor ldok...@redhat.com

1) Starts VMs with the specified number of virtio console devices
2) Start smoke test
3) Start loopback test
4) Start performance test

This test uses an auxiliary script, console_switch.py, that is
copied to guests. This script has functions to send and write
data to virtio console ports. Details of each test can be found
on the docstrings for the test_* functions.

Changes from v1:
  * Style fixes
  * Whitespace cleanup
  * Docstring and message fixes
  * ID for char devices can't be a simple number, fix it

Tested with Fedora 13 guest/host, -smp 1.

Signed-off-by: Lukas Doktor ldok...@redhat.com
Signed-off-by: Jiri Zupka jzu...@redhat.com
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com


Thank you, Lucas. We are currently working on another version which is
going to be more similar to Amit Shah's C tests in order to speed up
further test development. We'll use your comments and python style.


Regards,
Lukáš


Re: [KVM-autotest] virtio_console test

2010-08-23 Thread Lukáš Doktor

Hi Amit,

On 23.8.2010 09:53, Amit Shah wrote:

On (Fri) Aug 20 2010 [16:12:51], Lukáš Doktor wrote:

On 20.8.2010 15:40, Lukas Doktor wrote:

Hi,

This patch adds a new test for virtio_console. It supports both
virtio_console types, serialport and console, and it contains three tests:
1) smoke
2) loopback
3) perf


This is great, thanks for the tests.

I was working with Lucas at the KVM Forum to get my virtio-console tests
integrated upstream. I have a few micro tests that test correctness of
various bits in the virtio_console code:

http://fedorapeople.org/gitweb?p=amitshah/public_git/test-virtserial.git


yes, I'm aware of your tests and the fedora page about virtio-console. I 
took inspiration from them ;-) (thanks)


It would be great to sync up with Lucas and add those tests to autotest
as well.


I went through the code trying to find tests missing in our autotest 
virtio_console.py test. Correct me if I'm wrong, but the missing tests are:


* variations on opening/writing/reading host/guest closed/open consoles 
and checking the right handling

* guest caching (if necessary)


Eventually, I'd also like to adapt the C code to python so that it
integrates with KVM-autotest.



Yes, either you or I can do that. I would just need the 
test-requirement-list.

Just a note about kernels:
serialport works great but console has big issues. Use kernels >=
2.6.35 for testing.


Can you tell me what the issues are? I test console io in the testsuite
mentioned above and it's been passing fine.



Sometimes on F13 there were Oopses while booting guests with virtconsoles. 
Also the sysfs information about virtconsoles was sometimes 
mismatched. With vanilla kernel 2.6.35 it worked better, with occasional 
Oopses while booting the guest.
My colleague was working on that part so when he's back from holiday he 
can provide more information.


With virtserialport it always worked correctly.

Amit


Lukáš
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [KVM-autotest] virtio_console test

2010-08-20 Thread Lukáš Doktor

Dne 20.8.2010 15:40, Lukas Doktor napsal(a):

Hi,

This patch adds a new test for virtio_console. It supports both virtio_console 
types, serialport and console, and it contains three tests:
1) smoke
2) loopback
3) perf


Before any tests are executed it starts the machine with the required number of 
virtio_consoles. Then it allows the user to run all three tests. Using the 
parameters the user can control which tests are executed and what settings are 
used. All tests support multiple runs using a ';' separated list of settings. 
Most of the settings are optional; the mandatory ones are written in CAPITALS.

ad1) virtio_console_smoke format:
$VIRTIO_CONSOLE_TYPE:$custom_data

It creates a loopback via the $VIRTIO_CONSOLE_TYPE console and sends $custom_data. 
If the received data match the original, the test passes.

ad2) virtio_console_loopback format:
$SOURCE_CONSOLE_TYPE@$buffer_length:$DESTINATION_CONSOLE1_TYPE@$buffer_length:...:$DESTINATION_CONSOLEX_TYPE@$buffer_length:$loopback_buffer_length

Creates a loopback between the $SOURCE_CONSOLE_TYPE console and all following 
$DESTINATION_CONSOLEn_TYPE consoles. Then it sends data in $buffer_length 
chunks to the source port. The loopback resends the data in 
$loopback_buffer_length chunks to all destination consoles. The test listens 
on the destination consoles and verifies the received data.

NOTE: in debug mode you can see the sent/received data buffers every 
second during the test.

ad3) virtio_console_perf format:
$VIRTIO_CONSOLE_TYPE@$buffer_size:$test_duration

First it sends the prepared data in a loop over the $VIRTIO_CONSOLE_TYPE console 
from host to guest. The guest only reads all the data and throws them away. This 
part runs for $test_duration seconds.
Second it does the same from guest to host.

For both runs it provides the minimum/median/maximum throughput and 
average guest/host loads.
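For illustration only (these concrete values are invented, not taken from the 
patch), settings following the three formats above might look like:

   virtio_console_smoke = serialport:Hello
   virtio_console_loopback = serialport@1024:console@1024:2048
   virtio_console_perf = serialport@1048576:60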


Best regards,
Lukas Doktor
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html



Just a note about kernels:
serialport works great but console has big issues. Use kernels >= 
2.6.35 for testing.

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-12-18 Thread Lukáš Doktor

Hello people,

as we promised, here is a new version with the modifications you wanted us 
to do. It's a complete package based on the newest GIT version.


[Changelog]
- new structure (tests_base.cfg, ...)
- improved log
- function get_stat() raises an error when accessing a dead VM
- get_stat() split into 2 functions: _get_stat() returns an int, 
get_stat() returns a log string

- use of session.close() instead of get_command_status('exit;')
- PID of VM is taken using -pidfile option (RFC: It would be nice to 
have this in framework by default)

- possible infinite loop (i = i + 1)
- 32bit host supports 3.1GB guest, 64bit without limitation, detection 
using image file_name


[Not changed]
- We skip the merge of serial and parallel init functions as the result 
would be way more complicated (= more possible errors)
From 3420916facae18f45617e3c25c365eaa59c0374c Mon Sep 17 00:00:00 2001
From: Lukáš Doktor me...@book.localdomain
Date: Fri, 18 Dec 2009 15:56:31 +0100
Subject: [KSM-autotest] KSM overcommit v2 modification
[Changelog]
 - new structure (tests_base.cfg, ...)
 - improved log
 - function get_stat() raises an error when accessing a dead VM
 - get_stat() split into 2 functions: _get_stat() returns an int, get_stat() returns a log string
 - PID of VM is taken using -pidfile option (RFC: It would be nice to have this in framework by default)
 - possible infinite loop (i = i + 1)
 - 32bit host supports 3.1GB guest, 64bit without limitation, detection using image file_name

[Not changed]
- We skip the merge of serial and parallel init functions as the result would be way more complicated (= more possible errors)
---
 client/tests/kvm/tests/ksm_overcommit.py |  616 ++
 client/tests/kvm/tests_base.cfg.sample   |   18 +
 client/tests/kvm/unattended/allocator.py |  213 ++
 3 files changed, 847 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/ksm_overcommit.py
 create mode 100644 client/tests/kvm/unattended/allocator.py

diff --git a/client/tests/kvm/tests/ksm_overcommit.py b/client/tests/kvm/tests/ksm_overcommit.py
new file mode 100644
index 000..a726e1c
--- /dev/null
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -0,0 +1,616 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+import kvm_subprocess, kvm_test_utils, kvm_utils
+import kvm_preprocessing
+import random, string, math, os
+
+def run_ksm_overcommit(test, params, env):
+    """
+    Tests how KSM (Kernel Shared Memory) acts when more than the physical
+    memory is used. In the second part it also tests how KVM handles the
+    situation when the host runs out of memory (the expected behaviour is to
+    pause the guest system, wait until some process returns the memory and
+    bring the guest back to life)
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+
+    def parse_meminfo(rowName):
+        """
+        Gets data from the file /proc/meminfo
+
+        @param rowName: Name of line in meminfo
+        """
+        for line in open('/proc/meminfo').readlines():
+            if line.startswith(rowName + ":"):
+                name, amt, unit = line.split()
+                return name, amt, unit
+
+    def parse_meminfo_value(rowName):
+        """
+        Converts a meminfo value to int
+
+        @param rowName: Name of line in meminfo
+        """
+        name, amt, unit = parse_meminfo(rowName)
+        return amt
+
+    def _get_stat(vm):
+        if vm.is_dead():
+            raise error.TestError("_get_stat: Trying to get information of a"
+                                  " dead VM: %s" % vm.name)
+        try:
+            cmd = "cat /proc/%d/statm" % params.get('pid_' + vm.name)
+            shm = int(os.popen(cmd).readline().split()[2])
+            # statm stores information in pages, recalculate to MB
+            shm = shm * 4 / 1024
+        except:
+            raise error.TestError("_get_stat: Could not fetch shmem info from"
+                                  " VM: %s" % vm.name)
+        return shm
+
+    def get_stat(lvms):
+        """
+        Get statistics in format:
+        Host: memfree = XXXM; Guests memsh = {XXX,XXX,...}
+
+        @param lvms: List of VMs
+        """
+        if not isinstance(lvms, list):
+            raise error.TestError("get_stat: parameter has to be a proper list")
+
+        try:
+            stat = "Host: memfree = "
+            stat += str(int(parse_meminfo_value("MemFree")) / 1024) + "M; "
+            stat += "swapfree = "
+            stat += str(int(parse_meminfo_value("SwapFree")) / 1024) + "M; "
+        except:
+            raise error.TestFail("Could not fetch free memory info")
+
+        stat += "Guests memsh = {"
+        for vm in lvms:
+            stat += "%dM; " % (_get_stat(vm))
+        stat = stat[0:-2] + "}"
+        return stat
+
+    def tmp_file(file, ext=None, dir='/tmp/'):
+        while True:
+

Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-12-01 Thread Lukáš Doktor

Dne 29.11.2009 17:17, Dor Laor napsal(a):

On 11/26/2009 12:11 PM, Lukáš Doktor wrote:

Hello Dor,

Thank you for your review. I have few questions about your comments:

--- snip ---

+ stat += "Guests memsh = {"
+ for vm in lvms:
+ if vm.is_dead():
+ logging.info("Trying to get informations of death VM: %s"
+ % vm.name)
+ continue


You can fail the entire test. Afterwards it will be hard to find the
issue.



Well if it's what the community wants, we can change it. We just didn't
want to lose information about the rest of the systems. Perhaps we can
set some DIE flag and after collecting all statistics raise an Error.


I don't think we need to continue testing if some thing as basic as VM
died upon us.

OK, we are going to change this.





--- snip ---

+ def get_true_pid(vm):
+ pid = vm.process.get_pid()
+ for i in range(1,10):
+ pid = pid + 1


What are you trying to do here? It's seems like a nasty hack that might
fail on load.




qemu has a -pidfile option. It works fine.

Oh my, I haven't thought of this. Of course I'm going to use -pidfile 
instead of this silly thing...




Yes and I'm really sorry for this ugly hack. The qemu command has
changed since the first patch was made. Nowadays vm.pid returns the
PID of the command itself, not the actual qemu process.
We need the PID of the actual qemu process, which is executed by
the command with PID vm.pid. That's why I first try finding the qemu
process as the PID following vm.pid. I haven't found another solution
yet (in case we don't want to change the qemu command back in the
framework).
We have tested this solution under heavy process load and either the first
or the second part always finds the right value.
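For reference, the -pidfile route agreed on above is trivial on the consumer 
side; a minimal sketch, assuming qemu was started with -pidfile <path> (the 
helper name is mine, not from the patch):

   def get_qemu_pid(pidfile):
       # qemu writes the PID of the actual qemu process into this file
       f = open(pidfile)
       try:
           return int(f.read().strip())
       finally:
           f.close()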

--- snip ---

+ if (params['ksm_test_size'] == "paralel") :
+ vmsc = 1
+ overcommit = 1
+ mem = host_mem
+ # 32bit system adjustment
+ if not params['image_name'].endswith("64"):
+ logging.debug("Probably i386 guest architecture, \
+ max allocator mem = 2G")


Better not to rely on the guest name. You can test percentage of the
guest mem.



What do you mean by percentage of the guest mem? This adjustment is
made because the maximum memory for 1 process on a 32 bit OS is 2GB.
Testing the 'image_name' proved to be the most reliable method we found.



It's not that important but it should be a convention of kvm autotest.
If that's acceptable, fine, otherwise, each VM will define it in the
config file

Yes, kvm-autotest definitely needs a way to decide whether this is a 32 or 
64 bit guest. I'll send a separate email to the KVM-autotest mailing list to 
let others express their opinions.




--- snip ---

+ # Guest can have more than 2G but kvm mem + 1MB (allocator itself)
+ # can't
+ if (host_mem > 2048):
+ mem = 2047
+
+
+ if os.popen("uname -i").readline().startswith("i386"):
+ logging.debug("Host is i386 architecture, max guest mem is 2G")


There are bigger 32 bit guests.


How do you mean this note? We are testing whether the host machine is 32
bit. If so, the maximum process allocation is 2GB (similar to the 32
bit guest case) but this time the whole qemu process (2GB qemu machine + 64
MB qemu overhead) can't exceed 2GB.
Still the maximum memory used in the test is the same (as we increase the VM
count - host_mem = guest_mem * vm_count; guest_mem is decreased,
vm_count is increased)


i386 guests with PAE mode (additional 4 bits) can have up to 16G ram in
theory.

OK so we should first check whether PAE is on and separate into 3 groups 
(64bit-unlimited, PAE-16G, 32bit-2G); a rough sketch of such a check is 
shown below.
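A minimal sketch of that three-way split, assuming a host-side read of 
/proc/cpuinfo (the helper name and the host-side check are my illustration, 
not part of the test; a guest-side answer would need the same check run 
inside the guest):

   def mem_class(image_name):
       # 64bit guests are unlimited; for 32bit ones look for the pae flag
       if image_name.endswith("64"):
           return "unlimited"
       for line in open("/proc/cpuinfo"):
           if line.startswith("flags") and " pae" in line:
               return "16G"    # PAE: up to 16G in theory
       return "2G"             # plain 32bit: 2G per process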




--- snip ---

+
+ # Copy the allocator.c into guests


.py


yes indeed.

--- snip ---

+ # Let ksmd work (until the shared mem reaches the expected value)
+ shm = 0
+ i = 0
+ cmd = "cat /proc/%d/statm" % get_true_pid(vm)
+ while shm < ksm_size:
+ if i > 64:
+ logging.info(get_stat(lvms))
+ raise error.TestError("SHM didn't merge the memory until \
+ the DL on guest: %s" % (vm.name))
+ logging.debug("Sleep(%d)" % (ksm_size / 200 * perf_ratio))
+ time.sleep(ksm_size / 200 * perf_ratio)
+ try:
+ shm = int(os.popen(cmd).readline().split()[2])
+ shm = shm * 4 / 1024
+ i = i + 1


Either you have nice statistic calculation function or not.
I vote for the first case.



Yes, we are using the statistics function for the output. But in this
case we just need to know the shm value, not to log anything.
If this is a big problem even for others, we can split the statistics
function into 2:
int = _get_stat(vm) - returns shm value
string = get_stat(vm) - Uses _get_stats and creates a nice log output

--- snip ---

+ """ Check if memory in max loading guest is allright """
+ logging.info("Starting phase 3b")
+
+ """ Kill rest of machine """


We should have a function for it for all kvm autotest



you think lsessions[i].close() instead of (status,data) =
lsessions[i].get_command_status_output("exit;",20)?
Yes, it would be better.


+ for i in range(last_vm+1, vmsc):
+ (status,data) = lsessions[i].get_command_status_output("exit;",20

[KVM-autotest][RFC] 32/64 bit guest system definition

2009-12-01 Thread Lukáš Doktor

Hello,

In our test (KSM-overcommit) we need to know whether the guest system is a 
32, 32-PAE, or 64bit system. Currently we are using the params['image_name'] 
string, which ends with 32 or 64.


Can we rely on this 'image_name' parameter string ending, or do you 
think KVM-autotest should define this in a separate option in the 
configuration file?


Best regards,
Lukáš Doktor
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-11-26 Thread Lukáš Doktor
(os.popen(cmd).readline().split()[2])
+ shm = shm * 4 / 1024
+ except:
+ raise error.TestError("Could not fetch shmem info from proc")


Didn't you need to increase i?


yes, you are right. This line i = i + 1 somehow disappeared...

--- snip ---

+ def compare_page(self,original,inmem):
+ """
+ compare memory
+ """



Why do you need it? Is it to really check ksm didn't do damage?
Interesting, I never doubted ksm for that. Actually it is a good idea to
test...



We were asked to do so (be paranoid, anything could happen). We can 
make this optional in the config.



Once again thanks, I'm looking forward to your reply.

Best regards,
Lukáš Doktor
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [KVM-AUTOTEST PATCH 1/2] Add KSM test

2009-09-01 Thread Lukáš Doktor
I'm sorry but thunderbird apparently crippled the patch. Resending as an 
attachment.
diff --git a/client/tests/kvm/allocator.c b/client/tests/kvm/allocator.c
new file mode 100644
index 000..89e8ce4
--- /dev/null
+++ b/client/tests/kvm/allocator.c
@@ -0,0 +1,571 @@
+/*
+ * KSM test program.
+ * Copyright(C) 2009 Redhat
+ * Jason Wang (jasow...@redhat.com)
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <string.h>
+#include <errno.h>
+#include <syscall.h>
+#include <time.h>
+#include <stdint.h>
+//socket linux
+#include <sys/socket.h>
+#include <arpa/inet.h>
+#include <netinet/in.h>
+#include <signal.h>
+//TODO: socket windows
+
+
+
+#define PS (4096)
+long PAGE_SIZE = PS;
+long intInPage = PS/sizeof(int);
+#define MAP_FLAGS ( MAP_ANON | MAP_SHARED )
+#define PROT_FLAGS ( PROT_WRITE )
+#define FILE_MODE ( O_RDWR | O_CREAT )
+#define LOG_FILE "/var/log/vksmd"
+#define FIFO_FILE "/tmp/vksmd"
+#define MODE 0666
+#define FILE_BASE "/tmp/ksm_file"
+#define MAX_SIZESIZE 6
+#define MAX_COMMANDSIZE 50
+#define BLOCK_COUNT 8
+
+int log_fd = -1;
+int base_fd = -1;
+int checkvalue = 0;
+
+
+//Socket
+struct sockaddr_in sockName;
+struct sockaddr_in clientInfo;
+int mainSocket,clientSocket;
+int port;
+
+socklen_t addrlen;
+
+
+
+
+const uint32_t random_mask = UINT32_MAX >> 1;
+uint32_t random_x = 0;
+const uint32_t random_a = 1103515245;
+const uint32_t random_m = 2^32;
+const uint32_t random_c = 12345;
+
+int statickey = 0;
+int dynamickey = 0;
+
+typedef enum _COMMANDS
+{
+  wrongcommad,
+  ninit,
+  nrandom,
+  nexit,
+  nsrandom,
+  nsrverify,
+  nfillzero,
+  nfillvalue,
+  ndfill,
+  nverify
+} COMMANDS;
+
+void sigpipe (int param)
+{
+  fprintf(stderr,"write error\n");
+  //exit(-1); //uncomment if the network connection is down
+}
+
+int writefull(int socket,char * data,int size){
+  int sz = 0;
+  while (sz < size)
+    sz += write(socket, data+sz, size-sz);
+  return sz;
+}
+
+
+int write_message(int s,char * message){
+  size_t len = strlen(message);
+  char buf[10];
+  sprintf(buf,"%d:",(unsigned int)len);
+  size_t size = strlen(buf);
+
+  struct timeval tv;
+  fd_set writeset;
+  fd_set errorset;
+  FD_ZERO(&writeset);
+  FD_ZERO(&errorset);
+  FD_SET(clientSocket, &writeset);
+  FD_SET(clientSocket, &errorset);
+  tv.tv_sec = 0;
+  tv.tv_usec = 100;
+  int max = s+1;
+  tv.tv_sec = 10;
+  tv.tv_usec = 0;
+  int ret = select(max, NULL, &writeset, NULL, &tv);
+  if (ret == -1)
+  {
+    return -1;
+  }
+  if (ret == 0)
+  {
+    return -1;
+  }
+  if (FD_ISSET(s, &writeset))
+  {
+    if (writefull(s, buf, size) != size){
+      return -1;
+    }
+    if (writefull(s, message, len) != len){
+      return -1;
+    }
+  }
+  return 0;
+}
+
+void log_info(char *str)
+{
+  if (write_message(clientSocket, str) != 0){
+    fprintf(stderr,"write error\n");
+  }
+}
+
+/* fill pages with zero */
+void zero_pages(void **page_array,int npages)
+{
+  int n = 0;
+  for(n=0;n<npages;n++)
+    memset(page_array[n],0,intInPage);
+}
+
+/* fill pages with a given value */
+void value_to_pages(void **page_array,int npages,char value)
+{
+  int n = 0;
+  for(n=0;n<npages;n++)
+    memset(page_array[n],value,PAGE_SIZE/sizeof(char));
+}
+
+/* initialise page_array */
+void **map_zero_page(unsigned long npages)
+{
+  void **page_array=(void **)malloc(sizeof(void *)*npages);
+  long n = 0;
+
+  if ( page_array == NULL ) {
+    log_info("page array allocation failed\n");
+    return NULL;
+  }
+
+#if 0
+  /* Map the /dev/zero in order to be detected by KSM */
+  for( n=0 ; n < npages; n++){
+    int i;
+    void *addr=(void *)mmap(0,PAGE_SIZE,PROT_FLAGS,MAP_FLAGS,0,0);
+    if ( addr == MAP_FAILED ){
+      log_info("map failed!\n");
+      for (i=0;i<n;i++)
+        munmap( page_array[i], 0);
+      free(page_array);
+      return NULL;
+    }
+
+    page_array[n] = addr;
+  }
+#endif
+
+  void *addr = (void *)mmap(0,PAGE_SIZE*npages,PROT_FLAGS,MAP_FLAGS,0,0);
+  if (addr == MAP_FAILED){
+    log_info("FAIL: map failed!\n");
+    free(page_array);
+    return NULL;
+  }
+
+  for (n=0;n<npages;n++)
+    page_array[n] = addr+PAGE_SIZE*n;
+
+  zero_pages(page_array,npages);
+
+  return page_array;
+}
+
+/* fill page with random data */
+void random_fill(void **page_array, unsigned long npages)
+{
+  int n = 0;
+  int value = 0;
+  int offset = 0;
+  void *addr = NULL;
+
+  for( n = 0; n < npages; n++){
+    offset = rand() % (intInPage);
+    value = rand();
+    addr = page_array[n] + offset;
+    *((int *)addr) = value;
+  }
+}
+
+
+/*set random series seed*/
+void mrseed(int seed){
+  random_x = seed;
+}
+
+/*Generate random number*/
+int mrand(){
+  random_x = random_a*random_x+random_c;
+  return random_x & random_mask;
+}
+
+/* Generate randomcode array*/
+int* random_code_array(int nblock)
+{
+  int * randArray = malloc(PAGE_SIZE*nblock);
+  int n = 0;
+  for (;n < nblock;n++){
+    int i = 0;
+    for (;i < intInPage;i++){
+      randArray[n*intInPage+i]=mrand();
+    }
+  }
+  return 

Re: [KVM-AUTOTEST PATCH 2/2] Add KSM test

2009-09-01 Thread Lukáš Doktor
I'm sorry but thunderbird apparently crippled the patch. Resending as an 
attachment.
diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index 4930e80..b9839df 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -53,6 +53,8 @@ class kvm(test.test):
 "yum_update":   test_routine("kvm_tests", "run_yum_update"),
 "autotest":     test_routine("kvm_tests", "run_autotest"),
 "kvm_install":  test_routine("kvm_install", "run_kvm_install"),
+"ksm":          test_routine("kvm_tests", "run_ksm"),
 "linux_s3":     test_routine("kvm_tests", "run_linux_s3"),
 "stress_boot":  test_routine("kvm_tests", "run_stress_boot"),
 "timedrift":    test_routine("kvm_tests", "run_timedrift"),
diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
index a83ef9b..f4a41b9 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -100,6 +100,23 @@ variants:
 test_name = disktest
 test_control_file = disktest.control
 
+- ksm:
+# Don't preprocess any vms as we need to change their params
+vms = ''
+image_snapshot = yes
+kill_vm_gracefully = no
+type = ksm
+variants:
+- ratio_3:
+ksm_ratio = 3
+- ratio_10:
+ksm_ratio = 10
+variants:
+- serial:
+ksm_test_size = serial
+- paralel:
+ksm_test_size = paralel
+
 - linux_s3: install setup
 type = linux_s3
 
diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
index b100269..ada4c6b 100644
--- a/client/tests/kvm/kvm_tests.py
+++ b/client/tests/kvm/kvm_tests.py
@@ -462,6 +462,554 @@ def run_yum_update(test, params, env):
 
 session.close()
 
+def run_ksm(test, params, env):
+    """
+    Tests how KSM (Kernel Shared Memory) acts when more than the physical
+    memory is used. In the second part it also tests how KVM handles the
+    situation when the host runs out of memory (the expected behaviour is to
+    pause the guest system, wait until some process returns the memory and
+    bring the guest back to life)
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    # We are going to create the main VM so we use kvm_preprocess functions
+    # FIXME: not a nice thing
+    import kvm_preprocessing
+    import random
+    import socket
+    import select
+    import math
+
+    class allocator_com:
+        """
+        This class is used for communication with the allocator
+        """
+        def __init__(self, vm, _port, _host='127.0.0.1'):
+            self.vm = vm
+            self.PORT = _port
+            self.HOST = _host
+            self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+            self.isConnect = False
+
+        def __str__(self):
+            return self.vm + ":" + self.HOST + ":" + str(self.PORT)
+
+        def connect(self):
+            print self
+            logging.debug("ALLOC: connect to %s", self.vm)
+            try:
+                self.socket.connect((self.HOST, self.PORT))
+            except:
+                raise error.TestFail("ALLOC: Could not establish the "
+                                     "communication with %s" % (self.vm))
+            self.isConnect = True
+
+        def isConnected(self):
+            return self.isConnect;
+
+        def readsize(self):
+            read, write, error = select.select([self.socket.fileno()], [], [], 0.5)
+            size = 0
+            if (self.socket.fileno() in read):
+                data = self.socket.recv(1);
+                size = ""
+                while data[0] != ':':
+                    size = size + data[0]
+                    data = self.socket.recv(1)
+            return int(size)
+
+        def _recv(self):
+            msg = ""
+            read, write, error = select.select([self.socket.fileno()],
+                                               [], [], 0.5)
+            if (self.socket.fileno() in read):
+                size = self.readsize()
+                msg = self.socket.recv(size)
+                if (len(msg) < size):
+                    raise error.TestFail("ALLOC: Could not receive the message")
+
+            logging.debug("ALLOC: output '%s' from %s" % (msg, self.vm))
+            return msg
+
+        def recv(self, wait=1, loops=20):
+            out = ""
+            log = ""
+            while not out.startswith("PASS") and not out.startswith("FAIL"):
+                logging.debug("Sleep(%d)" % (wait))
+                time.sleep(wait)
+                log += out
+                out = self._recv()
+
+                if loops == 0:
+                    logging.error(repr(out))
+                    raise error.TestFail("Command wasn't finished until DL")
+

[KVM-AUTOTEST PATCH 0/2] Add KSM test

2009-08-31 Thread Lukáš Doktor
This patch adds the KSM test. We faced many difficulties which weren't 
solvable by regular means, so please take a look and comment.

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [KVM-AUTOTEST PATCH 1/2] Add KSM test

2009-08-31 Thread Lukáš Doktor
allocator.c is a program which allocates pages in memory and allows us 
to fill or test those pages. It is controlled using sockets; a sketch of the 
wire format follows below.
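The control protocol is length-prefixed: every message travels as 
"<len>:<payload>", as built by write_message() in allocator.c and parsed by 
allocator_com.readsize() in the test. A minimal stand-alone reader, assuming 
an already-connected socket (the function name is illustrative, not from the 
patch):

   def read_message(sock):
       # read digits up to ':' to learn the payload length
       size = ""
       ch = sock.recv(1)
       while ch != ':':
           size += ch
           ch = sock.recv(1)
       return sock.recv(int(size))

(Python 2, like the rest of the test; note recv() may return fewer bytes than 
requested, which the real code checks and turns into a TestFail.)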


Signed-off-by: Lukáš Doktor ldok...@redhat.com
Signed-off-by: Jiří Župka jzu...@redhat.com
---
 client/tests/kvm/allocator.c |  571 
++

 1 files changed, 571 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/allocator.c

diff --git a/client/tests/kvm/allocator.c b/client/tests/kvm/allocator.c
new file mode 100644
index 000..89e8ce4
--- /dev/null
+++ b/client/tests/kvm/allocator.c
@@ -0,0 +1,571 @@
+/*
+ * KSM test program.
+ * Copyright(C) 2009 Redhat
+ * Jason Wang (jasow...@redhat.com)
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <string.h>
+#include <errno.h>
+#include <syscall.h>
+#include <time.h>
+#include <stdint.h>
+//socket linux
+#include <sys/socket.h>
+#include <arpa/inet.h>
+#include <netinet/in.h>
+#include <signal.h>
+//TODO: socket windows
+
+
+
+#define PS (4096)
+long PAGE_SIZE = PS;
+long intInPage = PS/sizeof(int);
+#define MAP_FLAGS ( MAP_ANON | MAP_SHARED )
+#define PROT_FLAGS ( PROT_WRITE )
+#define FILE_MODE ( O_RDWR | O_CREAT )
+#define LOG_FILE "/var/log/vksmd"
+#define FIFO_FILE "/tmp/vksmd"
+#define MODE 0666
+#define FILE_BASE "/tmp/ksm_file"
+#define MAX_SIZESIZE 6
+#define MAX_COMMANDSIZE 50
+#define BLOCK_COUNT 8
+
+int log_fd = -1;
+int base_fd = -1;
+int checkvalue = 0;
+
+
+//Socket
+struct sockaddr_in sockName;
+struct sockaddr_in clientInfo;
+int mainSocket,clientSocket;
+int port;
+
+socklen_t addrlen;
+
+
+
+
+const uint32_t random_mask = UINT32_MAX >> 1;
+uint32_t random_x = 0;
+const uint32_t random_a = 1103515245;
+const uint32_t random_m = 2^32;
+const uint32_t random_c = 12345;
+
+int statickey = 0;
+int dynamickey = 0;
+
+typedef enum _COMMANDS
+{
+  wrongcommad,
+  ninit,
+  nrandom,
+  nexit,
+  nsrandom,
+  nsrverify,
+  nfillzero,
+  nfillvalue,
+  ndfill,
+  nverify
+} COMMANDS;
+
+void sigpipe (int param)
+{
+  fprintf(stderr,"write error\n");
+  //exit(-1); //uncomment if the network connection is down
+}
+
+int writefull(int socket,char * data,int size){
+  int sz = 0;
+  while (sz < size)
+    sz += write(socket, data+sz, size-sz);
+  return sz;
+}
+
+
+int write_message(int s,char * message){
+  size_t len = strlen(message);
+  char buf[10];
+  sprintf(buf,"%d:",(unsigned int)len);
+  size_t size = strlen(buf);
+
+  struct timeval tv;
+  fd_set writeset;
+  fd_set errorset;
+  FD_ZERO(&writeset);
+  FD_ZERO(&errorset);
+  FD_SET(clientSocket, &writeset);
+  FD_SET(clientSocket, &errorset);
+  tv.tv_sec = 0;
+  tv.tv_usec = 100;
+  int max = s+1;
+  tv.tv_sec = 10;
+  tv.tv_usec = 0;
+  int ret = select(max, NULL, &writeset, NULL, &tv);
+  if (ret == -1)
+  {
+    return -1;
+  }
+  if (ret == 0)
+  {
+    return -1;
+  }
+  if (FD_ISSET(s, &writeset))
+  {
+    if (writefull(s, buf, size) != size){
+      return -1;
+    }
+    if (writefull(s, message, len) != len){
+      return -1;
+    }
+  }
+  return 0;
+}
+
+void log_info(char *str)
+{
+  if (write_message(clientSocket, str) != 0){
+    fprintf(stderr,"write error\n");
+  }
+}
+
+/* fill pages with zero */
+void zero_pages(void **page_array,int npages)
+{
+  int n = 0;
+  for(n=0;n<npages;n++)
+    memset(page_array[n],0,intInPage);
+}
+
+/* fill pages with a given value */
+void value_to_pages(void **page_array,int npages,char value)
+{
+  int n = 0;
+  for(n=0;n<npages;n++)
+    memset(page_array[n],value,PAGE_SIZE/sizeof(char));
+}
+
+/* initialise page_array */
+void **map_zero_page(unsigned long npages)
+{
+  void **page_array=(void **)malloc(sizeof(void *)*npages);
+  long n = 0;
+
+  if ( page_array == NULL ) {
+    log_info("page array allocation failed\n");
+    return NULL;
+  }
+
+#if 0
+  /* Map the /dev/zero in order to be detected by KSM */
+  for( n=0 ; n < npages; n++){
+    int i;
+    void *addr=(void *)mmap(0,PAGE_SIZE,PROT_FLAGS,MAP_FLAGS,0,0);
+    if ( addr == MAP_FAILED ){
+      log_info("map failed!\n");
+      for (i=0;i<n;i++)
+        munmap( page_array[i], 0);
+      free(page_array);
+      return NULL;
+    }
+
+    page_array[n] = addr;
+  }
+#endif
+
+  void *addr = (void *)mmap(0,PAGE_SIZE*npages,PROT_FLAGS,MAP_FLAGS,0,0);
+  if (addr == MAP_FAILED){
+    log_info("FAIL: map failed!\n");
+    free(page_array);
+    return NULL;
+  }
+
+  for (n=0;n<npages;n++)
+    page_array[n] = addr+PAGE_SIZE*n;
+
+  zero_pages(page_array,npages);
+
+  return page_array;
+}
+
+/* fill page with random data */
+void random_fill(void **page_array, unsigned long npages)
+{
+  int n = 0;
+  int value = 0;
+  int offset = 0;
+  void *addr = NULL;
+
+  for( n = 0; n < npages; n++){
+    offset = rand() % (intInPage);
+    value = rand();
+    addr = page_array[n] + offset;
+    *((int *)addr) = value;
+  }
+}
+
+
+/*set random series seed*/
+void mrseed(int seed){
+  random_x = seed;
+}
+
+/*Generate random number*/
+int mrand(){
+  random_x

Re: [KVM-AUTOTEST PATCH 2/2] Add KSM test

2009-08-31 Thread Lukáš Doktor

This is the actual KSM test.

It allows testing merging resp. splitting of the pages in serial, parallel 
or both. Also you can specify an overcommit ratio for KSM overcommit 
testing.


We were forced to destroy all previously defined vms and to create them 
inside the test (similar to stress_boot), because we don't know how many 
machines will be required during the vm preparation.


The second nasty thing is filling the memory from the guests. We didn't find 
a better way to test filled memory without crashing python (kvm-autotest). 
This version continues filling until a small reserve remains, then destroys 
the previous machines and lets the actual machine finish the work; a rough 
sketch of this strategy is shown below.
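A hypothetical outline of that fill strategy (the names and the exact 
allocator commands are invented for illustration; the real logic lives in 
run_ksm):

   def fill_guests(allocators, guest_mem_mb, reserve_mb=64):
       # every guest fills its memory up to a small reserve ...
       for i, alloc in enumerate(allocators):
           alloc.cmd("mem_fill %d" % (guest_mem_mb - reserve_mb))
           alloc.recv()                    # wait for PASS/FAIL
           # ... then the previous guest is destroyed so the current
           # one can finish allocating without exhausting the host
           if i > 0:
               allocators[i - 1].vm.destroy()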


Signed-off-by: Lukáš Doktor ldok...@redhat.com
Signed-off-by: Jiří Župka jzu...@redhat.com
---
 client/tests/kvm/kvm.py   |2 +
 client/tests/kvm/kvm_tests.cfg.sample |   17 +
 client/tests/kvm/kvm_tests.py |  548 
+

 3 files changed, 567 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index 4930e80..b9839df 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -53,6 +53,8 @@ class kvm(test.test):
 "yum_update":   test_routine("kvm_tests", "run_yum_update"),
 "autotest":     test_routine("kvm_tests", "run_autotest"),
 "kvm_install":  test_routine("kvm_install", "run_kvm_install"),
+"ksm":          test_routine("kvm_tests", "run_ksm"),
 "linux_s3":     test_routine("kvm_tests", "run_linux_s3"),
 "stress_boot":  test_routine("kvm_tests", "run_stress_boot"),
 "timedrift":    test_routine("kvm_tests", "run_timedrift"),
diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
index a83ef9b..f4a41b9 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -100,6 +100,23 @@ variants:
 test_name = disktest
 test_control_file = disktest.control

+- ksm:
+# Don't preprocess any vms as we need to change their params
+vms = ''
+image_snapshot = yes
+kill_vm_gracefully = no
+type = ksm
+variants:
+- ratio_3:
+ksm_ratio = 3
+- ratio_10:
+ksm_ratio = 10
+variants:
+- serial:
+ksm_test_size = serial
+- paralel:
+ksm_test_size = paralel
+
 - linux_s3: install setup
 type = linux_s3

diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
index b100269..ada4c6b 100644
--- a/client/tests/kvm/kvm_tests.py
+++ b/client/tests/kvm/kvm_tests.py
@@ -462,6 +462,554 @@ def run_yum_update(test, params, env):

 session.close()

+def run_ksm(test, params, env):
+    """
+    Tests how KSM (Kernel Shared Memory) acts when more than the physical
+    memory is used. In the second part it also tests how KVM handles the
+    situation when the host runs out of memory (the expected behaviour is to
+    pause the guest system, wait until some process returns the memory and
+    bring the guest back to life)
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    # We are going to create the main VM so we use kvm_preprocess functions
+    # FIXME: not a nice thing
+    import kvm_preprocessing
+    import random
+    import socket
+    import select
+    import math
+
+    class allocator_com:
+        """
+        This class is used for communication with the allocator
+        """
+        def __init__(self, vm, _port, _host='127.0.0.1'):
+            self.vm = vm
+            self.PORT = _port
+            self.HOST = _host
+            self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+            self.isConnect = False
+
+        def __str__(self):
+            return self.vm + ":" + self.HOST + ":" + str(self.PORT)
+
+        def connect(self):
+            print self
+            logging.debug("ALLOC: connect to %s", self.vm)
+            try:
+                self.socket.connect((self.HOST, self.PORT))
+            except:
+                raise error.TestFail("ALLOC: Could not establish the "
+                                     "communication with %s" % (self.vm))
+            self.isConnect = True
+
+        def isConnected(self):
+            return self.isConnect;
+
+        def readsize(self):
+            read, write, error = select.select([self.socket.fileno()], [], [], 0.5)
+            size = 0
+            if (self.socket.fileno() in read):
+                data = self.socket.recv(1);
+                size = ""
+                while data[0] != ':':
+                    size = size + data[0]
+                    data = self.socket.recv(1)
+            return int(size)
+
+        def _recv(self):
+            msg = ""
+            read, write, error = select.select

Re: [KVM-AUTOTEST PATCH] KVM test: Add hugepage variant

2009-08-04 Thread Lukáš Doktor

Hello Ryan,

see below...

Dne 29.7.2009 16:41, Ryan Harper napsal(a):

* Lucas Meneghel Rodriguesl...@redhat.com  [2009-07-28 22:40]:

This patch adds a small setup script to set up huge memory
pages during the kvm tests execution. Also, added hugepage setup to the
fc8_quick sample.

Signed-off-by: Luká?? Doktorldok...@redhat.com
Signed-off-by: Lucas Meneghel Rodriguesl...@redhat.com


Looks good.  one nit below.

Signed-off-by: Ryan Harperry...@us.ibm.com


---
+
+def get_target_hugepages(self):
+"""
+Calculate the target number of hugepages for testing purposes.
+"""
+if self.vms < self.max_vms:
+self.vms = self.max_vms
+vmsm = (self.vms * self.mem) + (self.vms * 64)


Nit: Maybe a comment about the fudge factor being added in?


It's qemu-kvm overhead. Should I change the patch or is this explanation 
sufficient?


Thanks for the feedback,
bye, Lukáš
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[KVM-AUTOTEST PATCH] FIX add a comment to hugepage variant

2009-08-04 Thread Lukáš Doktor

This adds an explanation of the 64 MB constant in the vmsm equation.
diff --git a/client/tests/kvm/scripts/hugepage.py b/client/tests/kvm/scripts/hugepage.py
index dc36da4..3828533 100644
--- a/client/tests/kvm/scripts/hugepage.py
+++ b/client/tests/kvm/scripts/hugepage.py
@@ -57,6 +57,7 @@ class HugePage:
 
 if self.vms < self.max_vms:
 self.vms = self.max_vms
+# memory of all VMs plus qemu overhead of 64MB per guest
 vmsm = (self.vms * self.mem) + (self.vms * 64)
 return int(vmsm * 1024 / self.hugepage_size)
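A quick worked example of that formula (numbers invented): with 2 VMs of 
1024 MB each and a 4096 kB hugepage size, vmsm = (2 * 1024) + (2 * 64) = 
2176 MB, so the target is 2176 * 1024 / 4096 = 544 hugepages.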
 


Re: [KVM-AUTOTEST PATCH] KVM test: Add hugepage variant

2009-07-29 Thread Lukáš Doktor
. %
+self.target_hugepages)
+hugepage_cfg.close()
+
+
+def mount_hugepage_fs(self):
+"""
+Verify if there's a hugetlbfs mount set. If there's none, will set up
+a hugetlbfs mount using the class attribute that defines the mount
+point.
+"""
+if not os.path.ismount(self.hugepage_path):
+if not os.path.isdir(self.hugepage_path):
+os.makedirs(self.hugepage_path)
+cmd = "mount -t hugetlbfs none %s" % self.hugepage_path
+if os.system(cmd):
+raise HugePageError("Cannot mount hugetlbfs path %s" %
+self.hugepage_path)
+
+
+def setup(self):
+self.set_hugepages()
+self.mount_hugepage_fs()
+
+
+if __name__ == __main__:
+if len(sys.argv) < 2:
+huge_page = HugePage()
+else:
+huge_page = HugePage(sys.argv[1])
+
+huge_page.setup()



Acked-by: Lukáš Doktor ldok...@redhat.com
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [KVM AUTOTEST PATCH] [RFC] KVM test: keep record of supported qemu options

2009-07-29 Thread Lukáš Doktor

Hello Lucas,

I like your patch but I'm not entirely convinced about its necessity. 
Stable versions of KVM should have this fixed, and unstable ones are for 
developers, who are skilled enough to fix this using kvm_tests.cfg.


On the other hand, keep this patch somewhere. Eventually, if qemu starts 
to be naughty, we'll have something useful in our pocket.


Best regards,
Lukáš

Dne 29.7.2009 05:40, Lucas Meneghel Rodrigues napsal(a):

In order to make it easier to figure out problems and
also to avoid aborting tests prematurely due to
incompatible qemu options, keep a record of supported
qemu options, and if extra options are passed to qemu,
verify if they are amongst the supported options. Also,
try to replace known misspellings of options in case
something goes wrong, and be generous logging any problems.

This first version of the patch gets supported flags from
the output of qemu --help. I thought this would be good
enough for a first start. I am asking for input on whether
this is needed, and if yes, if the approach looks good.

Signed-off-by: Lucas Meneghel Rodriguesl...@redhat.com
---
  client/tests/kvm/kvm_vm.py |   79 ++-
  1 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index eba9b84..0dd34c2 100644
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -121,6 +121,7 @@ class VM:
  self.qemu_path = qemu_path
  self.image_dir = image_dir
  self.iso_dir = iso_dir
+self.qemu_supported_flags = self.get_qemu_supported_flags()


  # Find available monitor filename
@@ -258,7 +259,7 @@ class VM:

  extra_params = params.get(extra_params)
  if extra_params:
-qemu_cmd += " %s" % extra_params
+qemu_cmd += " %s" % self.process_qemu_extra_params(extra_params)

  for redir_name in kvm_utils.get_sub_dict_names(params, redirs):
  redir_params = kvm_utils.get_sub_dict(params, redir_name)
@@ -751,7 +752,7 @@ class VM:
  else:
  self.send_key(char)

-
+
  def get_uuid(self):
  
  Catch UUID of the VM.
@@ -762,3 +763,77 @@ class VM:
  return self.uuid
  else:
  return self.params.get(uuid, None)
+
+
+def get_qemu_supported_flags(self):
+"""
+Gets all supported qemu options from qemu --help. This is a useful
+procedure to quickly spot problems with incompatible qemu flags.
+"""
+cmd = self.qemu_path + ' --help'
+(status, output) = kvm_subprocess.run_fg(cmd)
+supported_flags = []
+
+if status:
+logging.error('Process qemu --help ended with exit code !=0. '
+  'No supported qemu flags will be recorded.')
+return supported_flags
+
+for line in output.split('\n'):
+if line and line.startswith('-'):
+flag = line.split()[0]
+if flag not in supported_flags:
+supported_flags.append(flag)
+
+return supported_flags
+
+
+def process_qemu_extra_params(self, extra_params):
+"""
+Verifies an extra param passed to qemu to see if it's supported by the
+current qemu version. If it's not supported, try to find an appropriate
+replacement on a list of known option misspellings.
+
+@param extra_params: String with a qemu command line option.
+"""
+flag = extra_params.split()[0]
+
+if flag not in self.qemu_supported_flags:
+logging.error("Flag %s does not seem to be supported by the "
+  "current qemu version. Looking for a replacement...",
+  flag)
+supported_flag = self.get_qemu_flag_replacement(flag)
+if supported_flag:
+logging.debug("Replacing flag %s with %s", flag,
+  supported_flag)
+extra_params = extra_params.replace(flag, supported_flag)
+else:
+logging.error("No valid replacement was found for flag %s.",
+  flag)
+
+return extra_params
+
+
+def get_qemu_flag_replacement(self, option):
+"""
+Searches on a list of known misspellings for qemu options and returns
+a replacement. If no replacement can be found, return None.
+
+@param option: String representing qemu option (such as -mem).
+
+@return: Option replacement, or None, if none found.
+"""
+list_mispellings = [['-mem-path', '-mempath'],]
+replacement = None
+
+for mispellings in list_mispellings:
+if option in mispellings:
+option_position = mispellings.index(option)
+replacement = mispellings[1 - option_position]
+
+if replacement not in self.qemu_supported_flags:
+logging.error("Replacement %s also 

Re: [KVM AUTOTEST PATCH] KVM test: Add hugepage variant

2009-07-28 Thread Lukáš Doktor
Yes, this looks more pythonic and actually better than my version. I'm 
missing only one thing, extra_params += " -mem-path /mnt/hugepage" down 
in the configuration (see below).


This causes a problem with the predefined mount point, because it needs to be 
the same in extra_params and in the python script.


Dne 27.7.2009 23:10, Lucas Meneghel Rodrigues napsal(a):

This patch adds a small setup script to set up huge memory
pages during the kvm tests execution. Also, added hugepage setup to the
fc8_quick sample.

Signed-off-by: LukĂĄĹĄ Doktorldok...@redhat.com
Signed-off-by: Lucas Meneghel Rodriguesl...@redhat.com

---
  client/tests/kvm/kvm_tests.cfg.sample |6 ++
  client/tests/kvm/kvm_vm.py|   11 +++
  client/tests/kvm/scripts/hugepage.py  |  110 +
  3 files changed, 127 insertions(+), 0 deletions(-)
  create mode 100644 client/tests/kvm/scripts/hugepage.py

diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
index 2d75a66..4a6a174 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -585,6 +585,11 @@ variants:
  only default
  image_format = raw

+variants:
+- @kvm_smallpages:
+- kvm_hugepages:
+pre_command = /usr/bin/python scripts/hugepage.py


+extra_params += " -mem-path /mnt/hugepage"

# ^^Tells qemu to allocate guest memory as hugepage

I'd rather have this part of the cfg look like this:
variants:
- @kvm_smallpages:
- kvm_hugepages:
pre_command = /usr/bin/python scripts/hugepage.py /mnt/hugepage
extra_params += " -mem-path /mnt/hugepage"

because this way the relation between the constants is clearer. (it 
doesn't change the script itself)



+

  variants:
  - @basic:
@@ -598,6 +603,7 @@ variants:
  only Fedora.8.32
  only install setup boot shutdown
  only rtl8139
+only kvm_hugepages
  - @sample1:
  only qcow2
  only ide
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index d96b359..eba9b84 100644
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -397,6 +397,17 @@ class VM:
  self.destroy()
  return False

+# Get the output so far, to see if we have any problems with
+# hugepage setup.
+output = self.process.get_output()
+
+if "alloc_mem_area" in output:
+logging.error("Could not allocate hugepage memory; "
+  "qemu command:\n%s" % qemu_command)
+logging.error("Output:" + kvm_utils.format_str_for_message(
+  self.process.get_output()))
+return False
+
 logging.debug("VM appears to be alive with PID %d",
   self.process.get_pid())
  return True
diff --git a/client/tests/kvm/scripts/hugepage.py b/client/tests/kvm/scripts/hugepage.py
new file mode 100644
index 000..9bc4194
--- /dev/null
+++ b/client/tests/kvm/scripts/hugepage.py
@@ -0,0 +1,110 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+import os, sys, time
+
+"""
+Simple script to allocate enough hugepages for KVM testing purposes.
+"""
+
+class HugePageError(Exception):
+"""
+Simple wrapper for the builtin Exception class.
+"""
+pass
+
+
+class HugePage:
+def __init__(self, hugepage_path=None):
+"""
+Gets environment variable values and calculates the target number
+of huge memory pages.
+
+@param hugepage_path: Path where to mount hugetlbfs path, if not
+yet configured.
+"""
+self.vms = len(os.environ['KVM_TEST_vms'].split())
+self.mem = int(os.environ['KVM_TEST_mem'])
+try:
+self.max_vms = int(os.environ['KVM_TEST_max_vms'])
+except KeyError:
+self.max_vms = 0
+if hugepage_path:
+self.hugepage_path = hugepage_path
+else:
+self.hugepage_path = '/mnt/kvm_hugepage'
+self.hugepage_size = self.get_hugepage_size()
+self.target_hugepages = self.get_target_hugepages()
+
+
+def get_hugepage_size(self):
+"""
+Get the current system setting for huge memory page size.
+"""
+meminfo = open('/proc/meminfo', 'r').readlines()
+huge_line_list = [h for h in meminfo if h.startswith("Hugepagesize")]
+try:
+return int(huge_line_list[0].split()[1])
+except ValueError, e:
+raise HugePageError("Could not get huge page size setting from "
+"/proc/meminfo: %s" % e)
+
+
+def get_target_hugepages(self):
+"""
+Calculate the target number of hugepages for testing purposes.
+"""
+if self.vms < self.max_vms:
+self.vms = self.max_vms
+vmsm = (self.vms * self.mem) + (self.vms * 64)
+return int(vmsm * 1024 / self.hugepage_size)
+

Re: [KVM_AUTOTEST] add kvm hugepage variant

2009-07-21 Thread Lukáš Doktor

Well, thank you for the notifications, I'll keep them in mind.

Also the problem with mempath vs. mem-path is solved. It was just a 
misspelling in one version of KVM.


* fixed patch attached

Dne 20.7.2009 14:58, Lucas Meneghel Rodrigues napsal(a):

On Fri, 2009-07-10 at 12:01 +0200, Lukáš Doktor wrote:

After discussion I split the patches.


Hi Lukáš, sorry for the delay answering your patch. Looks good to me in
general, I have some remarks to make:

1) When posting patches to the autotest kvm tests, please cross post the
autotest mailing list (autot...@test.kernel.org) and the KVM list.

2) About scripts to prepare the environment to perform tests - we've had
some discussion about including shell scripts on autotest. Bottom line,
autotest has a policy of not including non python code when possible
[1]. So, would you mind re-creating your hugepage setup code in python
and re-sending it?

Thanks for your contribution, looking forward getting it integrated to
our tests.

[1] Unless when it is not practical for testing purposes - writing tests
in C is just fine, for example.


This patch adds the kvm_hugepage variant. It prepares the host system and
starts the vm with the -mem-path option. It does not clean up after itself,
because it's impossible to unmount and free hugepages before all guests are
destroyed.

I need to ask you what to do about the change of the qemu parameter. Newest
versions are using -mempath instead of -mem-path. This is impossible to
fix using the current config file. I can see 2 solutions:
1) direct change in kvm_vm.py (parse output and try another param)
2) detect qemu capabilities outside and create an additional layer (better
for future occurrences)

Dne 9.7.2009 11:24, Lukáš Doktor napsal(a):

This patch adds the kvm_hugepage variant. It prepares the host system and
starts the vm with the -mem-path option. It does not clean up after itself,
because it's impossible to unmount and free hugepages before all guests are
destroyed.

The autotest.libhugetlbfs test is also added.

I need to ask you what to do about the change of the qemu parameter. Newest
versions are using -mempath instead of -mem-path. This is impossible to
fix using the current config file. I can see 2 solutions:
1) direct change in kvm_vm.py (parse output and try another param)
2) detect qemu capabilities outside and create an additional layer (better
for future occurrences)

Tested by:ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5




diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -555,6 +555,13 @@ variants:
 only default
 image_format = raw
 
+variants:
+- @kvm_smallpages:
+- kvm_hugepages:
+hugepage_path = /mnt/hugepage
+pre_command = /usr/bin/python scripts/hugepage.py
+extra_params += " -mem-path /mnt/hugepage"
+
 
 variants:
 - @basic:
@@ -568,6 +575,7 @@ variants:
 only Fedora.8.32
 only install setup boot shutdown
 only rtl8139
+only kvm_smallpages
 - @sample1:
 only qcow2
 only ide
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 48f2916..2b97ccc 100644
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -412,6 +412,13 @@ class VM:
 self.destroy()
 return False
 
+if output:
+logging.debug("qemu produced some output:\n%s", output)
+if "alloc_mem_area" in output:
+logging.error("Could not allocate hugepage memory"
+  " -- qemu command:\n%s", qemu_command)
+return False
+
 logging.debug("VM appears to be alive with PID %d", self.pid)
 return True
 

diff -Narup a/client/tests/kvm/scripts/hugepage.py b/client/tests/kvm/scripts/hugepage.py
--- a/client/tests/kvm/scripts/hugepage.py 1970-01-01 01:00:00.0 +0100
+++ a/client/tests/kvm/scripts/hugepage.py 2009-07-21 16:47:00.0 +0200
@@ -0,0 +1,63 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+# Allocates enough hugepages and mounts hugetlbfs
+import os, sys, time
+
+# Variables check & set
+vms = os.environ['KVM_TEST_vms'].split().__len__()
+try:
+    max_vms = int(os.environ['KVM_TEST_max_vms'])
+except KeyError:
+    max_vms = 0
+mem = int(os.environ['KVM_TEST_mem'])
+hugepage_path = os.environ['KVM_TEST_hugepage_path']
+
+fmeminfo = open("/proc/meminfo", "r")
+while fmeminfo:
+    line = fmeminfo.readline()
+    if line.startswith("Hugepagesize"):
+        dumm, hp_size, dumm = line.split()
+        break
+fmeminfo.close()
+
+if not hp_size:
+    print "Could not get Hugepagesize from /proc/meminfo file"
+    raise ValueError
+
+if vms < max_vms:
+    vms = max_vms
+
+vmsm = ((vms * mem) + (vms * 64))
+target = (vmsm * 1024 / int(hp_size))
+
+# Iteratively set # of hugepages
+fhp = open("/proc/sys/vm/nr_hugepages", "r+")
+hp

Re: [KVM_AUTOTEST][RFC] pre_command chaining

2009-07-13 Thread Lukáš Doktor

Hi Michael,

you are right, it is possible. But if I specify pre_command = true at 
the top of my config file, this command will be executed even if no 
additional command is added into the queue (tests with pre_commands are 
not selected).


That is the reason why I'd like to see this two-line change in the 
framework.


Still you are right that it's basically a cosmetic modification for 
simplifying the config file.


Dne 10.7.2009 17:27, Michael Goldish napsal(a):

- Lukáš Doktorldok...@redhat.com  wrote:


Hi,

the way kvm_autotest currently handles pre_command/post_command
doesn't allow specifying more than one command. BASH can handle this
itself with a small change in the framework, as shown in the
attachment.


Why do you say the framework doesn't allow chaining pre_commands?
What's wrong with:
pre_command = command0
pre_command +=   command1
pre_command +=   command2


In the .cfg file we just change the variable from:
   pre_command = command
to:
   pre_command += command &&
which produces:
   $(command && true)

The framework adds the last command true, which closes the whole command. This
way we can chain infinite pre/post_commands without losing the return
value (if something goes wrong, the other commands are not executed and the return
value is preserved).

example:
in cfg:
   pre_command += echo A &&
   pre_command += echo B &&
   pre_command += echo C &&
framework params.get(pre_command):
   echo A && echo B && echo C &&
framework process_command executes on the host:
   echo A && echo B && echo C && true

regards,
Lukáš Doktor


In any case, the proposed solution does not allow the user to use
pre_command in the most straightforward way:
pre_command = command
because that would get translated into:
command true
So the user must append && to the command, which makes little sense.

There could be other solutions, like

1. Specifying pre_command = true at the top of the config file, and
then using:
pre_command +=  && command0
pre_command +=  && command1

pre_command = command will also work fine in this case.

2. Removing the final && from the command, if any, so that if the
user enters:
pre_command = command0 &&
pre_command += command1 &&
the framework will run:
command0 && command1 instead of command0 && command1 &&.

In any case, can you provide an example where it's impossible or
difficult to do command chaining without changing the framework?
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [KVM_AUTOTEST] add kvm hugepage variant and test

2009-07-10 Thread Lukáš Doktor
I'm sorry, this patch has a bug: the hugepage variant doesn't allocate enough 
memory with stress_boot (stress_boot uses a different method to define VMs).

The fixed patch is attached.

Dne 9.7.2009 11:24, Lukáš Doktor napsal(a):

This patch adds the kvm_hugepage variant. It prepares the host system and
starts the vm with the -mem-path option. It does not clean up after itself,
because it's impossible to unmount and free hugepages before all guests are
destroyed.

The autotest.libhugetlbfs test is also added.

I need to ask you what to do about the change of the qemu parameter. Newest
versions are using -mempath instead of -mem-path. This is impossible to
fix using the current config file. I can see 2 solutions:
1) direct change in kvm_vm.py (parse output and try another param)
2) detect qemu capabilities outside and create an additional layer (better
for future occurrences)

Tested by:ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5


diff -Narup orig/client/tests/kvm/autotest_control/libhugetlbfs.control new/client/tests/kvm/autotest_control/libhugetlbfs.control
--- orig/client/tests/kvm/autotest_control/libhugetlbfs.control 1970-01-01 01:00:00.0 +0100
+++ new/client/tests/kvm/autotest_control/libhugetlbfs.control  2009-07-08 13:18:07.0 +0200
@@ -0,0 +1,13 @@
+AUTHOR = 'aga...@google.com (Ashwin Ganti)'
+TIME = 'MEDIUM'
+NAME = 'libhugetlbfs test'
+TEST_TYPE = 'client'
+TEST_CLASS = 'Kernel'
+TEST_CATEGORY = 'Functional'
+
+DOC = '''
+Tests basic huge pages functionality when using libhugetlbfs. For more info
+about libhugetlbfs see http://libhugetlbfs.ozlabs.org/
+'''
+
+job.run_test('libhugetlbfs', dir='/mnt')
diff -Narup orig/client/tests/kvm/kvm_tests.cfg.sample new/client/tests/kvm/kvm_tests.cfg.sample
--- orig/client/tests/kvm/kvm_tests.cfg.sample  2009-07-08 13:18:07.0 +0200
+++ new/client/tests/kvm/kvm_tests.cfg.sample   2009-07-09 10:15:58.0 +0200
@@ -79,6 +79,9 @@ variants:
 - bonnie:
 test_name = bonnie
 test_control_file = bonnie.control
+- libhugetlbfs:
+test_name = libhugetlbfs
+test_control_file = libhugetlbfs.control
 
 - linux_s3:  install setup
 type = linux_s3
@@ -546,6 +549,12 @@ variants:
 only default
 image_format = raw
 
+variants:
+- @kvm_smallpages:
+- kvm_hugepages:
+pre_command = /bin/bash scripts/hugepage.sh /mnt/hugepage
+extra_params += " -mem-path /mnt/hugepage"
+
 
 variants:
 - @basic:
@@ -559,6 +568,7 @@ variants:
 only Fedora.8.32
 only install setup boot shutdown
 only rtl8139
+only kvm_smallpages
 - @sample1:
 only qcow2
 only ide
diff -Narup orig/client/tests/kvm/kvm_vm.py new/client/tests/kvm/kvm_vm.py
--- orig/client/tests/kvm/kvm_vm.py 2009-07-08 13:18:07.0 +0200
+++ new/client/tests/kvm/kvm_vm.py  2009-07-09 10:05:19.0 +0200
@@ -400,6 +400,13 @@ class VM:
 self.destroy()
 return False
 
+if output:
+logging.debug("qemu produced some output:\n%s", output)
+if "alloc_mem_area" in output:
+logging.error("Could not allocate hugepage memory"
+  " -- qemu command:\n%s", qemu_command)
+return False
+
 logging.debug("VM appears to be alive with PID %d", self.pid)
 return True
 
diff -Narup orig/client/tests/kvm/scripts/hugepage.sh new/client/tests/kvm/scripts/hugepage.sh
--- orig/client/tests/kvm/scripts/hugepage.sh   1970-01-01 01:00:00.0 +0100
+++ new/client/tests/kvm/scripts/hugepage.sh    2009-07-09 09:47:14.0 +0200
@@ -0,0 +1,38 @@
+#!/bin/bash
+# Allocates enough hugepages for $1 memory and mounts hugetlbfs to $2.
+if [ $# -ne 1 ]; then
+   echo USAGE: $0 mem_path
+   exit 1
+fi
+
+Hugepagesize=$(grep Hugepagesize /proc/meminfo | cut -d':'  -f 2 | \
+xargs | cut -d' ' -f1)
+VMS=$(expr $(echo $KVM_TEST_vms | grep -c ' ') + 1)
+if [ $KVM_TEST_max_vms ] && [ $VMS -lt $KVM_TEST_max_vms ]; then
+VMS=$KVM_TEST_max_vms
+fi
+VMSM=$(expr $(expr $VMS \* $KVM_TEST_mem) + $(expr $VMS \* 64 ))
+TARGET=$(expr $VMSM \* 1024 \/ $Hugepagesize)
+
+NR=$(cat /proc/sys/vm/nr_hugepages)
+while [ $NR -ne $TARGET ]; do
+   NR_=$NR; echo $TARGET > /proc/sys/vm/nr_hugepages
+   sleep 5s
+   NR=$(cat /proc/sys/vm/nr_hugepages)
+   if [ $NR -eq $NR_ ] ; then
+   echo "Can not allocate $TARGET hugepages"
+   exit 2
+   fi
+done
+
+if [ ! "$(mount | grep /mnt/hugepage | grep hugetlbfs)" ]; then
+   mkdir -p $1
+   mount -t hugetlbfs none $1 || \
+   (echo "Can not mount hugetlbfs filesystem to $1"; exit 3)
+else
+   echo "hugetlbfs filesystem already mounted"
+fi


[KVM_AUTOTEST] add kvm hugepage variant

2009-07-10 Thread Lukáš Doktor

After discussion I split the patches.

This patch adds the kvm_hugepage variant. It prepares the host system and
starts the VM with the -mem-path option. It does not clean up after
itself, because it is impossible to unmount and free the hugepages before
all guests are destroyed.


I need to ask you what to do about the change of the qemu parameter. The
newest versions use -mempath instead of -mem-path. This is impossible to
fix using the current config file. I can see 2 solutions:

1) direct change in kvm_vm.py (parse the output and try another param)
2) detect qemu capabilities outside and create an additional layer
(better for future occurrences)
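
For what it's worth, solution 2 could be as simple as probing the help
output before building the command line. A minimal sketch (assuming the
flag shows up in "qemu -help"; this snippet is illustrative, not part of
the patch):

    # Pick whichever hugepage flag this qemu binary understands.
    if qemu -help 2>&1 | grep -q -- '-mempath'; then
        MEM_PATH_FLAG="-mempath"
    else
        MEM_PATH_FLAG="-mem-path"
    fi
    # e.g. extra_params += " $MEM_PATH_FLAG /mnt/hugepage"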


On 9.7.2009 11:24, Lukáš Doktor wrote:

This patch adds the kvm_hugepage variant. It prepares the host system and
starts the VM with the -mem-path option. It does not clean up after
itself, because it is impossible to unmount and free the hugepages before
all guests are destroyed.

The autotest.libhugetlbfs test is also added.

I need to ask you what to do about the change of the qemu parameter. The
newest versions use -mempath instead of -mem-path. This is impossible to
fix using the current config file. I can see 2 solutions:
1) direct change in kvm_vm.py (parse the output and try another param)
2) detect qemu capabilities outside and create an additional layer
(better for future occurrences)

Tested by: ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5


diff -Narup orig/client/tests/kvm/kvm_tests.cfg.sample 
new/client/tests/kvm/kvm_tests.cfg.sample
--- orig/client/tests/kvm/kvm_tests.cfg.sample  2009-07-08 13:18:07.0 
+0200
+++ new/client/tests/kvm/kvm_tests.cfg.sample   2009-07-09 10:15:58.0 
+0200
@@ -546,6 +549,12 @@ variants:
         only default
         image_format = raw
 
+variants:
+    - @kvm_smallpages:
+    - kvm_hugepages:
+        pre_command = /bin/bash scripts/hugepage.sh /mnt/hugepage
+        extra_params += " -mem-path /mnt/hugepage"
+
 
 variants:
     - @basic:
@@ -559,6 +568,7 @@ variants:
         only Fedora.8.32
         only install setup boot shutdown
         only rtl8139
+        only kvm_smallpages
     - @sample1:
         only qcow2
         only ide
diff -Narup orig/client/tests/kvm/kvm_vm.py new/client/tests/kvm/kvm_vm.py
--- orig/client/tests/kvm/kvm_vm.py 2009-07-08 13:18:07.0 +0200
+++ new/client/tests/kvm/kvm_vm.py  2009-07-09 10:05:19.0 +0200
@@ -400,6 +400,13 @@ class VM:
             self.destroy()
             return False
 
+        if output:
+            logging.debug("qemu produced some output:\n%s", output)
+            if "alloc_mem_area" in output:
+                logging.error("Could not allocate hugepage memory"
+                              " -- qemu command:\n%s", qemu_command)
+                return False
+
         logging.debug("VM appears to be alive with PID %d", self.pid)
         return True

diff -Narup orig/client/tests/kvm/scripts/hugepage.sh 
new/client/tests/kvm/scripts/hugepage.sh
--- orig/client/tests/kvm/scripts/hugepage.sh   1970-01-01 01:00:00.0 
+0100
+++ new/client/tests/kvm/scripts/hugepage.sh2009-07-09 09:47:14.0 
+0200
@@ -0,0 +1,34 @@
+#!/bin/bash
+# Allocates enough hugepages and mounts hugetlbfs to $1.
+if [ $# -ne 1 ]; then
+   echo "USAGE: $0 mem_path"
+   exit 1
+fi
+
+Hugepagesize=$(grep Hugepagesize /proc/meminfo | cut -d':'  -f 2 | \
+xargs | cut -d' ' -f1)
+VMS=$(echo $KVM_TEST_vms | wc -w)
+if [ $KVM_TEST_max_vms ] && [ $VMS -lt $KVM_TEST_max_vms ]; then
+VMS=$KVM_TEST_max_vms
+fi
+VMSM=$(expr $(expr $VMS \* $KVM_TEST_mem) + $(expr $VMS \* 64 ))
+TARGET=$(expr $VMSM \* 1024 \/ $Hugepagesize)
+
+NR=$(cat /proc/sys/vm/nr_hugepages)
+while [ $NR -ne $TARGET ]; do
+   NR_=$NR; echo $TARGET > /proc/sys/vm/nr_hugepages
+   sleep 5s
+   NR=$(cat /proc/sys/vm/nr_hugepages)
+   if [ $NR -eq $NR_ ] ; then
+   echo "Can not allocate $TARGET hugepages"
+   exit 2
+   fi
+done
+
+if ! mount | grep "$1" | grep -q hugetlbfs; then
+   mkdir -p $1
+   mount -t hugetlbfs none $1 || \
+   { echo "Can not mount hugetlbfs filesystem to $1"; exit 3; }
+else
+   echo "hugetlbfs filesystem already mounted"
+fi


[KVM_AUTOTEST] add autotest.libhugetlbfs test

2009-07-10 Thread Lukáš Doktor

After discussion I split the patches.

This patch adds the autotest.libhugetlbfs test, which tests hugepage
support inside a KVM guest.


Tested by: ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5

On 9.7.2009 11:24, Lukáš Doktor wrote:

This patch adds the kvm_hugepage variant. It prepares the host system and
starts the VM with the -mem-path option. It does not clean up after
itself, because it is impossible to unmount and free the hugepages before
all guests are destroyed.

The autotest.libhugetlbfs test is also added.

I need to ask you what to do about the change of the qemu parameter. The
newest versions use -mempath instead of -mem-path. This is impossible to
fix using the current config file. I can see 2 solutions:
1) direct change in kvm_vm.py (parse the output and try another param)
2) detect qemu capabilities outside and create an additional layer
(better for future occurrences)

Tested by: ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5



diff -Narup orig/client/tests/kvm/autotest_control/libhugetlbfs.control 
new/client/tests/kvm/autotest_control/libhugetlbfs.control
--- orig/client/tests/kvm/autotest_control/libhugetlbfs.control 1970-01-01 
01:00:00.0 +0100
+++ new/client/tests/kvm/autotest_control/libhugetlbfs.control  2009-07-08 
13:18:07.0 +0200
@@ -0,0 +1,13 @@
+AUTHOR = 'aga...@google.com (Ashwin Ganti)'
+TIME = 'MEDIUM'
+NAME = 'libhugetlbfs test'
+TEST_TYPE = 'client'
+TEST_CLASS = 'Kernel'
+TEST_CATEGORY = 'Functional'
+
+DOC = '''
+Tests basic huge pages functionality when using libhugetlbfs. For more info
+about libhugetlbfs see http://libhugetlbfs.ozlabs.org/
+'''
+
+job.run_test('libhugetlbfs', dir='/mnt')
diff -Narup orig/client/tests/kvm/kvm_tests.cfg.sample 
new/client/tests/kvm/kvm_tests.cfg.sample
--- orig/client/tests/kvm/kvm_tests.cfg.sample  2009-07-08 13:18:07.0 
+0200
+++ new/client/tests/kvm/kvm_tests.cfg.sample   2009-07-09 10:15:58.0 
+0200
@@ -79,6 +79,9 @@ variants:
     - bonnie:
         test_name = bonnie
         test_control_file = bonnie.control
+    - libhugetlbfs:
+        test_name = libhugetlbfs
+        test_control_file = libhugetlbfs.control
 
     - linux_s3: install setup
         type = linux_s3


[KVM_AUTOTEST][RFC] pre_command chaining

2009-07-10 Thread Lukáš Doktor

Hi,

The way kvm_autotest currently handles pre_command/post_command does not
allow specifying more than one command. Bash can handle this itself with
a small change in the framework, as shown in the attachment.


In the .cfg file we just change the variable from:
 pre_command = command
to:
 pre_command += command && 
which produces:
 $(command && true)

The framework appends a final command, true, which closes the whole
chain. This way we can chain any number of pre/post_commands without
losing the return value (if something goes wrong, the remaining commands
are not executed and the return value is preserved).


example:
in cfg:
 pre_command += echo A && 
 pre_command += echo B && 
 pre_command += echo C && 
framework params.get(pre_command):
 echo A && echo B && echo C && 
framework process_command executes on the host:
 echo A && echo B && echo C && true
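
To see why the trailing true keeps failures visible, a quick shell check
(illustrative only):

    # A failing command aborts the chain and its exit status is preserved:
    sh -c 'echo A && false && echo C && true'; echo "exit: $?"
    # prints "A", then "exit: 1" -- echo C and true never run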

regards,
Lukáš Doktor
diff -Narup kvm-autotest/client/tests/kvm/kvm_preprocessing.py 
kvm-autotest-new/client/tests/kvm/kvm_preprocessing.py
--- kvm-autotest/client/tests/kvm/kvm_preprocessing.py  2009-07-08 
08:31:01.492284501 +0200
+++ kvm-autotest-new/client/tests/kvm/kvm_preprocessing.py  2009-07-10 
13:18:35.407285172 +0200
@@ -229,7 +229,8 @@ def preprocess(test, params, env):
 
     #execute any pre_commands
     if params.get("pre_command"):
-        process_command(test, params, env, params.get("pre_command"),
+        process_command(test, params, env,
+                        (params.get("pre_command") + " && true"),
                         params.get("pre_command_timeout"),
                         params.get("pre_command_noncritical"))
 
@@ -287,7 +288,8 @@ def postprocess(test, params, env):
 
     #execute any post_commands
     if params.get("post_command"):
-        process_command(test, params, env, params.get("post_command"),
+        process_command(test, params, env,
+                        (params.get("post_command") + " && true"),
                         params.get("post_command_timeout"),
                         params.get("post_command_noncritical"))
 


[KVM_AUTOTEST] add kvm hugepage variant and test

2009-07-09 Thread Lukáš Doktor
This patch adds the kvm_hugepage variant. It prepares the host system and
starts the VM with the -mem-path option. It does not clean up after
itself, because it is impossible to unmount and free the hugepages before
all guests are destroyed.


The autotest.libhugetlbfs test is also added.

I need to ask you what to do about the change of the qemu parameter. The
newest versions use -mempath instead of -mem-path. This is impossible to
fix using the current config file. I can see 2 solutions:

1) direct change in kvm_vm.py (parse the output and try another param)
2) detect qemu capabilities outside and create an additional layer
(better for future occurrences)


Tested by: ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5
diff -Narup orig/client/tests/kvm/autotest_control/libhugetlbfs.control 
new/client/tests/kvm/autotest_control/libhugetlbfs.control
--- orig/client/tests/kvm/autotest_control/libhugetlbfs.control 1970-01-01 
01:00:00.0 +0100
+++ new/client/tests/kvm/autotest_control/libhugetlbfs.control  2009-07-08 
13:18:07.0 +0200
@@ -0,0 +1,13 @@
+AUTHOR = 'aga...@google.com (Ashwin Ganti)'
+TIME = 'MEDIUM'
+NAME = 'libhugetlbfs test'
+TEST_TYPE = 'client'
+TEST_CLASS = 'Kernel'
+TEST_CATEGORY = 'Functional'
+
+DOC = '''
+Tests basic huge pages functionality when using libhugetlbfs. For more info
+about libhugetlbfs see http://libhugetlbfs.ozlabs.org/
+'''
+
+job.run_test('libhugetlbfs', dir='/mnt')
diff -Narup orig/client/tests/kvm/kvm_tests.cfg.sample 
new/client/tests/kvm/kvm_tests.cfg.sample
--- orig/client/tests/kvm/kvm_tests.cfg.sample  2009-07-08 13:18:07.0 
+0200
+++ new/client/tests/kvm/kvm_tests.cfg.sample   2009-07-09 10:15:58.0 
+0200
@@ -79,6 +79,9 @@ variants:
     - bonnie:
         test_name = bonnie
         test_control_file = bonnie.control
+    - libhugetlbfs:
+        test_name = libhugetlbfs
+        test_control_file = libhugetlbfs.control
 
     - linux_s3: install setup
         type = linux_s3
@@ -546,6 +549,12 @@ variants:
         only default
         image_format = raw
 
+variants:
+    - @kvm_smallpages:
+    - kvm_hugepages:
+        pre_command = /bin/bash scripts/hugepage.sh /mnt/hugepage
+        extra_params += " -mem-path /mnt/hugepage"
+
 
 variants:
     - @basic:
@@ -559,6 +568,7 @@ variants:
         only Fedora.8.32
         only install setup boot shutdown
         only rtl8139
+        only kvm_smallpages
     - @sample1:
         only qcow2
         only ide
diff -Narup orig/client/tests/kvm/kvm_vm.py new/client/tests/kvm/kvm_vm.py
--- orig/client/tests/kvm/kvm_vm.py 2009-07-08 13:18:07.0 +0200
+++ new/client/tests/kvm/kvm_vm.py  2009-07-09 10:05:19.0 +0200
@@ -400,6 +400,13 @@ class VM:
             self.destroy()
             return False
 
+        if output:
+            logging.debug("qemu produced some output:\n%s", output)
+            if "alloc_mem_area" in output:
+                logging.error("Could not allocate hugepage memory"
+                              " -- qemu command:\n%s", qemu_command)
+                return False
+
         logging.debug("VM appears to be alive with PID %d", self.pid)
         return True
 
diff -Narup orig/client/tests/kvm/scripts/hugepage.sh 
new/client/tests/kvm/scripts/hugepage.sh
--- orig/client/tests/kvm/scripts/hugepage.sh   1970-01-01 01:00:00.0 
+0100
+++ new/client/tests/kvm/scripts/hugepage.sh2009-07-09 09:47:14.0 
+0200
@@ -0,0 +1,38 @@
+#!/bin/bash
+# Allocates enough hugepages for all configured VMs and mounts hugetlbfs to $1.
+if [ $# -ne 1 ]; then
+   echo "USAGE: $0 mem_path"
+   exit 1
+fi
+
+Hugepagesize=$(grep Hugepagesize /proc/meminfo | cut -d':'  -f 2 | \
+xargs | cut -d' ' -f1)
+VMS=$(echo $KVM_TEST_vms | wc -w)
+VMSM=$(expr $(expr $VMS \* $KVM_TEST_mem) + $(expr $VMS \* 64 ))
+TARGET=$(expr $VMSM \* 1024 \/ $Hugepagesize)
+
+NR=$(cat /proc/sys/vm/nr_hugepages)
+while [ $NR -ne $TARGET ]; do
+   NR_=$NR; echo $TARGET > /proc/sys/vm/nr_hugepages
+   sleep 5s
+   NR=$(cat /proc/sys/vm/nr_hugepages)
+   if [ $NR -eq $NR_ ] ; then
+   echo "Can not allocate $TARGET hugepages"
+   exit 2
+   fi
+done
+
+if ! mount | grep "$1" | grep -q hugetlbfs; then
+   mkdir -p $1
+   mount -t hugetlbfs none $1 || \
+   { echo "Can not mount hugetlbfs filesystem to $1"; exit 3; }
+else
+   echo "hugetlbfs filesystem already mounted"
+fi


[KVM_AUTOTEST] set English environment

2009-07-09 Thread Lukáš Doktor

Set the English environment before test execution. This is critical
because we parse the output of commands, and that output is otherwise
localized!
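
A quick illustration of the problem (assuming the cs_CZ locale is
installed; the exact strings vary by system):

    # The same command under two locales - parsers expecting English break:
    LC_ALL=cs_CZ.UTF-8 date   # day and month names come out in Czech
    LC_ALL=en_US.UTF-8 date   # "Thu Jul  9 ..." - stable for parsing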


Tested by: ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5
--- orig/client/tests/kvm/control   2009-07-08 13:18:07.0 +0200
+++ new/client/tests/kvm/control2009-07-09 12:32:32.0 +0200
@@ -45,6 +45,8 @@ Each test is appropriately documented on
 
 import sys, os
 
+# set English environment
+os.environ['LANG'] = 'en_US.UTF-8'
 # enable modules import from current directory (tests/kvm)
 pwd = os.path.join(os.environ['AUTODIR'],'tests/kvm')
 sys.path.append(pwd)


Re: [KVM_AUTOTEST] add kvm hugepage variant and test

2009-07-09 Thread Lukáš Doktor

Hi Michael,

Actually, it is necessary. qemu-kvm only puts this message into the output
and continues booting the guest without hugepage support. Autotest then
runs all the tests, and later in the output there is no mention of this;
you would have to predict that it had happened and dig through the debug
output of each particular test to see whether qemu produced the message.
With this check, if qemu-kvm cannot allocate the hugepage memory, the test
fails, logs this information and continues with the next variant.


On 9.7.2009 14:30, Michael Goldish wrote:

I don't think you need to explicitly check for a memory allocation
failure in VM.create() (the "qemu produced some output ..." check).
VM.create() already makes sure the VM is started successfully, and
prints informative failure messages if there's any problem.

- Lukáš Doktor <ldok...@redhat.com> wrote:


This patch adds the kvm_hugepage variant. It prepares the host system and
starts the VM with the -mem-path option. It does not clean up after
itself, because it is impossible to unmount and free the hugepages before
all guests are destroyed.

The autotest.libhugetlbfs test is also added.

I need to ask you what to do about the change of the qemu parameter. The
newest versions use -mempath instead of -mem-path. This is impossible to
fix using the current config file. I can see 2 solutions:
1) direct change in kvm_vm.py (parse the output and try another param)
2) detect qemu capabilities outside and create an additional layer
(better for future occurrences)


I'll have to think about this a little before answering.


Tested by: ldok...@redhat.com on RHEL5.4 with kvm-83-72.el5
