Re: virt: New version Cartesian config

2013-05-16 Thread Jiri Zupka
Hi,
  I have sent an email about the new Cartesian config update a few days ago, but nobody has
responded to it.
Could you look at the new Cartesian config? Mainly Eduardo.

Have a nice day,
  Jiří Župka

- Original Message -
 Hi,
   the new version of the Cartesian config is on GitHub:
   https://github.com/autotest/virt-test/pull/335
 Please send me your comments and I'll try to get the code into good shape
 if there is a problem.
 I chose one big patch because there is almost no way to split the parser
 into parts.
 The unittest is in the same patch too, because it should be in one patch
 with the tested code.
 
 In the future I'll change the comments to Sphinx syntax.
 
 Regards,
   Jiří Župka
 
 - Original Message -
  Jiří, okay, got it and thanks.
  
  --
  Regards,
  Alex
  
  
  - Original Message -
  From: Jiri Zupka jzu...@redhat.com
  To: Alex Jia a...@redhat.com
  Cc: virt-test-de...@redhat.com, kvm@vger.kernel.org,
  kvm-autot...@redhat.com,
  l...@redhat.com, ldok...@redhat.com, ehabk...@redhat.com,
  pbonz...@redhat.com
  Sent: Tuesday, April 16, 2013 10:16:57 PM
  Subject: Re: [Virt-test-devel] [virt-test][PATCH 4/7] virt: Adds named
  variants to Cartesian config.
  
  Hi Alex,
    thanks again for the review. I see now what you meant; I thought you
  were referring to another thread of emails. I tried it again with
  https://github.com/autotest/virt-test/pull/255 and the demo example works.
  
  If you are interested in this feature, check the new version, which I'll
  send in the coming days. There will be some changes in syntax, and a lexer
  will be added for better filtering.
  
  regards,
Jiří Župka
  
  
  
  - Original Message -
   Hi Alex,
     I hope you are using the new version of the Cartesian config from GitHub:
     https://github.com/autotest/virt-test/pull/255.
   What you tested was an older RFC version of the Cartesian config. I'm
   preparing a new version based on communication with Eduardo and Pablo.
   
   If you aren't, please look at the documentation:
   https://github.com/autotest/virt-test/wiki/VirtTestDocumentation#wiki-id26
   It describes how the config works now.
   
   regards
 Jiří Župka
   
   - Original Message -
On 03/30/2013 01:14 AM, Jiří Župka wrote:
 variants name=tests:
    - wait:
         run = wait
         variants:
           - long:
                time = long_time
           - short: long
                time = short_time
    - test2:
         run = test1

 variants name=virt_system:
    - linux:
    - windows_XP:

 variants name=host_os:
    - linux:
         image = linux
    - windows_XP:
         image = windows

 tests>wait.short:
      shutdown = destroy

 only host_os>linux
Jiří, I pasted the above example into demo.cfg and ran it through the
cartesian parser, and I got the error __main__.ParserError: 'variants'
is not allowed inside a conditional block
(libvirt/tests/cfg/demo.cfg:4). Did I do anything wrong? Thanks.
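As a rough illustration of what the example above describes, here is a minimal, hypothetical Python model of named-variant expansion and an `only host_os>linux` filter (the names `expand` and `only` are invented for this sketch; the real parser in cartesian_config.py works quite differently):

```python
from itertools import product

# Simplified model of named-variant expansion; illustration only,
# not the actual cartesian_config.py implementation.
variants = {
    "tests": ["wait.long", "wait.short", "test2"],
    "virt_system": ["linux", "windows_XP"],
    "host_os": ["linux", "windows_XP"],
}

def expand(variants):
    """Yield one dict per combination, keyed by variant-set name."""
    names = sorted(variants)
    for combo in product(*(variants[n] for n in names)):
        yield dict(zip(names, combo))

def only(combos, name, value):
    """Model of an 'only name>value' filter: keep matching combos."""
    return [c for c in combos if c[name] == value]

combos = list(expand(variants))
filtered = only(combos, "host_os", "linux")
# 3 * 2 * 2 = 12 combinations before filtering, 6 after
print(len(combos), len(filtered))
```

The point of the sketch is only the counting: the filter prunes the Cartesian product rather than adding structure to it.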


   
  
  
 
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


virt: New version Cartesian config

2013-05-03 Thread Jiri Zupka
Hi,
  the new version of the Cartesian config is on GitHub:
https://github.com/autotest/virt-test/pull/335
Please send me your comments and I'll try to get the code into good shape
if there is a problem.
I chose one big patch because there is almost no way to split the parser
into parts.
The unittest is in the same patch too, because it should be in one patch
with the tested code.

In the future I'll change the comments to Sphinx syntax.

Regards,
  Jiří Župka

- Original Message -
 Jiří, okay, got it and thanks.
 
 --
 Regards,
 Alex
 
 
 - Original Message -
 From: Jiri Zupka jzu...@redhat.com
 To: Alex Jia a...@redhat.com
 Cc: virt-test-de...@redhat.com, kvm@vger.kernel.org, kvm-autot...@redhat.com,
 l...@redhat.com, ldok...@redhat.com, ehabk...@redhat.com, pbonz...@redhat.com
 Sent: Tuesday, April 16, 2013 10:16:57 PM
 Subject: Re: [Virt-test-devel] [virt-test][PATCH 4/7] virt: Adds named
 variants to Cartesian config.
 
 Hi Alex,
    thanks again for the review. I see now what you meant; I thought you
 were referring to another thread of emails. I tried it again with
 https://github.com/autotest/virt-test/pull/255 and the demo example works.
 
 If you are interested in this feature, check the new version, which I'll
 send in the coming days. There will be some changes in syntax, and a lexer
 will be added for better filtering.
 
 regards,
   Jiří Župka
 
 
 
 - Original Message -
  Hi Alex,
    I hope you are using the new version of the Cartesian config from GitHub:
    https://github.com/autotest/virt-test/pull/255.
  What you tested was an older RFC version of the Cartesian config. I'm
  preparing a new version based on communication with Eduardo and Pablo.
  
  If you aren't, please look at the documentation:
  https://github.com/autotest/virt-test/wiki/VirtTestDocumentation#wiki-id26
  It describes how the config works now.
  
  regards
Jiří Župka
  
  - Original Message -
   On 03/30/2013 01:14 AM, Jiří Župka wrote:
  variants name=tests:
     - wait:
          run = wait
          variants:
            - long:
                 time = long_time
            - short: long
                 time = short_time
     - test2:
          run = test1

  variants name=virt_system:
     - linux:
     - windows_XP:

  variants name=host_os:
     - linux:
          image = linux
     - windows_XP:
          image = windows

  tests>wait.short:
       shutdown = destroy

  only host_os>linux
   Jiří, I pasted the above example into demo.cfg and ran it through the
   cartesian parser, and I got the error __main__.ParserError: 'variants'
   is not allowed inside a conditional block
   (libvirt/tests/cfg/demo.cfg:4). Did I do anything wrong? Thanks.
   
   
  
 
 


Re: [Virt-test-devel] [virt-test][PATCH 4/7] virt: Adds named variants to Cartesian config.

2013-04-16 Thread Jiri Zupka
Hi Alex,
  I hope you are using the new version of the Cartesian config from GitHub:
https://github.com/autotest/virt-test/pull/255.
What you tested was an older RFC version of the Cartesian config. I'm
preparing a new version based on communication with Eduardo and Pablo.

If you aren't, please look at the documentation:
https://github.com/autotest/virt-test/wiki/VirtTestDocumentation#wiki-id26
It describes how the config works now.

regards
  Jiří Župka

- Original Message -
 On 03/30/2013 01:14 AM, Jiří Župka wrote:
  variants name=tests:
     - wait:
          run = wait
          variants:
            - long:
                 time = long_time
            - short: long
                 time = short_time
     - test2:
          run = test1

  variants name=virt_system:
     - linux:
     - windows_XP:

  variants name=host_os:
     - linux:
          image = linux
     - windows_XP:
          image = windows

  tests>wait.short:
       shutdown = destroy

  only host_os>linux
 Jiří, I pasted the above example into demo.cfg and ran it through the
 cartesian parser, and I got the error __main__.ParserError: 'variants'
 is not allowed inside a conditional block
 (libvirt/tests/cfg/demo.cfg:4). Did I do anything wrong? Thanks.
 
 


Re: [Virt-test-devel] [virt-test][PATCH 4/7] virt: Adds named variants to Cartesian config.

2013-04-16 Thread Jiri Zupka
Hi Alex,
  thanks again for the review. I see now what you meant; I thought you
were referring to another thread of emails. I tried it again with
https://github.com/autotest/virt-test/pull/255 and the demo example works.

If you are interested in this feature, check the new version, which I'll
send in the coming days. There will be some changes in syntax, and a lexer
will be added for better filtering.

regards,
  Jiří Župka



- Original Message -
 Hi Alex,
    I hope you are using the new version of the Cartesian config from GitHub:
    https://github.com/autotest/virt-test/pull/255.
  What you tested was an older RFC version of the Cartesian config. I'm
  preparing a new version based on communication with Eduardo and Pablo.
  
  If you aren't, please look at the documentation:
  https://github.com/autotest/virt-test/wiki/VirtTestDocumentation#wiki-id26
  It describes how the config works now.
 
 regards
   Jiří Župka
 
 - Original Message -
  On 03/30/2013 01:14 AM, Jiří Župka wrote:
   variants name=tests:
      - wait:
           run = wait
           variants:
             - long:
                  time = long_time
             - short: long
                  time = short_time
      - test2:
           run = test1

   variants name=virt_system:
      - linux:
      - windows_XP:

   variants name=host_os:
      - linux:
           image = linux
      - windows_XP:
           image = windows

   tests>wait.short:
        shutdown = destroy

   only host_os>linux
  Jiří, I pasted the above example into demo.cfg and ran it through the
  cartesian parser, and I got the error __main__.ParserError: 'variants'
  is not allowed inside a conditional block
  (libvirt/tests/cfg/demo.cfg:4). Did I do anything wrong? Thanks.
  
  
 


Re: [virt-test][PATCH 4/7] virt: Adds named variants to Cartesian config.

2013-04-04 Thread Jiri Zupka


- Original Message -
 Sorry for not reading the commit message before my previous reply. Now I
 see the origin of the ">" syntax.
 
 On Fri, Mar 29, 2013 at 06:14:07PM +0100, Jiří Župka wrote:
 [...]
  
  For filtering of named variants the character ">" is used, because there
  was a problem with a conflict with "=" in the expression key = value. The
  char ">" could be changed to something better, but it should be different
  from "=" for speed optimization.
 
 IMO we need really strong reasons to use anything different from "=",
 because it is the most obvious choice we have. Using ">" doesn't make
 any sense to me.


With ">" there is no need to resolve a conflict with "=" or ":" in the code;
parsing is straightforward. The chars "=" and ":" were among my first choices
too, but they bring conflicts in parsing. It could be changed, though, since
there were more voices against it. Users could prefer a better syntax over a
little improvement in speed.
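To illustrate the conflict being discussed, here is a hypothetical sketch (not the actual patch code): with ">" as the filter operator, a single regex decides a line's type, whereas if "=" were also the filter operator, "key = value" assignments and "name = variant" filters would be indistinguishable without extra context.

```python
import re

# Hypothetical line classifier for the two line types discussed above.
# With ">" the decision is unambiguous; "=" alone could not distinguish
# an assignment from a variant filter.
FILTER_RE = re.compile(r"^(\w+)\s*>\s*([\w.]+)$")
ASSIGN_RE = re.compile(r"^(\w+)\s*=\s*(.*)$")

def classify(line):
    line = line.strip()
    m = FILTER_RE.match(line)
    if m:
        return ("filter", m.group(1), m.group(2))
    m = ASSIGN_RE.match(line)
    if m:
        return ("assign", m.group(1), m.group(2))
    return ("other", line, None)

print(classify("host_os>linux"))     # ('filter', 'host_os', 'linux')
print(classify("time = long_time"))  # ('assign', 'time', 'long_time')
```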


 
 What kind of speed optimization are you talking about, exactly? We need
 to keep algorithm time/space complexity under control, but making two or
 three additional regexp matches per line won't make the code much
 slower, will it?


Sometimes yes (https://github.com/autotest/virt-test/pull/229).
But I don't think that is the case here. I'll try to think about it some more.


 Also: whatever symbol we use, I would really like to make it
 whitespace-insensitive.
 
 I mean: if "foo>x" or "foo=x" works, "foo > x" or "foo = x" should work,
 too. I am absolutely sure people _will_ eventually try to put whitespace
 around the operator symbol, and this shouldn't cause unpleasant
 surprises.

Thanks a lot for catching this bug. It is only a bug, not intentional;
I forgot one strip(). I will fix it after we finish the discussion
about named variants.
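The missing strip() can be shown with a minimal sketch (hypothetical code, not the actual patch): splitting on ">" without stripping leaves spaces in the names, so a spaced filter fails to match its unspaced form.

```python
# Demonstration of the whitespace bug discussed above (toy code only).
def parse_filter_buggy(s):
    # Without strip(): "host_os > linux" -> ("host_os ", " linux")
    name, variant = s.split(">")
    return (name, variant)

def parse_filter_fixed(s):
    # With strip(): both spaced and unspaced input parse identically.
    name, variant = s.split(">")
    return (name.strip(), variant.strip())

assert parse_filter_buggy("host_os > linux") != ("host_os", "linux")
assert parse_filter_fixed("host_os > linux") == ("host_os", "linux")
```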

 
 
  
  Additionally, a named variant adds its key to the final dictionary; in the
  example it is (virt_system = linux). This should reduce the size of config
  files. Keys defined in the config and keys defined by named variants share
  the same namespace.
 
 This is the part I like the most. Thanks!
 
 
  
  Signed-off-by: Jiří Župka jzu...@redhat.com
  ---
   virttest/cartesian_config.py | 138
   ++-
   1 file changed, 124 insertions(+), 14 deletions(-)
  
  diff --git a/virttest/cartesian_config.py b/virttest/cartesian_config.py
  index ef91051..04ed2b5 100755
  --- a/virttest/cartesian_config.py
  +++ b/virttest/cartesian_config.py
  @@ -145,6 +145,74 @@ class MissingIncludeError:
   num_failed_cases = 5
   
   
   +class Label(object):
   +    __slots__ = ["name", "var_name", "long_name", "hash_val", "hash_var"]
   +
   +    def __init__(self, name, next_name=None):
   +        if next_name is None:
   +            self.name = name
   +            self.var_name = None
   +        else:
   +            self.name = next_name
   +            self.var_name = name
   +
   +        if self.var_name is None:
   +            self.long_name = "%s" % (self.name)
   +        else:
   +            self.long_name = "%s>%s" % (self.var_name, self.name)
   +
   +        self.hash_val = self.hash_name()
   +        self.hash_var = None
   +        if self.var_name:
   +            self.hash_var = self.hash_variant()
   +
   +
   +    def __str__(self):
   +        return self.long_name
   +
   +
   +    def __repr__(self):
   +        return self.long_name
   +
   +
   +    def __eq__(self, o):
   +        """
   +        The comparison is asymmetric due to optimization.
   +        """
   +        if o.var_name:
   +            if self.long_name == o.long_name:
   +                return True
   +        else:
   +            if self.name == o.name:
   +                return True
   +        return False
   +
   +
   +    def __ne__(self, o):
   +        """
   +        The comparison is asymmetric due to optimization.
   +        """
   +        if o.var_name:
   +            if self.long_name != o.long_name:
   +                return True
   +        else:
   +            if self.name != o.name:
   +                return True
   +        return False
   +
   +
   +    def __hash__(self):
   +        return self.hash_val
   +
   +
   +    def hash_name(self):
   +        return sum([i + 1 * ord(x) for i, x in enumerate(self.name)])
   +
   +
   +    def hash_variant(self):
   +        return sum([i + 1 * ord(x) for i, x in enumerate(str(self))])
  +
  +
    class Node(object):
        __slots__ = ["name", "dep", "content", "children", "labels",
                     "append_to_shortname", "failed_cases", "default"]
   @@ -212,18 +280,19 @@ class Filter(object):
        def __init__(self, s):
            self.filter = []
            for char in s:
   -            if not (char.isalnum() or char.isspace() or char in ".,_-"):
   +            if not (char.isalnum() or char.isspace() or char in ".,_->"):
                    raise ParserError("Illegal characters in filter")
            for word in s.replace(",", " ").split():    # OR
                word = [block.split(".") for block in word.split("..")]  # AND
   -            for word in s.replace(",", 
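The asymmetric `__eq__` in the Label class above can be illustrated with a standalone toy version (a sketch, not the patched class itself): a label without a variant name ("wait") matches any label of that short name, while a fully qualified label ("tests>wait") must match exactly.

```python
# Toy reimplementation of the asymmetric comparison idea from Label.__eq__.
class Label(object):
    def __init__(self, name, next_name=None):
        if next_name is None:
            self.name, self.var_name = name, None
        else:
            self.name, self.var_name = next_name, name
        self.long_name = (self.name if self.var_name is None
                          else "%s>%s" % (self.var_name, self.name))

    def __eq__(self, o):
        # Asymmetric: the right-hand operand decides how strict the match is.
        if o.var_name:
            return self.long_name == o.long_name
        return self.name == o.name

assert Label("tests", "wait") == Label("wait")           # loose match
assert Label("tests", "wait") == Label("tests", "wait")  # exact match
assert not (Label("tests", "wait") == Label("other", "wait"))
```

This is why the thread calls the comparison an optimization: the cheap short-name match is used whenever the filter side carries no variant name.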

Re: [PATCH 1/4] [kvm-autotest] cgroup-kvm: add_*_drive / rm_drive

2011-10-10 Thread Jiri Zupka
This is a useful function. It could be moved into the kvm utils.

- Original Message -
 * functions for adding and removing a drive to/from a vm, using a host file
 or a host scsi_debug device.
 
 Signed-off-by: Lukas Doktor ldok...@redhat.com
 ---
  client/tests/kvm/tests/cgroup.py |  125
  -
  1 files changed, 108 insertions(+), 17 deletions(-)
 
 diff --git a/client/tests/kvm/tests/cgroup.py
 b/client/tests/kvm/tests/cgroup.py
 index b9a10ea..d6418b5 100644
 --- a/client/tests/kvm/tests/cgroup.py
 +++ b/client/tests/kvm/tests/cgroup.py
 @@ -17,6 +17,108 @@ def run_cgroup(test, params, env):
  vms = None
  tests = None
  
  +    # Func
  +    def get_device_driver():
  +        """
  +        Discovers the used block device driver {ide, scsi, virtio_blk}
  +        @return: Used block device driver {ide, scsi, virtio}
  +        """
  +        if test.tagged_testname.count('virtio_blk'):
  +            return "virtio"
  +        elif test.tagged_testname.count('scsi'):
  +            return "scsi"
  +        else:
  +            return "ide"
  +
  +
  +    def add_file_drive(vm, driver=get_device_driver(), host_file=None):
  +        """
  +        Hot-add a drive based on a file to a vm
  +        @param vm: Desired VM
  +        @param driver: which driver should be used (default: same as in test)
  +        @param host_file: Which file on host is the image (default: create new)
  +        @return: Tuple(ret_file, device)
  +                 ret_file: created file handler (None if not created)
  +                 device: PCI id of the virtual disk
  +        """
  +        if not host_file:
  +            host_file = tempfile.NamedTemporaryFile(prefix="cgroup-disk-",
  +                                                    suffix=".iso")
  +            utils.system("dd if=/dev/zero of=%s bs=1M count=8 >/dev/null"
  +                         % (host_file.name))
  +            ret_file = host_file
  +        else:
  +            ret_file = None
  +
  +        out = vm.monitor.cmd("pci_add auto storage file=%s,if=%s,snapshot=off,"
  +                             "cache=off" % (host_file.name, driver))
  +        dev = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), function \d+',
  +                        out)
  +        if not dev:
  +            raise error.TestFail("Can't add device(%s, %s, %s): %s"
  +                                 % (vm, host_file.name, driver, out))
  +        device = "%s:%s:%s" % dev.groups()
  +        return (ret_file, device)
  +
  +
  +    def add_scsi_drive(vm, driver=get_device_driver(), host_file=None):
  +        """
  +        Hot-add a drive based on a scsi_debug device to a vm
  +        @param vm: Desired VM
  +        @param driver: which driver should be used (default: same as in test)
  +        @param host_file: Which dev on host is the image (default: create new)
  +        @return: Tuple(ret_file, device)
  +                 ret_file: string of the created dev (None if not created)
  +                 device: PCI id of the virtual disk
  +        """
  +        if not host_file:
  +            if utils.system_output("lsmod | grep scsi_debug -c") == "0":
  +                utils.system("modprobe scsi_debug dev_size_mb=8 add_host=0")
  +            utils.system("echo 1 > /sys/bus/pseudo/drivers/scsi_debug/add_host")
  +            host_file = utils.system_output("ls /dev/sd* | tail -n 1")
  +            # Enable idling in scsi_debug drive
  +            utils.system("echo 1 > /sys/block/%s/queue/rotational" % host_file)
  +            ret_file = host_file
  +        else:
  +            # Don't remove this device during cleanup
  +            # Reenable idling in scsi_debug drive (in case it's not)
  +            utils.system("echo 1 > /sys/block/%s/queue/rotational" % host_file)
  +            ret_file = None
  +
  +        out = vm.monitor.cmd("pci_add auto storage file=%s,if=%s,snapshot=off,"
  +                             "cache=off" % (host_file, driver))
  +        dev = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), function \d+',
  +                        out)
  +        if not dev:
  +            raise error.TestFail("Can't add device(%s, %s, %s): %s"
  +                                 % (vm, host_file, driver, out))
  +        device = "%s:%s:%s" % dev.groups()
  +        return (ret_file, device)
  +
  +
  +    def rm_drive(vm, host_file, device):
  +        """
  +        Remove drive from vm and device on disk
  +        ! beware to remove scsi devices in reverse order !
  +        """
  +        vm.monitor.cmd("pci_del %s" % device)
  +
  +        if isinstance(host_file, file):     # file
  +            host_file.close()
  +        elif isinstance(host_file, str):    # scsi device
  +            utils.system("echo -1 > /sys/bus/pseudo/drivers/scsi_debug/add_host")
  +        else:                               # custom file, do nothing
  +            pass
  +
  +    def get_all_pids(ppid):
  +        """
  +        Get all PIDs of children/threads of parent ppid
  +        param ppid: parent PID
  +        return: list of PIDs 
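The quoted patch truncates inside `get_all_pids`. A sketch of the recursive walk it describes, parameterized by a `children()` lookup so it can be shown without a live `/proc` (an assumption of this sketch; on Linux, `children()` could read `/proc/<pid>/task/<tid>/children` or parse `ps --ppid` output):

```python
# Hypothetical sketch of gathering all descendant PIDs of a parent PID.
def get_all_pids(ppid, children):
    """Return all descendant PIDs of ppid, depth-first."""
    pids = []
    for pid in children(ppid):
        pids.append(pid)
        pids.extend(get_all_pids(pid, children))
    return pids

# Toy process tree: 1 -> (2, 3), 2 -> (4,)
tree = {1: [2, 3], 2: [4], 3: [], 4: []}
print(get_all_pids(1, lambda p: tree.get(p, [])))  # [2, 4, 3]
```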

Re: [PATCH 1/2] cgroup: cgroup_common.py bugfixes and modifications

2011-09-23 Thread Jiri Zupka
Acked-by: Jiří Župka jzu...@redhat.com

- Original Message -
 [FIX] incorrect prop/dir variable usage
 [MOD] Use __del__() instead of cleanup - simplifies the code with a
 small drawback (failures can't be handled; anyway, they are not
 critical and were never handled before...)
 
 Signed-off-by: Lukas Doktor ldok...@redhat.com
 ---
  client/tests/cgroup/cgroup_common.py |   41
  +-
  1 files changed, 35 insertions(+), 6 deletions(-)
 
 diff --git a/client/tests/cgroup/cgroup_common.py
 b/client/tests/cgroup/cgroup_common.py
 index 836a23e..2a95c76 100755
 --- a/client/tests/cgroup/cgroup_common.py
 +++ b/client/tests/cgroup/cgroup_common.py
 @@ -25,8 +25,20 @@ class Cgroup(object):
  self.module = module
  self._client = _client
  self.root = None
 +self.cgroups = []
  
  
  +    def __del__(self):
  +        """
  +        Destructor
  +        """
  +        self.cgroups.sort(reverse=True)
  +        for pwd in self.cgroups[:]:
  +            for task in self.get_property("tasks", pwd):
  +                if task:
  +                    self.set_root_cgroup(int(task))
  +            self.rm_cgroup(pwd)
 +
       def initialize(self, modules):
           """
           Initializes object for use.
 @@ -57,6 +69,7 @@ class Cgroup(object):
           except Exception, inst:
               logging.error("cg.mk_cgroup(): %s", inst)
               return None
 +self.cgroups.append(pwd)
  return pwd
  
  
 @@ -70,6 +83,10 @@ class Cgroup(object):
  
           try:
               os.rmdir(pwd)
  +            self.cgroups.remove(pwd)
  +        except ValueError:
  +            logging.warn("cg.rm_cgroup(): Removed cgroup which wasn't "
  +                         "created using this Cgroup")
           except Exception, inst:
               if not supress:
                   logging.error("cg.rm_cgroup(): %s", inst)
 @@ -329,6 +346,22 @@ class CgroupModules(object):
  self.modules.append([])
  self.mountdir = mkdtemp(prefix='cgroup-') + '/'
  
  +    def __del__(self):
  +        """
  +        Unmount all cgroups and remove the mountdir
  +        """
  +        for i in range(len(self.modules[0])):
  +            if self.modules[2][i]:
  +                try:
  +                    os.system('umount %s -l' % self.modules[1][i])
  +                except:
  +                    logging.warn("CGM: Couldn't unmount %s directory"
  +                                 % self.modules[1][i])
  +        try:
  +            os.system('rm -rf %s' % self.mountdir)
  +        except:
  +            logging.warn("CGM: Couldn't remove the %s directory"
  +                         % self.mountdir)
  
  def init(self, _modules):
  
 @@ -376,13 +409,9 @@ class CgroupModules(object):
  
       def cleanup(self):
           """
  -        Unmount all cgroups and remove the mountdir.
  +        Kept for compatibility
           """
  -        for i in range(len(self.modules[0])):
  -            if self.modules[2][i]:
  -                utils.system('umount %s -l' % self.modules[1][i],
  -                             ignore_status=True)
  -        shutil.rmtree(self.mountdir)
  +        pass
  
  
  def get_pwd(self, module):
 --
 1.7.6
 


Re: [PATCH 2/2] adds cgroup tests on KVM guests with first test

2011-09-23 Thread Jiri Zupka
Acked-by: Jiří Župka jzu...@redhat.com

- Original Message -
 basic structure:
  * similar to general client/tests/cgroup/ test (imports from the
cgroup_common.py)
  * uses classes for better handling
  * improved logging and error handling
  * checks/repair the guests after each subtest
  * subtest mapping is specified in test dictionary in cgroup.py
  * allows specifying tests/repetitions in tests_base.cfg
 (cgroup_tests = re1[:loops] re2[:loops] ...)
 
 TestBlkioBandwidthWeight{Read,Write}:
  * Two similar tests for blkio.weight functionality inside the guest, using
    direct io and the virtio_blk driver
  * Function:
  1) On 2 VMs, adds a small (10MB) virtio_blk disk
  2) Assigns each to a different cgroup and sets blkio.weight 100/1000
  3) Runs dd with flag=direct (read/write) from the virtio_blk disk
     repeatedly
  4) After 1 minute checks the results. If the ratio is better than 1:3,
     the test passes
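The pass criterion in step 4 can be sketched as a small helper (hypothetical code, not the actual test; weights 100 vs 1000 would ideally give a 1:10 throughput ratio, and the test accepts anything better than 1:3):

```python
# Sketch of the blkio.weight pass criterion described above.
def blkio_ratio_ok(thr_low, thr_high, min_ratio=3.0):
    """thr_low: throughput of the cgroup with weight 100,
    thr_high: throughput of the cgroup with weight 1000."""
    if thr_low <= 0:
        return False
    return (float(thr_high) / thr_low) >= min_ratio

print(blkio_ratio_ok(10.0, 45.0))  # True  (ratio 4.5 > 3)
print(blkio_ratio_ok(10.0, 20.0))  # False (ratio 2.0 < 3)
```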
 
 Signed-off-by: Lukas Doktor ldok...@redhat.com
 ---
  client/tests/kvm/subtests.cfg.sample |7 +
  client/tests/kvm/tests/cgroup.py |  316
  ++
  2 files changed, 323 insertions(+), 0 deletions(-)
  create mode 100644 client/tests/cgroup/__init__.py
  create mode 100644 client/tests/kvm/tests/cgroup.py
 
 diff --git a/client/tests/cgroup/__init__.py
 b/client/tests/cgroup/__init__.py
 new file mode 100644
 index 000..e69de29
 diff --git a/client/tests/kvm/subtests.cfg.sample
 b/client/tests/kvm/subtests.cfg.sample
 index 74e550b..79e0656 100644
 --- a/client/tests/kvm/subtests.cfg.sample
 +++ b/client/tests/kvm/subtests.cfg.sample
 @@ -848,6 +848,13 @@ variants:
  only Linux
  type = iofuzz
  
  +    - cgroup:
  +        type = cgroup
  +        # cgroup_tests = re1[:loops] re2[:loops] ...
  +        cgroup_tests = ".*:1"
  +        vms += " vm2"
  +        extra_params += " -snapshot"
 +
  - virtio_console: install setup image_copy
  unattended_install.cdrom
  only Linux
  vms = ''
 diff --git a/client/tests/kvm/tests/cgroup.py
 b/client/tests/kvm/tests/cgroup.py
 new file mode 100644
 index 000..4d0ec43
 --- /dev/null
 +++ b/client/tests/kvm/tests/cgroup.py
 @@ -0,0 +1,316 @@
  +"""
  +cgroup autotest test (on KVM guest)
  +@author: Lukas Doktor ldok...@redhat.com
  +@copyright: 2011 Red Hat, Inc.
  +"""
  +import logging, re, sys, tempfile, time, traceback
  +from autotest_lib.client.common_lib import error
  +from autotest_lib.client.bin import utils
  +from autotest_lib.client.tests.cgroup.cgroup_common import Cgroup, CgroupModules
  +
  +def run_cgroup(test, params, env):
  +    """
  +    Tests the cgroup functions on KVM guests.
  +     * Uses variable tests (marked by TODO comment) to map the subtests
  +    """
  +    vms = None
  +    tests = None
  +
  +    # Tests
  +    class _TestBlkioBandwidth:
  +        """
  +        BlkioBandwidth dummy test
  +         * Use it as a base class to an actual test!
  +         * self.dd_cmd and attr '_set_properties' have to be implemented
  +         * It prepares 2 vms and runs self.dd_cmd to simultaneously stress
  +           the machines. After 1 minute it kills the dd and gathers the
  +           throughput information.
  +        """
  +        def __init__(self, vms, modules):
  +            """
  +            Initialization
  +            @param vms: list of vms
  +            @param modules: initialized cgroup module class
  +            """
  +            self.vms = vms          # Virt machines
  +            self.modules = modules  # cgroup module handler
  +            self.blkio = Cgroup('blkio', '')    # cgroup blkio handler
  +            self.files = []     # Temporary files (files of virt disks)
  +            self.devices = []   # Temporary virt devices (PCI drive 1 per vm)
  +            self.dd_cmd = None  # DD command used to test the throughput
  +
  +        def cleanup(self):
  +            """
  +            Cleanup
  +            """
  +            err = ""
  +            try:
  +                for i in range(2):
  +                    vms[i].monitor.cmd("pci_del %s" % self.devices[i])
  +                    self.files[i].close()
  +            except Exception, inst:
  +                err += "\nCan't remove PCI drive: %s" % inst
  +            try:
  +                del(self.blkio)
  +            except Exception, inst:
  +                err += "\nCan't remove Cgroup: %s" % inst
  +
  +            if err:
  +                logging.error("Some parts of cleanup failed:%s", err)
  +                raise error.TestError("Some parts of cleanup failed:%s" % err)
  +
  +        def init(self):
  +            """
  +            Initialization
  +             * assigns vm1 and vm2 into cgroups and sets the properties
  +             * creates a new virtio device and adds it into vms
  +            """
  +            if test.tagged_testname.find('virtio_blk') == -1:
  +                logging.warn("You are executing non-virtio_blk test but this "
  +                             "particular subtest uses manually added "
Re: Add ability client part starts autotest like server part

2011-09-05 Thread Jiri Zupka


- Original Message -
 On 08/26/2011 04:12 AM, Jiří Župka wrote:
  This patch series was created because the client part of autotest
  started to be used like the server part, and there are a lot of tests
  which could be unified into one test (multicast, netperf) if it were
  possible to start already-written tests from the client part of
  autotest on a virtual machine.
 
  The patch series adds to the autotest client part the ability to start
  autotest on a remote system over the network, like the server part of
  autotest does. More info is in the last patch of the series.
 
 Wow, awesome stuff Jiri! Congrats!

Thank you:-)
 
 I know mbligh wanted something on this lines, and I think it makes
 perfect sense. However, we need to do some careful review, unittesting
 and testing of these changes. 

What kind of unittesting do you mean? Adding new unittest modules for the
changes in autotest?

 Also, we need to put documentation in shape.

What kind of documentation do you have in mind?

 
 Also, it is a good opportunity to ask our downstream parties and
 contributors whether they like this unification proposal or not.
 Copying
 some people that might be interested in take a look and express their
 concerns.
 
 My idea is to stage your changes in one of my personal git repo on
 github (branch merge-server-client) and go iterating from there.

Yes, it is a good idea. Could you send me a link to this branch?

 
 Cheers,
 
 Lucas
 
  [AUTOTEST][PATCH 1/3] autotest: Move autotest.py from server part to
  [AUTOTEST][PATCH 2/3] autotest: Move hosts package from server side
  [AUTOTEST][PATCH 3/3] autotest: Client/server part unification.

Regards,
  Jiří Župka


Re: [AUTOTEST][KVM][PATCH] Add test for testing of killing guest when network is under usage.

2011-08-26 Thread Jiri Zupka
- Original Message -
 Hi Jiří,
 
 Do you have any further plans with this test? I'm not convinced that
 netperf only as a stress source is necessary. You could use netcat or a
 simple python udp send/recv (flood attack ;-) ).

With netperf it is easy to simulate many types of network load.
In addition, netperf is used in many places in autotest.

 
 Dne 17.8.2011 16:17, Jiří Župka napsal(a):
  This patch contains two tests.
  1) Try to kill the guest while the guest network is under load.
  2) Try to kill the guest after repeatedly adding and removing the
  network drivers.
 
  Signed-off-by: Jiří Župkajzu...@redhat.com
  ---
client/tests/kvm/tests_base.cfg.sample | 23 +
client/virt/tests/netstress_kill_guest.py | 146
+
2 files changed, 169 insertions(+), 0 deletions(-)
create mode 100644 client/virt/tests/netstress_kill_guest.py
 
  diff --git a/client/tests/kvm/tests_base.cfg.sample
  b/client/tests/kvm/tests_base.cfg.sample
  index ec1b48d..2c88088 100644
  --- a/client/tests/kvm/tests_base.cfg.sample
  +++ b/client/tests/kvm/tests_base.cfg.sample
  @@ -845,6 +845,29 @@ variants:
restart_vm = yes
kill_vm_on_error = yes
 
  + - netstress_kill_guest: install setup unattended_install.cdrom
  + only Linux
  + type = netstress_kill_guest
  + image_snapshot = yes
  + nic_mode = tap
   + # There should be enough vms to build the topology.
  + variants:
  + -driver:
  + mode = driver
  + -load:
  + mode = load
  + netperf_files = netperf-2.4.5.tar.bz2 wait_before_data.patch
  + packet_size = 1500
   + setup_cmd = cd %s && tar xvfj netperf-2.4.5.tar.bz2 && cd netperf-2.4.5 && patch -p0 < ../wait_before_data.patch && ./configure && make
  + clean_cmd =  while killall -9 netserver; do True test; done;
  + netserver_cmd = %s/netperf-2.4.5/src/netserver
  + netperf_cmd = %s/netperf-2.4.5/src/netperf -t %s -H %s -l 60 -- -m
  %s
  + variants:
  + - vhost:
  + netdev_extra_params = vhost=on
 
 
 You might add modprobe vhost-net command as vhost-net might not be
 loaded by default.

Yes, this is true. But the cleanest way is to remove vhost from the default
testing. Everybody who wants vhost can add it to tests_base.cfg.

 
  + - vhost-no:
  + netdev_extra_params = 
  +
- set_link: install setup image_copy unattended_install.cdrom
type = set_link
test_timeout = 1000
  diff --git a/client/virt/tests/netstress_kill_guest.py
  b/client/virt/tests/netstress_kill_guest.py
  new file mode 100644
  index 000..7daec95
  --- /dev/null
  +++ b/client/virt/tests/netstress_kill_guest.py
  @@ -0,0 +1,146 @@
  +import logging, os, signal, re, time
  +from autotest_lib.client.common_lib import error
  +from autotest_lib.client.bin import utils
  +from autotest_lib.client.virt import aexpect, virt_utils
  +
  +
   +def run_netstress_kill_guest(test, params, env):
   +    """
   +    Try to stop a network interface in the VM while another VM tries to
   +    communicate with it.
   +
   +    @param test: kvm test object
   +    @param params: Dictionary with the test parameters
   +    @param env: Dictionary with test environment.
   +    """
   +    def get_corespond_ip(ip):
   +        """
   +        Get the local ip address which is used to contact ip.
   +
   +        @param ip: Remote ip
   +        @return: Local corresponding IP.
   +        """
   +        result = utils.run("ip route get %s" % (ip)).stdout
   +        ip = re.search("src (.+)", result)
   +        if ip is not None:
   +            ip = ip.groups()[0]
   +        return ip
  +
  +
   +    def get_ethernet_driver(session):
   +        """
   +        Get the driver of the network cards.
   +
   +        @param session: session to machine
   +        """
   +        modules = []
   +        out = session.cmd("ls -l /sys/class/net/*/device/driver/module")
   +        for module in out.split("\n"):
   +            modules.append(module.split("/")[-1])
   +        modules.remove("")
   +        return set(modules)
  +
  +
   +    def kill_and_check(vm):
   +        vm_pid = vm.get_pid()
   +        vm.destroy(gracefully=False)
   +        time.sleep(2)
   +        try:
   +            os.kill(vm_pid, 0)
   +            logging.error("VM is not dead.")
   +            raise error.TestFail("Problem with killing guest.")
   +        except OSError:
   +            logging.info("VM is dead.")
  +
  +
  + def netload_kill_problem(session_serial):
 
 I think you should clean up this function. I believe it would be better
 and more readable if you first get all the params/variables, then prepare
 the host/guests, and after all of this start the guest. See the comments
 further...
 
   +        netperf_dir = os.path.join(os.environ['AUTODIR'], "tests/netperf2")
   +        setup_cmd = params.get("setup_cmd")
   +        clean_cmd = params.get("clean_cmd")
   +
   +        firewall_flush = "iptables -F"
   +        session_serial.cmd_output(firewall_flush)
   +        try:
   +            utils.run("iptables -F")
 You have the firewall_flush command string, why not use it here too? Also,
 you should either warn everywhere or not at all... (you log the failure
 when flushing the guest but not here)
 
   +        except:
   +            pass
   +
   +        for i in params.get("netperf_files").split():
   +            vm.copy_files_to(os.path.join(netperf_dir, i), "/tmp")
   +
   +        try:
   +            session_serial.cmd(firewall_flush)
   +        except aexpect.ShellError:
   +            logging.warning("Could not flush firewall 

Re: [PATCH 1/4] [NEW] cgroup test * general smoke_test + module dependend subtests (memory test included) * library for future use in other tests (kvm)

2011-08-18 Thread Jiri Zupka
Hi,
  there is a minor problem with one of the timeouts, commented on below in
the code.
Otherwise, good work.

- Original Message -
 From: root r...@dhcp-26-193.brq.redhat.com
 
 cgroup.py:
 * structure for different cgroup subtests
 * contains basic cgroup-memory test
 
 cgroup_common.py:
 * library for cgroup handling (intended to be used from kvm test in
 the future)
 * universal smoke_test for every module
 
 cgroup_client.py:
 * application which is executed and controled using cgroups
 * contains smoke, memory, cpu and devices tests which were manually
 tested to break cgroup rules and will be used in the cgroup.py
 subtests
 
 Signed-off-by: Lukas Doktor ldok...@redhat.com
 ---
 client/tests/cgroup/cgroup.py | 239 +
 client/tests/cgroup/cgroup_client.py | 116 
 client/tests/cgroup/cgroup_common.py | 327
 ++
 client/tests/cgroup/control | 12 ++
 4 files changed, 694 insertions(+), 0 deletions(-)
 create mode 100755 client/tests/cgroup/cgroup.py
 create mode 100755 client/tests/cgroup/cgroup_client.py
 create mode 100755 client/tests/cgroup/cgroup_common.py
 create mode 100644 client/tests/cgroup/control
 
 diff --git a/client/tests/cgroup/cgroup.py
 b/client/tests/cgroup/cgroup.py
 new file mode 100755
 index 000..112f012
 --- /dev/null
 +++ b/client/tests/cgroup/cgroup.py
 @@ -0,0 +1,239 @@
 +from autotest_lib.client.bin import test
 +from autotest_lib.client.common_lib import error
 +import os, logging
 +import time
 +from cgroup_common import Cgroup as CG
 +from cgroup_common import CgroupModules
 +
 +class cgroup(test.test):
 + """
 + Tests the cgroup functionalities
 + """
 + version = 1
 + _client = ""
 + modules = CgroupModules()
 +
 +
 + def run_once(self):
 + """
 + Try to access different resources which are restricted by cgroup.
 + """
 + logging.info('Start')
 +
 + err = ""
 + # Run available tests
 + for i in ['memory']:
 + try:
 + if self.modules.get_pwd(i):
 + if (eval("self.test_%s()" % i)):
 + err += "%s, " % i
 + else:
 + logging.error("CGROUP: Skipping test_%s, module not "
 + "available/mounted", i)
 + err += "%s, " % i
 + except Exception, inst:
 + logging.error("CGROUP: test_%s fatal failure: %s", i, inst)
 + err += "%s, " % i
 +
 + if err:
 + raise error.TestFail('CGROUP: Some subtests failed (%s)' % err[:-2])
 +
 +
 + def setup(self):
 + """
 + Setup
 + """
 + logging.info('Setup')
 +
 + self._client = os.path.join(self.bindir, "cgroup_client.py")
 +
 + _modules = ['cpuset', 'ns', 'cpu', 'cpuacct', 'memory', 'devices',
 + 'freezer', 'net_cls', 'blkio']
 + if (self.modules.init(_modules) <= 0):
 + raise error.TestFail('Can\'t mount any cgroup modules')
 +
 +
 + def cleanup(self):
 + """
 + Unmount all cgroups and remove directories
 + """
 + logging.info('Cleanup')
 + self.modules.cleanup()
 +
 +
 + #
 + # TESTS
 + #
 + def test_memory(self):
 + """
 + Memory test
 + """
 + # Preparation
 + logging.info("Entering 'test_memory'")
 + item = CG('memory', self._client)
 + if item.initialize(self.modules):
 + logging.error("test_memory: cgroup init failed")
 + return -1
 +
 + if item.smoke_test():
 + logging.error("test_memory: smoke_test failed")
 + return -1
 +
 + pwd = item.mk_cgroup()
 + if pwd == None:
 + logging.error("test_memory: Can't create cgroup")
 + return -1
 +
 + logging.debug("test_memory: Memory filling test")
 +
 + f = open('/proc/meminfo','r')

Not a clean way to do this; it would be better to use a regular expression.
But this is absolutely not important.
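For reference, the reviewer's suggestion could look something like this single-pass regex parse of /proc/meminfo; meminfo_field and free_memory_mb are illustrative names, not existing autotest helpers:

```python
import re


def meminfo_field(name, text):
    """Extract a field (in kB) from /proc/meminfo-style text.

    Returns the integer value, or None when the field is absent.
    """
    match = re.search(r"^%s:\s+(\d+)\s+kB" % re.escape(name), text, re.M)
    if match is None:
        return None
    return int(match.group(1))


def free_memory_mb(path="/proc/meminfo"):
    """Free memory in MB, parsed in one pass instead of a readline loop."""
    with open(path) as meminfo:
        text = meminfo.read()
    value = meminfo_field("MemFree", text)
    return None if value is None else value // 1024
```

The anchored multiline search replaces both readline loops (MemFree and SwapTotal) with one reusable helper.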

 + mem = f.readline()
 + while not mem.startswith("MemFree"):
 + mem = f.readline()

 + # Use only 1G or max of the free memory
 + mem = min(int(mem.split()[1])/1024, 1024)
 + mem = max(mem, 100) # at least 100M
 + if (item.get_property("memory.memsw.limit_in_bytes", supress=True)
 + != None):
 + memsw = True
 + # Clear swap
 + os.system("swapoff -a")
 + os.system("swapon -a")
 + f.seek(0)
 + swap = f.readline()
 + while not swap.startswith("SwapTotal"):
 + swap = f.readline()
 + swap = int(swap.split()[1])/1024
 + if swap < mem / 2:
 + logging.error("Not enough swap memory to test 'memsw'")
 + memsw = False
 + else:
 + # Doesn't support swap+memory limitation, disable swap
 + logging.info("'memsw' not supported")
 + os.system("swapoff -a")
 + memsw = False
 + logging.debug("test_memory: Initialization passed")
 +
 + 
 + # Fill the memory without cgroup limitation
 + # Should pass
 + 
 + logging.debug("test_memory: Memfill WO cgroup")
 + ps = item.test("memfill %d" % mem)
 + ps.stdin.write('\n')
 + i = 0
 + while ps.poll() == None:
 + if i > 60:
 + break
 + i += 1
 + time.sleep(1)
 + if i > 60:
 + logging.error("test_memory: Memory filling failed (WO cgroup)")
 + ps.terminate()
 + return -1
 + if not ps.stdout.readlines()[-1].startswith("PASS"):
 + logging.error("test_memory: Unsuccessful memory filling "
 + "(WO cgroup)")
 + return -1
 + logging.debug("test_memory: Memfill WO cgroup passed")
 +
 + 

Re: [PATCH] Adds cgroup handling library

2011-08-09 Thread Jiri Zupka
ACK, nice work.

Maybe cgroup_common should be more general and placed in client/common_lib,
so it can serve as a general tool for manipulating cgroups.

- Original Message -
 [new] cgroup_common.py
 * library for handling cgroups
 
 Signed-off-by: Lukas Doktor ldok...@redhat.com
 ---
 client/tests/cgroup/cgroup.py | 5 +-
 client/tests/cgroup/cgroup_common.py | 327
 ++
 2 files changed, 331 insertions(+), 1 deletions(-)
 create mode 100755 client/tests/cgroup/cgroup_common.py
 
 diff --git a/client/tests/cgroup/cgroup.py
 b/client/tests/cgroup/cgroup.py
 index d043d65..112f012 100755
 --- a/client/tests/cgroup/cgroup.py
 +++ b/client/tests/cgroup/cgroup.py
 @@ -118,6 +118,7 @@ class cgroup(test.test):
 # Fill the memory without cgroup limitation
 # Should pass
 
 + logging.debug("test_memory: Memfill WO cgroup")
 ps = item.test("memfill %d" % mem)
 ps.stdin.write('\n')
 i = 0
 @@ -141,6 +142,7 @@ class cgroup(test.test):
 # memsw: should swap out part of the process and pass
 # WO memsw: should fail (SIGKILL)
 
 + logging.debug("test_memory: Memfill mem only limit")
 ps = item.test("memfill %d" % mem)
 if item.set_cgroup(ps.pid, pwd):
 logging.error("test_memory: Could not set cgroup")
 @@ -187,6 +189,7 @@ class cgroup(test.test):
 # Fill the memory with 1/2 memory+swap limit
 # Should fail
 
 + logging.debug("test_memory: Memfill mem + swap limit")
 if memsw:
 ps = item.test("memfill %d" % mem)
 if item.set_cgroup(ps.pid, pwd):
 @@ -226,11 +229,11 @@ class cgroup(test.test):
 logging.debug("test_memory: Memfill mem+swap cgroup passed")
 
 # cleanup
 + logging.debug("test_memory: Cleanup")
 if item.rm_cgroup(pwd):
 logging.error("test_memory: Can't remove cgroup directory")
 return -1
 os.system("swapon -a")
 - logging.debug("test_memory: Cleanup passed")
 
 logging.info("Leaving 'test_memory': PASSED")
 return 0
 return 0
 diff --git a/client/tests/cgroup/cgroup_common.py
 b/client/tests/cgroup/cgroup_common.py
 new file mode 100755
 index 000..3fd1cf7
 --- /dev/null
 +++ b/client/tests/cgroup/cgroup_common.py
 @@ -0,0 +1,327 @@
 +#!/usr/bin/python
 +# -*- coding: utf-8 -*-
 +"""
 +Helpers for cgroup testing.
 +
 +@copyright: 2011 Red Hat Inc.
 +@author: Lukas Doktor ldok...@redhat.com
 +"""
 +import os, logging
 +import subprocess
 +from tempfile import mkdtemp
 +import time
 +
 +class Cgroup:
 + """
 + Cgroup handling class
 + """
 + def __init__(self, module, _client):
 + """
 + Constructor
 + @param module: Name of the cgroup module
 + @param _client: Test script pwd+name
 + """
 + self.module = module
 + self._client = _client
 + self.root = None
 +
 +
 + def initialize(self, modules):
 + """
 + Initializes object for use
 + @param modules: array of all available cgroup modules
 + @return: 0 when PASSED
 + """
 + self.root = modules.get_pwd(self.module)
 + if self.root:
 + return 0
 + else:
 + logging.error("cg.initialize(): Module %s not found", self.module)
 + return -1
 + return 0
 +
 +
 + def mk_cgroup(self, root=None):
 + """
 + Creates new temporary cgroup
 + @param root: where to create this cgroup (default: self.root)
 + @return: 0 when PASSED
 + """
 + try:
 + if root:
 + pwd = mkdtemp(prefix='cgroup-', dir=root) + '/'
 + else:
 + pwd = mkdtemp(prefix='cgroup-', dir=self.root) + '/'
 + except Exception, inst:
 + logging.error("cg.mk_cgroup(): %s", inst)
 + return None
 + return pwd
 +
 +
 + def rm_cgroup(self, pwd, supress=False):
 + """
 + Removes cgroup
 + @param pwd: cgroup directory
 + @param supress: supress output
 + @return: 0 when PASSED
 + """
 + try:
 + os.rmdir(pwd)
 + except Exception, inst:
 + if not supress:
 + logging.error("cg.rm_cgroup(): %s", inst)
 + return -1
 + return 0
 +
 +
 + def test(self, cmd):
 + """
 + Executes cgroup_client.py with cmd parameter
 + @param cmd: command to be executed
 + @return: subprocess.Popen() process
 + """
 + logging.debug("cg.test(): executing parallel process '%s'", cmd)
 + process = subprocess.Popen((self._client + ' ' + cmd), shell=True,
 + stdin=subprocess.PIPE, stdout=subprocess.PIPE,
 + stderr=subprocess.PIPE, close_fds=True)
 + return process
 +
 +
 + def is_cgroup(self, pid, pwd):
 + """
 + Checks if the 'pid' process is in 'pwd' cgroup
 + @param pid: pid of the process
 + @param pwd: cgroup directory
 + @return: 0 when is 'pwd' member
 + """
 + if open(pwd+'/tasks').readlines().count("%d\n" % pid) > 0:
 + return 0
 + else:
 + return -1
 +
 + def is_root_cgroup(self, pid):
 + """
 + Checks if the 'pid' process is in root cgroup (WO cgroup)
 + @param pid: pid of the process
 + @return: 0 when is 'root' member
 + """
 + return self.is_cgroup(pid, self.root)
 +
 + def set_cgroup(self, pid, pwd):
 + """
 + Sets cgroup membership
 + @param pid: pid of the process
 + @param pwd: cgroup directory
 + @return: 0 when PASSED
 + """
 + try:
 + open(pwd+'/tasks', 'w').write(str(pid))
 + except Exception, inst:
 + logging.error("cg.set_cgroup(): %s", inst)
 + return 

Re: [PATCH] [NEW] cgroup test * general smoke_test + module dependent subtests (memory test included) * library for future use in other tests (kvm)

2011-08-08 Thread Jiri Zupka
I'll go through this and let you know.

- Original Message -
 From: root r...@dhcp-26-193.brq.redhat.com
 
 cgroup.py:
 * structure for different cgroup subtests
 * contains basic cgroup-memory test
 
 cgroup_common.py:
 * library for cgroup handling (intended to be used from kvm test in
 the future)
 * universal smoke_test for every module
 
 cgroup_client.py:
 * application which is executed and controled using cgroups
 * contains smoke, memory, cpu and devices tests which were manually
 tested to break cgroup rules and will be used in the cgroup.py
 subtests
 
 Signed-off-by: Lukas Doktor ldok...@redhat.com
 ---
 client/tests/cgroup/cgroup.py | 236 ++
 client/tests/cgroup/cgroup_client.py | 116 +
 client/tests/cgroup/control | 12 ++
 3 files changed, 364 insertions(+), 0 deletions(-)
 create mode 100755 client/tests/cgroup/cgroup.py
 create mode 100755 client/tests/cgroup/cgroup_client.py
 create mode 100644 client/tests/cgroup/control
 
 diff --git a/client/tests/cgroup/cgroup.py
 b/client/tests/cgroup/cgroup.py
 new file mode 100755
 index 000..d043d65
 --- /dev/null
 +++ b/client/tests/cgroup/cgroup.py
 @@ -0,0 +1,236 @@
 +from autotest_lib.client.bin import test
 +from autotest_lib.client.common_lib import error
 +import os, logging
 +import time
 +from cgroup_common import Cgroup as CG
 +from cgroup_common import CgroupModules
 +
 +class cgroup(test.test):
 + """
 + Tests the cgroup functionalities
 + """
 + version = 1
 + _client = ""
 + modules = CgroupModules()
 +
 +
 + def run_once(self):
 + """
 + Try to access different resources which are restricted by cgroup.
 + """
 + logging.info('Start')
 +
 + err = ""
 + # Run available tests
 + for i in ['memory']:
 + try:
 + if self.modules.get_pwd(i):
 + if (eval("self.test_%s()" % i)):
 + err += "%s, " % i
 + else:
 + logging.error("CGROUP: Skipping test_%s, module not "
 + "available/mounted", i)
 + err += "%s, " % i
 + except Exception, inst:
 + logging.error("CGROUP: test_%s fatal failure: %s", i, inst)
 + err += "%s, " % i
 +
 + if err:
 + raise error.TestFail('CGROUP: Some subtests failed (%s)' % err[:-2])
 +
 +
 + def setup(self):
 + """
 + Setup
 + """
 + logging.info('Setup')
 +
 + self._client = os.path.join(self.bindir, "cgroup_client.py")
 +
 + _modules = ['cpuset', 'ns', 'cpu', 'cpuacct', 'memory', 'devices',
 + 'freezer', 'net_cls', 'blkio']
 + if (self.modules.init(_modules) <= 0):
 + raise error.TestFail('Can\'t mount any cgroup modules')
 +
 +
 + def cleanup(self):
 + """
 + Unmount all cgroups and remove directories
 + """
 + logging.info('Cleanup')
 + self.modules.cleanup()
 +
 +
 + #
 + # TESTS
 + #
 + def test_memory(self):
 + """
 + Memory test
 + """
 + # Preparation
 + logging.info("Entering 'test_memory'")
 + item = CG('memory', self._client)
 + if item.initialize(self.modules):
 + logging.error("test_memory: cgroup init failed")
 + return -1
 +
 + if item.smoke_test():
 + logging.error("test_memory: smoke_test failed")
 + return -1
 +
 + pwd = item.mk_cgroup()
 + if pwd == None:
 + logging.error("test_memory: Can't create cgroup")
 + return -1
 +
 + logging.debug("test_memory: Memory filling test")
 +
 + f = open('/proc/meminfo','r')
 + mem = f.readline()
 + while not mem.startswith("MemFree"):
 + mem = f.readline()
 + # Use only 1G or max of the free memory
 + mem = min(int(mem.split()[1])/1024, 1024)
 + mem = max(mem, 100) # at least 100M
 + if (item.get_property("memory.memsw.limit_in_bytes", supress=True)
 + != None):
 + memsw = True
 + # Clear swap
 + os.system("swapoff -a")
 + os.system("swapon -a")
 + f.seek(0)
 + swap = f.readline()
 + while not swap.startswith("SwapTotal"):
 + swap = f.readline()
 + swap = int(swap.split()[1])/1024
 + if swap < mem / 2:
 + logging.error("Not enough swap memory to test 'memsw'")
 + memsw = False
 + else:
 + # Doesn't support swap+memory limitation, disable swap
 + logging.info("'memsw' not supported")
 + os.system("swapoff -a")
 + memsw = False
 + logging.debug("test_memory: Initialization passed")
 +
 + 
 + # Fill the memory without cgroup limitation
 + # Should pass
 + 
 + ps = item.test("memfill %d" % mem)
 + ps.stdin.write('\n')
 + i = 0
 + while ps.poll() == None:
 + if i > 60:
 + break
 + i += 1
 + time.sleep(1)
 + if i > 60:
 + logging.error("test_memory: Memory filling failed (WO cgroup)")
 + ps.terminate()
 + return -1
 + if not ps.stdout.readlines()[-1].startswith("PASS"):
 + logging.error("test_memory: Unsuccessful memory filling "
 + "(WO cgroup)")
 + return -1
 + logging.debug("test_memory: Memfill WO cgroup passed")
 +
 + 
 + # Fill the memory with 1/2 memory limit
 + # memsw: should swap out part of the process and pass
 + # WO memsw: should fail (SIGKILL)
 + 
 + ps = item.test("memfill %d" % mem)
 + if item.set_cgroup(ps.pid, pwd):
 + logging.error("test_memory: 

Re: [Autotest] [AUTOTEST][KVM] [PATCH 2/2] Add ability to call autotest client tests from kvm tests like a subtest.

2011-05-04 Thread Jiri Zupka
- Original Message -
 Hi Jiri, after reviewing the code I have comments, similar to
 Cleber's:
 
 On Fri, Apr 29, 2011 at 10:59 AM, Jiří Župka jzu...@redhat.com
 wrote:
  Example run autotest/client/netperf2 like a server.
 
 ... snip
 
  diff --git a/client/tests/kvm/tests/subtest.py
  b/client/tests/kvm/tests/subtest.py
  new file mode 100644
  index 000..3b546dc
  --- /dev/null
  +++ b/client/tests/kvm/tests/subtest.py
  @@ -0,0 +1,43 @@
  +import os, logging
  +from autotest_lib.client.virt import virt_utils, virt_test_utils,
  kvm_monitor
  +from autotest_lib.client.bin import job
  +from autotest_lib.client.bin.net import net_utils
  +
  +
  +def run_subtest(test, params, env):
  + """
  + Run an autotest test inside a guest and a subtest on the host side.
  + This test should be a substitution for the netperf test in kvm.
  +
  + @param test: kvm test object.
  + @param params: Dictionary with test parameters.
  + @param env: Dictionary with the test environment.
  + """
  + vm = env.get_vm(params["main_vm"])
  + vm.verify_alive()
  + timeout = int(params.get("login_timeout", 360))
  + session = vm.wait_for_login(timeout=timeout)
  +
  + # Collect test parameters
  + timeout = int(params.get("test_timeout", 300))
  + control_path = os.path.join(test.bindir, "autotest_control",
  + params.get("test_control_file"))
  + control_args = params.get("test_control_args")
  + outputdir = test.outputdir
  +
  + guest_ip = vm.get_address()
  + host_ip = net_utils.network().get_corespond_local_ip(guest_ip)
  + if not host_ip is None:
  + control_args = host_ip + " " + guest_ip
  +
  + guest = virt_utils.Thread(virt_test_utils.run_autotest,
  + (vm, session, control_path, control_args,
  + timeout, outputdir, params))
  + guest.start()
  +
  + test.runsubtest("netperf2", tag="server", server_ip=host_ip,
  + client_ip=guest_ip, role='server')
 
 ^ This really should be made generic, since as Cleber mentioned,
 calling this test run_subtest wouldn't cut for cases where we run
 something other than netperf2. So things that started coming to my
 mind:

^ Yes, you are right. I wanted to show how to use and configure parameters 
in a control file. This shouldn't be a test; it should only be a sample 
of the technology. But I implemented it wrongly in tests_base.conf. I'll think 
about tests_base.conf and do this implementation in a better way. 

I'll fix the subtest and send the patch again.

 
 * We could extend the utility function to run autotest tests on a
 guest in a way that it can accept a string with the control file
 contents, rather than just an existing control file. This way we'd be
 more free to run arbitrary control code in guests, while of course
 keeping the ability to use existing control files;
 * We could actually create an Autotest() class abstraction, very much
 like what we have in server control files, such as
 
 auto_vm1 = virt_utils.Autotest(vm1) # This would install autotest in a
 VM and wait for further commands
 
 control = job.run_test('sleeptest')

^ This should be a standard test in 
client/tests/, 
not a file from 
client/tests/kvm/autotest_control.

 
 auto_vm1.run_control(control) # This would run sleeptest and bring
 back the results to the host

 
 It's a matter to see how this is modeled for server side control
 files... I believe this could be cleaner and help us a lot...

And yes I agree with this. This sounds good. 
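A rough sketch of what such an Autotest() abstraction could look like. Everything here is hypothetical: the class name, the write_file()/session.cmd() methods on the vm object, and the install path are placeholders, not the real virt_utils API:

```python
class GuestAutotest(object):
    """Hypothetical sketch of the proposed Autotest() abstraction.

    The vm object is assumed to expose write_file() and a session
    with cmd(); both names are illustrative only.
    """

    def __init__(self, vm, install_dir="/usr/local/autotest"):
        # In the real abstraction the constructor would also install
        # autotest inside the VM and wait for it to become ready.
        self.vm = vm
        self.install_dir = install_dir

    def run_control(self, control_text):
        """Push a control file's contents into the guest and run it.

        Accepting the control text as a string (rather than a path to
        an existing file) allows arbitrary control code to be run.
        """
        control_path = "%s/control" % self.install_dir
        self.vm.write_file(control_path, control_text)
        return self.vm.session.cmd("%s/bin/autotest %s"
                                   % (self.install_dir, control_path))
```

Usage would then mirror server-side control files: `GuestAutotest(vm1).run_control("job.run_test('sleeptest')")`.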

 
 In other comments, please use the idiom:
 
 if foo is not None:
 
 Across all places where we compare a variable with None, because it's
 easier to understand the intent right away and it's on the
 CODING_STYLE document.

^^ I'll try this.
 
 --
 Lucas
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] KVM test: Cleanup of virtio_console subtest

2011-02-09 Thread Jiri Zupka
- Original Message -
 This is a cleanup patch for virtio_console:
 
 1) Use the safer "is None" instead of "== None" or similar
 comparisons
 2) Remove some unused imports
 3) Remove some unneeded parentheses
 4) Correct typos

Thank you for your corrections.

 
 Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
 ---
 client/tests/kvm/scripts/virtio_console_guest.py | 65 ++--
 client/tests/kvm/tests/virtio_console.py | 118 +++---
 2 files changed, 91 insertions(+), 92 deletions(-)
 
 diff --git a/client/tests/kvm/scripts/virtio_console_guest.py
 b/client/tests/kvm/scripts/virtio_console_guest.py
 index db67fc9..6626593 100755
 --- a/client/tests/kvm/scripts/virtio_console_guest.py
 +++ b/client/tests/kvm/scripts/virtio_console_guest.py
 @@ -9,8 +9,8 @@ Auxiliary script used to send data between ports on
 guests.
 
 import threading
 from threading import Thread
 -import os, time, select, re, random, sys, array
 -import fcntl, subprocess, traceback, signal
 +import os, select, re, random, sys, array
 +import fcntl, traceback, signal
 
 DEBUGPATH = "/sys/kernel/debug"
 SYSFSPATH = "/sys/class/virtio-ports/"
 @@ -63,7 +63,7 @@ class VirtioGuest:
 
 ports = {}
 not_present_msg = "FAIL: There's no virtio-ports dir in debugfs"
 - if (not os.path.ismount(DEBUGPATH)):
 + if not os.path.ismount(DEBUGPATH):
 os.system('mount -t debugfs none %s' % (DEBUGPATH))
 try:
 if not os.path.isdir('%s/virtio-ports' % (DEBUGPATH)):
 @@ -72,21 +72,20 @@ class VirtioGuest:
 print not_present_msg
 else:
 viop_names = os.listdir('%s/virtio-ports' % (DEBUGPATH))
 - if (in_files != None):
 + if in_files is not None:
 dev_names = os.listdir('/dev')
 rep = re.compile(r"vport[0-9]p[0-9]+")
 - dev_names = filter(lambda x: rep.match(x) != None, dev_names)
 + dev_names = filter(lambda x: rep.match(x) is not None, dev_names)
 if len(dev_names) != len(in_files):
 - print ("FAIL: Not all ports are sucesfully inicailized" +
 - " in /dev " +
 - "only %d from %d." % (len(dev_names),
 - len(in_files)))
 + print ("FAIL: Not all ports were successfully initialized "
 + "in /dev, only %d from %d." % (len(dev_names),
 + len(in_files)))
 return
 
 if len(viop_names) != len(in_files):
 - print ("FAIL: No all ports are sucesfully inicailized "
 - "in debugfs only %d from %d." % (len(viop_names),
 - len(in_files)))
 + print ("FAIL: Not all ports were successfully initialized "
 + "in debugfs, only %d from %d." % (len(viop_names),
 + len(in_files)))
 return
 
 for name in viop_names:
 @@ -101,35 +100,36 @@ class VirtioGuest:
 m = re.match("(\S+): (\S+)", line)
 port[m.group(1)] = m.group(2)
 
 - if (port['is_console'] == "yes"):
 + if port['is_console'] == "yes":
 port["path"] = "/dev/hvc%s" % (port["console_vtermno"])
 # Console works like a serialport
 else:
 port["path"] = "/dev/%s" % name
 
 - if (not os.path.exists(port['path'])):
 + if not os.path.exists(port['path']):
 print "FAIL: %s not exist" % port['path']
 
 sysfspath = SYSFSPATH + name
 - if (not os.path.isdir(sysfspath)):
 + if not os.path.isdir(sysfspath):
 print "FAIL: %s not exist" % (sysfspath)
 
 info_name = sysfspath + "/name"
 port_name = self._readfile(info_name).strip()
 - if (port_name != port["name"]):
 - print ("FAIL: Port info not match \n%s - %s\n%s - %s" %
 + if port_name != port["name"]:
 + print ("FAIL: Port info does not match "
 + "\n%s - %s\n%s - %s" %
 (info_name, port_name,
 "%s/virtio-ports/%s" % (DEBUGPATH, name),
 port["name"]))
 dev_ppath = DEVPATH + port_name
 - if not (os.path.exists(dev_ppath)):
 - print ("FAIL: Symlink " + dev_ppath + " not exist.")
 - if not (os.path.realpath(dev_ppath) != "/dev/name"):
 - print ("FAIL: Sumlink " + dev_ppath + " not correct.")
 + if not os.path.exists(dev_ppath):
 + print "FAIL: Symlink %s does not exist." % dev_ppath
 + if not os.path.realpath(dev_ppath) != "/dev/name":
 + print "FAIL: Symlink %s is not correct." % dev_ppath
 except AttributeError:
 - print ("In file " + open_db_file +
 - " are incorrect data\n" + "".join(file).strip())
 - print ("FAIL: Fail file data.")
 + print ("Bad data on file %s:\n%s. " %
 + (open_db_file, "".join(file).strip()))
 + print "FAIL: Bad data on file %s." % open_db_file
 return
 
 ports[port['name']] = port
 @@ -142,10 +142,11 @@ class VirtioGuest:
 
 Check if port /dev/vport0p0 was created.
 
 - if os.path.exists("/dev/vport0p0"):
 - print "PASS: Port exist."
 + symlink = "/dev/vport0p0"
 + if os.path.exists(symlink):
 + print "PASS: Symlink %s exists." % symlink
 else:
 - print "FAIL: Device /dev/vport0p0 not exist."
 + print "FAIL: Symlink %s does not exist." % symlink
 
 
 def init(self, in_files):
 @@ -154,7 +155,7 @@ class VirtioGuest:
 
 self.ports = self._get_port_status(in_files)
 
 - if self.ports == None:
 + if self.ports is None:
 return
 for item in in_files:
 if (item[1] != self.ports[item[0]]["is_console"]):
 @@ -507,11 +508,11 @@ class VirtioGuest:
 
 descriptor = None
 path = self.ports[file]["path"]
 - if path != None:
 + if path is not None:
 if path in self.files.keys():
 descriptor = self.files[path]
 del self.files[path]
 - if descriptor != 

Re: [PATCH 1/1] virtio_console: perf-test fix [FIX] read-out all data after perf-test [FIX] code clean-up

2010-09-29 Thread Jiri Zupka
- Amit Shah amit.s...@redhat.com wrote:

 On (Thu) Sep 23 2010 [14:11:52], Lukas Doktor wrote:
  @@ -829,6 +832,11 @@ def run_virtio_console(test, params, env):
   exit_event.set()
   thread.join()
   
  +# Let the guest read-out all the remaining data
  +while not _on_guest(virt.poll('%s', %s)
  +% (port[1], select.POLLIN), vm,
 2)[0]:
  +time.sleep(1)
 
 This is just polling the guest, not reading out any data?

The reading thread is on the guest side. We are only checking when to stop
the guest-side reading thread.
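Regarding the infinite-loop concern raised below: one defensive pattern is to poll for POLLHUP alongside POLLIN, so a hung-up descriptor with no remaining data terminates the loop. A minimal sketch (drain_port is an illustrative name; the real test polls on the guest side through _on_guest):

```python
import os
import select


def drain_port(fd, timeout_ms=1000):
    """Read until the peer hangs up or no data arrives within the timeout.

    Checking POLLHUP explicitly avoids spinning forever when the host
    side disconnects: a bare POLLIN test can keep firing on a hung-up
    descriptor even though there is nothing left to read.
    """
    poller = select.poll()
    poller.register(fd, select.POLLIN | select.POLLHUP)
    chunks = []
    while True:
        events = poller.poll(timeout_ms)
        if not events:           # timeout: no more data arriving
            break
        _, mask = events[0]
        if mask & select.POLLIN:
            data = os.read(fd, 4096)
            if not data:         # EOF
                break
            chunks.append(data)
        elif mask & select.POLLHUP:
            break                # peer gone and buffer already drained
    return b"".join(chunks)
```

Bounding each poll with a timeout and breaking on both EOF and POLLHUP keeps the loop finite in all three exit cases.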

 
 (BTW POLLIN will be set if host is disconnected and there's no data
 to
 read, so ensure you don't enter an infinite loop here.)
 
   Amit


[PATCH] KVM test: KSM (kernel shared memory) overcommit test

2010-02-10 Thread Jiri Zupka
FIX:
  This patch is based on the previous version, "[PATCH] KVM test: KSM (kernel shared 
memory) test", developed for overcommit with Lucas Meneghel Rodrigues.
  The only fix is a change of behavior with large overcommit: Python has a 
problem allocating memory even when there is still enough space in swap (300MB etc.). 
  The change is on line 150 in the file ksm_overcommit.py.
diff --git a/client/tests/kvm/kvm_test_utils.py b/client/tests/kvm/kvm_test_utils.py
index 02ec0cf..7d96d6e 100644
--- a/client/tests/kvm/kvm_test_utils.py
+++ b/client/tests/kvm/kvm_test_utils.py
@@ -22,7 +22,8 @@ More specifically:
 
 
 import time, os, logging, re, commands
-from autotest_lib.client.common_lib import utils, error
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
 import kvm_utils, kvm_vm, kvm_subprocess
 
 
@@ -203,3 +204,36 @@ def get_time(session, time_command, time_filter_re, time_format):
 s = re.findall(time_filter_re, s)[0]
 guest_time = time.mktime(time.strptime(s, time_format))
 return (host_time, guest_time)
+
+
+def get_memory_info(lvms):
+"""
+Get memory information from host and guests in format:
+Host: memfree = XXXM; Guests memsh = {XXX,XXX,...}
+
+@params lvms: List of VM objects
+@return: String with memory info report
+"""
+if not isinstance(lvms, list):
+raise error.TestError("Invalid list passed to get_stat: %s" % lvms)
+
+try:
+meminfo = "Host: memfree = "
+meminfo += str(int(utils.freememtotal()) / 1024) + "M; "
+meminfo += "swapfree = "
+mf = int(utils.read_from_meminfo("SwapFree")) / 1024
+meminfo += str(mf) + "M; "
+except Exception, e:
+raise error.TestFail("Could not fetch host free memory info, "
+ "reason: %s" % e)
+
+meminfo += "Guests memsh = {"
+for vm in lvms:
+shm = vm.get_shared_meminfo()
+if shm is None:
+raise error.TestError("Could not get shared meminfo from "
+  "VM %s" % vm)
+meminfo += "%dM; " % shm
+meminfo = meminfo[0:-2] + "}"
+
+return meminfo
diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index df26a77..c9cd2e1 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -713,6 +713,22 @@ def generate_random_string(length):
 return str
 
 
+def generate_tmp_file_name(file, ext=None, dir='/tmp/'):
+"""
+Returns a temporary file name. The file is not created.
+"""
+while True:
+file_name = (file + '-' + time.strftime("%Y%m%d-%H%M%S-") +
+ generate_random_string(4))
+if ext:
+file_name += '.' + ext
+file_name = os.path.join(dir, file_name)
+if not os.path.exists(file_name):
+break
+
+return file_name
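As an aside, the helper above only generates a name, so another process can still create the same file between the existence check and the first use. If the caller can tolerate the file existing up front, the stdlib's mkstemp creates it atomically; a sketch under that assumption (make_tmp_file is an illustrative name, not part of kvm_utils):

```python
import os
import tempfile
import time


def make_tmp_file(prefix, ext=None, dir="/tmp"):
    """Race-free variant: the file is created atomically by mkstemp.

    Unlike a name-only helper, the returned path already exists, so no
    other process can grab the same name in between check and use.
    """
    suffix = "." + ext if ext else ""
    stamp = time.strftime("-%Y%m%d-%H%M%S")
    fd, path = tempfile.mkstemp(prefix=prefix + stamp + "-",
                                suffix=suffix, dir=dir)
    os.close(fd)  # caller reopens the path however it likes
    return path
```

The timestamp is kept only for readability; uniqueness comes from mkstemp itself.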
+
+
 def format_str_for_message(str):
 
 Format str so that it can be appended to a message.
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 6731927..5790dff 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -760,6 +760,23 @@ class VM:
 return self.process.get_pid()
 
 
+def get_shared_meminfo(self):
+"""
+Returns the VM's shared memory information.
+
+@return: Shared memory used by VM (MB)
+"""
+if self.is_dead():
+logging.error("Could not get shared memory info from dead VM.")
+return None
+
+cmd = "cat /proc/%d/statm" % self.params.get('pid_' + self.name)
+shm = int(os.popen(cmd).readline().split()[2])
+# statm stores information in pages, translate it to MB
+shm = shm * 4 / 1024
+return shm
+
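For context, /proc/&lt;pid&gt;/statm reports sizes in pages and the third field is the resident shared size; the patch above hardcodes 4 KiB pages. A sketch that queries the real page size instead (shared_mem_mb is an illustrative name, not part of kvm_vm.py):

```python
import os
import resource


def shared_mem_mb(pid):
    """Shared-memory size of a process in MB, read from /proc/<pid>/statm.

    statm fields are in pages; index 2 is 'shared' (resident shared
    pages). Using the real page size avoids assuming 4 KiB pages.
    """
    with open("/proc/%d/statm" % pid) as statm:
        shared_pages = int(statm.read().split()[2])
    page_size = resource.getpagesize()
    return shared_pages * page_size // (1024 * 1024)
```

On most x86 hosts getpagesize() is 4096, matching the patch's constant, but architectures with 64 KiB pages would silently under-report with the hardcoded value.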
+
 def remote_login(self, nic_index=0, timeout=10):
 
 Log into the guest via SSH/Telnet/Netcat.
diff --git a/client/tests/kvm/scripts/allocator.py b/client/tests/kvm/scripts/allocator.py
new file mode 100644
index 000..e0b8c75
--- /dev/null
+++ b/client/tests/kvm/scripts/allocator.py
@@ -0,0 +1,230 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+"""
+Auxiliary script used to allocate memory on guests.
+
+@copyright: 2008-2009 Red Hat Inc.
+@author: Jiri Zupka (jzu...@redhat.com)
+"""
+
+
+import os, array, sys, struct, random, copy, inspect, tempfile, datetime
+
+PAGE_SIZE = 4096 # machine page size
+
+
+class MemFill(object):
+"""
+Fills guest memory according to certain patterns.
+"""
+def __init__(self, mem, static_value, random_key):
+"""
+Constructor of MemFill class.
+
+@param mem: Amount of test memory in MB.
+@param random_key: Seed of random series used for fill up memory.
+@param static_value: Value used to fill all memory.
+"""
+if (static_value < 0 or static_value > 255):
+print ("FAIL: Initialization static value"
+   " can be only in range (0..255)")
+return
+
+self.tmpdp = tempfile.mkdtemp()
+ret_code = os.system("mount -o size

[KVM-autotest][RFC] 32/32 PAE bit guest system definition

2009-12-11 Thread Jiri Zupka
Hello,
  we are writing the KSM_overcommit test. When we calculate memory for a guest we need to know
the guest's architecture: whether it is a 32-bit, 32-bit with PAE, or 64-bit system,
because with a 32-bit guest we can allocate only about 3100M.

Currently we use the name of the disk's image file, which ends with 64 or 
32.
Is there a way to detect whether the guest machine runs with PAE etc.?
Do you think kvm_autotest could define a parameter in kvm_tests.cfg which 
would determine whether the guest is 32-bit, 32-bit with PAE, or 64-bit?
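One possible in-guest heuristic: read the machine type from uname and the 'pae' CPU flag from /proc/cpuinfo. The flag only proves PAE capability, not that a PAE kernel is actually running, so treat this as a rough sketch rather than a definitive detector (guest_arch_mode is an illustrative name):

```python
import platform


def guest_arch_mode(cpuinfo_text, machine=None):
    """Classify a Linux guest as '64', '32pae', or '32'.

    cpuinfo_text is the contents of /proc/cpuinfo; machine defaults to
    platform.machine(). The 'pae' flag check is a heuristic: it shows
    PAE capability, which on distribution PAE kernels usually matches
    the running mode.
    """
    machine = machine or platform.machine()
    if machine in ("x86_64", "amd64"):
        return "64"
    # Parse the CPU flag list from the first 'flags' line.
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break
    return "32pae" if "pae" in flags else "32"
```

This would let the config parameter be computed inside the guest instead of being encoded in the image file name.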

Thank Župka 


Re: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

2009-11-17 Thread Jiri Zupka
Hi,
  We find a little mistake with ending of allocator.py. 
Because I send this patch today. I resend whole repaired patch again. 


- Original Message -
From: Jiri Zupka jzu...@redhat.com
To: autotest autot...@test.kernel.org, kvm kvm@vger.kernel.org
Cc: u...@redhat.com
Sent: Tuesday, November 17, 2009 12:52:28 AM GMT +01:00 Amsterdam / Berlin / 
Bern / Rome / Stockholm / Vienna
Subject: [Autotest] [KVM-AUTOTEST] KSM-overcommit test v.2 (python version)

Hi,  
  based on your requirements we have created a new version 
of the KSM-overcommit patch (submitted in September). 

Description:
  It tests KSM (kernel shared memory) with overcommit of memory.

Changelog:
  1) Based only on Python (removed C code)
  2) Added a new test (check last 96B)
  3) Separated the test into (serial, parallel, both)
  4) Improved logs and documentation 
  5) Added a perf constant to change the time limit for waiting (slow computer problem)

Functionality:
  The KSM test starts guests and connects to them over ssh,
  then copies allocator.py to the guests and runs it.
  The host can run any Python command through the allocator.py loop on the client side. 

  Start run_ksm_overcommit.
  Define host and guest reserve variables (host_reserver, guest_reserver).
  Calculate the number of virtual machines and their memory based on the
  host_mem and overcommit variables. 
  Check KSM status.
  Create and start the virtual guests.
  Test:
   a] serial
1) initialize, merge all mem to a single page
2) separate the first guest's mem
3) separate the rest of the guests up to fill all mem
4) kill all guests except for the last
5) check if the mem of the last guest is ok
6) kill the guest
   b] parallel 
1) initialize, merge all mem to a single page
2) separate the mem of the guest
3) verification of the guest's mem
4) merge mem to one block
5) verification of the guests' mem
6) separate the mem of the guests by 96B
7) check if the mem is all right 
8) kill the guest
  allocator.py (client-side script):
after start it waits for commands, which it executes on the client side.
The mem_fill class implements commands to fill and check memory and
return errors to the host.

We need a client-side script because we need to generate many GB of special 
data. 

Future plans:
  We want to add information to the log about the time spent in each task, 
  use that information to automatically compute the perf constant, 
  and add new tests.
___
Autotest mailing list
autot...@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
index ac9ef66..90f62bb 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -118,6 +118,23 @@ variants:
 test_name = npb
 test_control_file = npb.control
 
+- ksm_overcommit:
+# Don't preprocess any vms as we need to change their params
+vms = ''
+image_snapshot = yes
+kill_vm_gracefully = no
+type = ksm_overcommit
+ksm_swap = yes   # yes | no
+no hugepages
+# Overcommit ratio of host memory
+ksm_overcommit_ratio = 3
+# Maximum number of machines run in parallel
+ksm_paralel_ratio = 4
+variants:
+- serial
+ksm_test_size = serial
+- paralel
+ksm_test_size = paralel
 
 - linux_s3: install setup unattended_install
 type = linux_s3
diff --git a/client/tests/kvm/tests/ksm_overcommit.py b/client/tests/kvm/tests/ksm_overcommit.py
new file mode 100644
index 000..408e711
--- /dev/null
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -0,0 +1,605 @@
+import logging, time
+from autotest_lib.client.common_lib import error
+import kvm_subprocess, kvm_test_utils, kvm_utils
+import kvm_preprocessing
+import random, string, math, os
+
+def run_ksm_overcommit(test, params, env):
+    """
+    Test how KSM (Kernel Shared Memory) acts when more than the physical
+    memory is used. The second part also tests how KVM handles the situation
+    when the host runs out of memory (the guest is expected to pause until
+    some process returns memory, and then be brought back to life).
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+
+    def parse_meminfo(rowName):
+        """
+        Get a row of data from /proc/meminfo.
+
+        @param rowName: Name of the line in meminfo
+        """
+        for line in open('/proc/meminfo').readlines():
+            if line.startswith(rowName + ":"):
+                name, amt, unit = line.split()
+                return name, amt, unit
+
+    def parse_meminfo_value(rowName):
+        """
+        Convert a meminfo row to its numeric value.
+
+        @param rowName: Name of the line in meminfo
+        """
+        name, amt, unit = parse_meminfo(rowName)
+        return amt

+                raise error.TestFail("Could not fill memory by random on "
+                                     "guest %s" % vm.name)
+            logging.info(get_stat([vm]))
+
+        logging.info("Phase 4e: Simultaneous verification")
+        for i in range(0, max_alloc):
+            lsessions[i].sendline("mem.value_fill(%d)" % skeys[0])
+        for i in range(0, max_alloc):
+            (match, data) = lsessions[i].read_until_last_line_matches(
+                ["PASS:", "FAIL:"], (mem / 200 * 50 * perf_ratio))
+            if match == 1:
+                raise error.TestError("Memory error dump: %s" % data)
+
+        logging.info("Phase 4f: Simultaneous splitting of the last 96B")
+
+        # Actual splitting
+        for i in range(0, max_alloc):
+            lsessions[i].sendline("mem.static_random_fill(96)")
+
+        for i in range(0, max_alloc):
+            (match, data) = lsessions[i].read_until_last_line_matches(
+                ["PASS:", "FAIL:"], (60 * perf_ratio))
+            if match == 1:
+                raise error.TestFail("Could not fill memory by zero on "
+                                     "guest %s" % vm.name)
+
+            if match is None:
+                raise error.TestFail("Generating random series timeout on "
+                                     "guest %s" % vm.name)
+
+            data = data.splitlines()[-1]
+            out = int(data.split()[4])
+            logging.info("PERFORMANCE: %dMB * 1000 / %dms = %dMB/s"
+                         % (ksm_size / max_alloc, out,
+                            (ksm_size * 1000 / out / max_alloc)))
+        logging.info(get_stat([vm]))
+
+        logging.info("Phase 4g: Simultaneous verification of the last 96B")
+        for i in range(0, max_alloc):
+            lsessions[i].sendline("mem.static_random_verify(96)")
+        for i in range(0, max_alloc):
+            (match, data) = lsessions[i].read_until_last_line_matches(
+                ["PASS:", "FAIL:"], (mem / 200 * 50 * perf_ratio))
+            if match == 1:
+                raise error.TestError("Memory error dump: %s" % data)
+
+        logging.info(get_stat([vm]))
+
+        logging.info("Phase 4 = passed")
+        # Clean-up
+        for i in range(0, max_alloc):
+            lsessions[i].get_command_status_output("exit()", 20)
+        session.close()
+        vm.destroy(gracefully=False)
+
+    if params['ksm_test_size'] == "paralel":
+        phase_paralel()
+    elif params['ksm_test_size'] == "serial":
+        phase_inicialize_guests()
+        phase_separate_first_guest()
+        phase_split_guest()
+
diff --git a/client/tests/kvm/unattended/allocator.py b/client/tests/kvm/unattended/allocator.py
new file mode 100644
index 000..3cad9e6
--- /dev/null
+++ b/client/tests/kvm/unattended/allocator.py
@@ -0,0 +1,213 @@
+import os
+import array
+import sys
+import struct
+import random
+import copy
+import inspect
+import tempfile
+from datetime import datetime
+from datetime import timedelta
+
+"""
+KVM test definitions.
+
+@copyright: 2008-2009 Red Hat Inc.
+Jiri Zupka jzu...@redhat.com
+"""
+
+PAGE_SIZE = 4096  # machine page size
+
+class mem_fill:
+    """
+    Guest-side script to test the KSM driver.
+    """
+
+    def __init__(self, mem, static_value, random_key):
+        """
+        Constructor of the mem_fill class.
+
+        @param mem: Amount of test memory in MB.
+        @param random_key: Seed of the random series used for filling.
+        @param static_value: Value which fills the whole memory.
+        """
+        if static_value < 0 or static_value > 255:
+            print "FAIL: Initialization static value" + \
+                  " can be only in range (0..255)"
+            return
+
+        self.tmpdp = tempfile.mkdtemp()
+        if not os.system("mount -osize=%dM tmpfs %s -t tmpfs"
+                         % (mem + 50, self.tmpdp)) == 0:
+            print "FAIL: Only root can do that"
+        else:
+            self.f = tempfile.TemporaryFile(prefix='mem', dir=self.tmpdp)
+            self.allocate_by = 'L'
+            self.npages = (mem * 1024 * 1024) / PAGE_SIZE
+            self.random_key = random_key
+            self.static_value = static_value
+            print "PASS: Initialization"
+
+    def __del__(self):
+        if os.path.ismount(self.tmpdp):
+            self.f.close()
+            os.system("umount %s" % self.tmpdp)
+
+    def compare_page(self, original, inmem):
+        """
+        Compare a page of memory.
+
+        @param original: Data which we expect in memory.
+        @param inmem: Data actually in memory.
+        """
+        for ip in range(PAGE_SIZE / original.itemsize):
+            if not original[ip] == inmem[ip]:  # find the wrong item
+                originalp = array.array("B")
+                inmemp = array.array("B")
+                originalp.fromstring(original[ip:ip + 1].tostring())
+                inmemp.fromstring(inmem[ip:ip + 1].tostring())
+                for ib in range(len(originalp)):  # find the wrong byte in the item

Re: [KVM-AUTOTEST PATCH 1/2] Add KSM test

2009-09-25 Thread Jiri Zupka

- Dor Laor dl...@redhat.com wrote:

 On 09/16/2009 04:09 PM, Jiri Zupka wrote:
 
  - Dor Laordl...@redhat.com  wrote:
 
  On 09/15/2009 09:58 PM, Jiri Zupka wrote:
  After a quick review I have the following questions:
  1. Why did you implement the guest tool in 'c' and not in
 python?
  Python is much simpler and you can share some code with the
  server.
  This 'test protocol' would also be easier to understand this
  way.
 
  We need speed and the precise control of allocate memory in
 pages.
 
  2. IMHO there is no need to use select, you can do blocking
 read.
 
  We replace socket communication by interactive program
 communication
  via ssh/telnet
 
  3. Also you can use plain malloc without the more complex ( a
 bit)
  mmap.
 
  We need address exactly the memory pages. We can't allow shift of
  the data in memory.
 
  You can use the tmpfs+dd idea instead of the specific program as I
  detailed before. Maybe some other binary can be used. My intention
 is
  to
  simplify the test/environment as much as possible.
 
 
  We need compatibility with others system, like Windows etc..
  We want to add support for others system in next version
 
 KSM is a host feature and should be agnostic to the guest.
 Also I don't think your code will compile on windows...

Yes, I think you are right.

But because we need to generate special data for the pages in memory,
we need a script on the guest side of the test: communication
over ssh is too slow to transfer many GB of special data to the guests.

We could use an optimized C program, which is 10x or more faster than
a Python script on a native system; a heavy load on the virtual guest
can cause performance problems.

We can use tmpfs, but with a Python script to generate the special data.
We can't use dd with random data, because we need to test some special
cases (changing only the last 96B of a page, etc.).
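The last-96B case can be illustrated in a few lines of Python: pages that are identical except for their final 96 bytes cannot be merged by KSM, even though 4000 of their 4096 bytes match. This is only a sketch of the data pattern; `make_pages` is a made-up helper, not the guest allocator.

```python
import random

PAGE_SIZE = 4096
TAIL = 96  # only the last 96 bytes of each page differ

def make_pages(npages, static_value=0, seed=0):
    """Pages filled with one static byte, except for a random tail."""
    rng = random.Random(seed)
    pages = []
    for _ in range(npages):
        # Page of identical bytes: mergeable by KSM on its own.
        page = bytearray([static_value]) * PAGE_SIZE
        # Randomize the tail so no two pages are byte-identical.
        page[-TAIL:] = bytes(rng.randrange(256) for _ in range(TAIL))
        pages.append(page)
    return pages
```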


What do you think about it? 

 
 
 
  --
  To unsubscribe from this list: send the line unsubscribe kvm in
  the body of a message to majord...@vger.kernel.org
  More majordomo info at 
 http://vger.kernel.org/majordomo-info.html
 


Re: [KVM-AUTOTEST PATCH 1/2] Add KSM test

2009-09-16 Thread Jiri Zupka

- Dor Laor dl...@redhat.com wrote:

 On 09/15/2009 09:58 PM, Jiri Zupka wrote:
  After a quick review I have the following questions:
  1. Why did you implement the guest tool in 'c' and not in python?
 Python is much simpler and you can share some code with the
 server.
 This 'test protocol' would also be easier to understand this
 way.
 
  We need speed and the precise control of allocate memory in pages.
 
  2. IMHO there is no need to use select, you can do blocking read.
 
  We replace socket communication by interactive program communication
 via ssh/telnet
 
  3. Also you can use plain malloc without the more complex ( a bit)
 mmap.
 
  We need address exactly the memory pages. We can't allow shift of
 the data in memory.
 
 You can use the tmpfs+dd idea instead of the specific program as I 
 detailed before. Maybe some other binary can be used. My intention is
 to 
 simplify the test/environment as much as possible.
 

We need compatibility with other systems, like Windows etc.
We want to add support for other systems in the next version.

 


Re: [KVM-AUTOTEST PATCH 1/2] Add KSM test

2009-09-15 Thread Jiri Zupka
 After a quick review I have the following questions:
 1. Why did you implement the guest tool in 'c' and not in python?
   Python is much simpler and you can share some code with the server.
   This 'test protocol' would also be easier to understand this way.

We need speed and precise control of memory allocation at page granularity.

 2. IMHO there is no need to use select, you can do blocking read.

We replaced the socket communication with interactive program communication
via ssh/telnet.

 3. Also you can use plain malloc without the more complex ( a bit) mmap.

We need to address exactly the memory pages. We can't allow the data to
shift in memory.
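The mmap-vs-malloc point can be shown even from Python: an anonymous mmap is page-aligned, so offset i * PAGE_SIZE addresses exactly one page and a fill never straddles a page boundary. A sketch of the idea only (the guest tool under discussion was C); `touch_page` is a made-up name.

```python
import mmap

PAGE_SIZE = mmap.PAGESIZE

def touch_page(m, index, value):
    """Fill exactly one page of an anonymous mapping with a single byte."""
    off = index * PAGE_SIZE  # page-aligned because the mapping itself is
    m[off:off + PAGE_SIZE] = bytes([value]) * PAGE_SIZE

# Anonymous mapping: page-aligned, four pages long, zero-filled initially.
m = mmap.mmap(-1, 4 * PAGE_SIZE)
touch_page(m, 2, 0xAB)
```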