Re: KVM: perf: a smart tool to analyse kvm events

2012-02-20 Thread Pradeep Kumar Surisetty
* Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com [2012-02-17 10:35:04]:

 On 02/16/2012 11:52 PM, Pradeep Kumar wrote:
 
  Xiao,
  
  I tried your perf events patch set on a RHEL 6.1 host and failed to trace
  kvm events; I got the error message below.
  
[root@kvm perf]# ./perf kvm-events report
Warning: unknown op '{'
Warning: Error: expected type 5 but read 1
Warning: failed to read event print fmt for hrtimer_start
Warning: unknown op '{'
Warning: Error: expected type 5 but read 1
Warning: failed to read event print fmt for hrtimer_expire_entry
Analyze events for all VCPUs:
VM-EXIT    Samples  Samples%  Time%  Avg time
Total Samples:0, Total events handled time:0.00us.
[root@kvm perf]#  
   
 
 
 Thanks for your try, Pradeep!
 
 It seems that the kvm events were not recorded.
 
 Was your guest running when kvm-events was executed?

Hello Xiao

My guest was not running when I executed it.

 What is the output of ./perf script | grep kvm_*?

[root@phx3 perf]# ./perf script | grep kvm_*
Warning: unknown op '{'
Warning: Error: expected type 5 but read 1
Warning: failed to read event print fmt for hrtimer_start
Warning: unknown op '{'
Warning: Error: expected type 5 but read 1
Warning: failed to read event print fmt for
hrtimer_expire_entry
# cmdline : /home/patch/linux/tools/perf/perf record -a -R
# -f -m 1024 -c 1 -e kvm:kvm_entry -e kvm:kvm_exit -e
# kvm:kvm_mmio -e kvm:kvm_pio -e timer:* 
# event : name = kvm:kvm_entry, type = 2, config = 0x2ef,
# config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0,
# id = { 6241, 6242, 6243, 6244, 6245, 6246, 6247, 6248 }
# event : name = kvm:kvm_exit, type = 2, config = 0x2ea,
# config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0,
# id = { 6249, 6250, 6251, 6252, 6253, 6254, 6255, 6256 }
# event : name = kvm:kvm_mmio, type = 2, config = 0x2dd,
# config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0,
# id = { 6257, 6258, 6259, 6260, 6261, 6262, 6263, 6264 }
# event : name = kvm:kvm_pio, type = 2, config = 0x2ed,
# config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0,
# id = { 6265, 6266, 6267, 6268, 6269, 6270, 6271, 6272 }
no symbols found in /sbin/killall5, maybe install a debug
package?
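
For reference, the kvm:* tracepoints only produce samples while a guest is
actually running, so the report stays empty otherwise. A minimal sketch of a
record step that guards against that (the helper and the pgrep check are my
own, not part of the patch set; the events match the perf.data header above):

import subprocess

def record_kvm_events(duration=10):
    # kvm:* tracepoints only fire while a guest is running, so refuse to
    # record an empty trace if no qemu process is found.
    if subprocess.call(["pgrep", "-f", "qemu"]) != 0:
        raise RuntimeError("no qemu/KVM guest running; nothing to trace")
    # Same events as in the perf.data header quoted above.
    subprocess.call(["perf", "record", "-a", "-R",
                     "-e", "kvm:kvm_entry", "-e", "kvm:kvm_exit",
                     "-e", "kvm:kvm_mmio", "-e", "kvm:kvm_pio",
                     "sleep", str(duration)])

record_kvm_events()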

--Pradeep

 



ethtool

2011-09-30 Thread pradeep


Hello Amos, Lmr

A couple of networking tests, like ethtool and file_transfer, are not
cleaning up properly: huge files are not deleted after the test, so the
guest runs out of space for the next tests.
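
A rough sketch of the kind of cleanup step those tests could add (the file
paths and the session object are illustrative; session.cmd is used the same
way elsewhere in the virt tests):

def cleanup_scratch_files(session, paths=("/tmp/file_transfer.dat", "/tmp/ethtool.dd")):
    # Remove the large scratch files whether the test passed or failed,
    # so the guest does not run out of space for the next tests.
    for path in paths:
        session.cmd("rm -f %s" % path, timeout=60)

Called from a finally: block at the end of each test, this keeps the guest
image from filling up between runs.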

--Pradeep



[Autotest] [PATCH]: watchdog test extension

2011-09-29 Thread Pradeep Kumar
From b7fc8b21fb3cdd9fefffd05d12dabfd055032f8f Mon Sep 17 00:00:00 2001
From: pradeep y...@example.com
Date: Thu, 29 Sep 2011 11:22:32 +0530
Subject: [PATCH] [KVM][Autotest]: watchdog test extension
 Signed-off-by: Pradeep K Surisetty psuri...@linux.vnet.ibm.com
modified:   client/tests/kvm/subtests.cfg.sample
modified:   client/virt/tests/watchdog.py

---
 client/tests/kvm/subtests.cfg.sample |    2 ++
 client/virt/tests/watchdog.py        |    6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/client/tests/kvm/subtests.cfg.sample b/client/tests/kvm/subtests.cfg.sample
index e4f6718..79a6d8b 100644
--- a/client/tests/kvm/subtests.cfg.sample
+++ b/client/tests/kvm/subtests.cfg.sample
@@ -363,6 +363,8 @@ variants:
 only RHEL.5, RHEL.6
 type = watchdog
extra_params += " -watchdog i6300esb -watchdog-action reset"
+   #watchdog device: i6300esb|ib700
+   #watchdog-actions: reset|shutdown|poweroff|pause|debug
 relogin_timeout = 240
 
 - smbios_table: install setup image_copy unattended_install.cdrom
diff --git a/client/virt/tests/watchdog.py b/client/virt/tests/watchdog.py
index 446f250..2a19c7b 100644
--- a/client/virt/tests/watchdog.py
+++ b/client/virt/tests/watchdog.py
@@ -17,10 +17,10 @@ def run_watchdog(test, params, env):
 relogin_timeout = int(params.get("relogin_timeout", 240))
 watchdog_enable_cmd = "chkconfig watchdog on && service watchdog start"
 
-def watchdog_action_reset():
+def watchdog_action():
 
 Trigger a crash dump through sysrq-trigger
-Ensure watchdog_action(reset) occur.
+Ensure watchdog_action occurs.
 
 session = vm.wait_for_login(timeout=timeout)
 
@@ -37,7 +37,7 @@ def run_watchdog(test, params, env):
 
 logging.info("Enabling watchdog service...")
 session.cmd(watchdog_enable_cmd, timeout=320)
-watchdog_action_reset()
+watchdog_action()
 
 # Close established session
 session.close()
-- 
1.7.0.4
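
Since the config now documents several -watchdog-action values, the check in
the test could also be driven by a parameter instead of assuming a reset. A
rough sketch under that assumption (the watchdog_action parameter,
session.sendline() and vm.wait_for_shutdown() are illustrative, not part of
this patch):

from autotest_lib.client.common_lib import error

def check_watchdog_action(vm, session, params, timeout=240):
    # Which action qemu was started with, e.g. reset|shutdown|poweroff.
    action = params.get("watchdog_action", "reset")
    # Hang the guest so the emulated watchdog device fires.
    session.sendline("echo c > /proc/sysrq-trigger")
    if action == "reset":
        # After a watchdog reset we should be able to log back in.
        vm.wait_for_login(timeout=timeout)
    elif action in ("shutdown", "poweroff"):
        # For these actions the VM should simply go away.
        if not vm.wait_for_shutdown(timeout=timeout):
            raise error.TestFail("Guest still alive after watchdog %s" % action)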



Re: [PATCH] KVM test: Add cpu_hotplug subtest

2011-08-23 Thread pradeep
On Wed, 24 Aug 2011 01:05:13 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 Tests the ability of adding virtual cpus on the fly to qemu using
 the monitor command cpu_set, then after everything is OK, run the
 cpu_hotplug testsuite on the guest through autotest.
 
 Updates: As of the latest qemu-kvm (08-24-2011) HEAD, trying to
 online more CPUs than the ones already available leads to qemu
 hanging:
 
 File /home/lmr/Code/autotest-git/client/virt/kvm_monitor.py, line
 279, in cmd raise MonitorProtocolError(msg)
 MonitorProtocolError: Could not find (qemu) prompt after command
 cpu_set 2 online. Output so far: 
 
 Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
 ---
 client/tests/kvm/tests/cpu_hotplug.py  |   99 +++++++++++++++++++++++
 client/tests/kvm/tests_base.cfg.sample |    7 ++
 2 files changed, 106 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/kvm/tests/cpu_hotplug.py
 
 diff --git a/client/tests/kvm/tests/cpu_hotplug.py b/client/tests/kvm/tests/cpu_hotplug.py
 new file mode 100644
 index 000..fa75c9b
 --- /dev/null
 +++ b/client/tests/kvm/tests/cpu_hotplug.py
 @@ -0,0 +1,99 @@
 +import os, logging, re
 +from autotest_lib.client.common_lib import error
 +from autotest_lib.client.virt import virt_test_utils
 +
 +
 +@error.context_aware
 +def run_cpu_hotplug(test, params, env):
 +
 +Runs CPU hotplug test:
 +
 +1) Pick up a living guest
 +2) Send the monitor command cpu_set [cpu id] for each cpu we
 wish to have
 +3) Verify if guest has the additional CPUs showing up under
 +/sys/devices/system/cpu
 +4) Try to bring them online by writing 1 to the 'online' file
 inside that dir
 +5) Run the CPU Hotplug test suite shipped with autotest inside

It looks good to me. How about adding:
1) off-lining of vcpus,
2) frequent offline-online of vcpus, something like the script below.

#!/bin/sh

SYS_CPU_DIR=/sys/devices/system/cpu

VICTIM_IRQ=15
IRQ_MASK=f0

iteration=0
while true; do
  echo $iteration
  echo $IRQ_MASK > /proc/irq/$VICTIM_IRQ/smp_affinity
  for cpudir in $SYS_CPU_DIR/cpu[1-9]; do
    echo 0 > $cpudir/online
  done
  for cpudir in $SYS_CPU_DIR/cpu[1-9]; do
    echo 1 > $cpudir/online
  done
  iteration=`expr $iteration + 1`
done
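
The same offline/online churn could also live inside the autotest case
itself; a rough sketch reusing the guest session object from the test above
(the helper name and iteration count are arbitrary):

def stress_cpu_hotplug(session, iterations=50):
    # Repeatedly offline and online every CPU except cpu0, the same loop
    # as the shell script above, but driven through the guest session.
    for _ in range(iterations):
        cpus = session.cmd("ls -d /sys/devices/system/cpu/cpu[1-9]*").split()
        for cpu in cpus:
            session.cmd("echo 0 > %s/online" % cpu)
        for cpu in cpus:
            session.cmd("echo 1 > %s/online" % cpu)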


 guest +
 +@param test: KVM test object.
 +@param params: Dictionary with test parameters.
 +@param env: Dictionary with the test environment.
 +
 +vm = env.get_vm(params["main_vm"])
 +vm.verify_alive()
 +timeout = int(params.get("login_timeout", 360))
 +session = vm.wait_for_login(timeout=timeout)
 +
 +n_cpus_add = int(params.get("n_cpus_add", 1))
 +current_cpus = int(params.get("smp", 1))
 +total_cpus = current_cpus + n_cpus_add
 +
 +error.context("getting guest dmesg before addition")
 +dmesg_before = session.cmd("dmesg -c")
 +
 +error.context("Adding %d CPUs to guest" % n_cpus_add)
 +for i in range(total_cpus):
 +vm.monitor.cmd("cpu_set %s online" % i)
 +
 +output = vm.monitor.cmd("info cpus")
 +logging.debug("Output of info cpus:\n%s", output)
 +
 +cpu_regexp = re.compile("CPU #(\d+)")
 +total_cpus_monitor = len(cpu_regexp.findall(output))
 +if total_cpus_monitor != total_cpus:
 +raise error.TestFail("Monitor reports %s CPUs, when VM
 should have %s" %
 + (total_cpus_monitor, total_cpus))
 +
 +dmesg_after = session.cmd("dmesg -c")
 +logging.debug("Guest dmesg output after CPU add:\n%s" %
 dmesg_after) +
 +# Verify whether the new cpus are showing up on /sys
 +error.context("verifying if new CPUs are showing on guest's /sys
 dir")
 +n_cmd = 'find /sys/devices/system/cpu/cpu[0-99] -maxdepth 0
 -type d | wc -l'
 +output = session.cmd(n_cmd)
 +logging.debug("List of cpus on /sys:\n%s" % output)
 +try:
 +cpus_after_addition = int(output)
 +except ValueError:
 +logging.error("Output of '%s': %s", n_cmd, output)
 +raise error.TestFail("Unable to get CPU count after CPU
 addition") +
 +if cpus_after_addition != total_cpus:
 +raise error.TestFail("%s CPUs are showing up under "
 + "/sys/devices/system/cpu, was expecting
 %s" %
 + (cpus_after_addition, total_cpus))
 +
 +error.context("locating online files for guest's new CPUs")
 +r_cmd = 'find /sys/devices/system/cpu/cpu[0-99]/online -maxdepth
 0 -type f'
 +online_files = session.cmd(r_cmd)
 +logging.debug("CPU online files detected: %s", online_files)
 +online_files = online_files.split().sort()
 +
 +if not online_files:
 +raise error.TestFail("Could not find CPUs that can be "
 + "enabled/disabled on guest")
 +
 +for online_file in online_files:
 +cpu_regexp = re.compile("cpu(\d+)", re.IGNORECASE)
 +cpu_id = cpu_regexp.findall(online_file)[0]
 +error.context("changing online status for CPU %s" % cpu_id)
 +check_online_status = 
 +check_online_status = 

Re: [Autotest] [PATCH] Virt: Adding softlockup subtest

2011-08-12 Thread pradeep
On Wed, 20 Jul 2011 22:30:09 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 From: pradeep y...@example.com
 
 This patch introduces a soft lockup/drift test with stress.
 
 1) Boot up a VM.
 2) Build stress on host and guest.
  3) Run the heartbeat monitor with the given options on server and host.
  4) Run for a relatively long time, e.g. 12, 18 or 24 hours.
  5) Output the test result and observe the drift.

Thanks for making the changes.
How about taking the average of the last 10 drift values?

 
 Changes from v2:
  * Fixed up commands being used on guest, lack of proper output
redirection was confusing aexpect
  * Proper clean up previous instances of the monitor programs
lying around, as well as log files
  * Resort to another method of determining host IP if the same
has no fully qualified hostname (stand alone laptops, for
example)
  * Only use a single session on guest to execute all the commands.
previous version was opening unneeded connections.
  * Fix stress execution in guest and host, now the stress instances
effectively start
  * Actively open guest and host firewall rules so heartbeat monitor
communication can happen
 
 Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
 Signed-off-by: Pradeep Kumar Surisetty psuri...@linux.vnet.ibm.com
 ---
 client/tests/kvm/deps/heartbeat_slu.py |  205 ++++++++++++++++++++++++++++
 client/tests/kvm/tests_base.cfg.sample |   18 +++
 client/virt/tests/softlockup.py        |  147 +++++++++++++++++++
 3 files changed, 370 insertions(+), 0 deletions(-)
 create mode 100755 client/tests/kvm/deps/heartbeat_slu.py
 create mode 100644 client/virt/tests/softlockup.py
 
 diff --git a/client/tests/kvm/deps/heartbeat_slu.py b/client/tests/kvm/deps/heartbeat_slu.py
 new file mode 100755
 index 000..697bbbf
 --- /dev/null
 +++ b/client/tests/kvm/deps/heartbeat_slu.py
 @@ -0,0 +1,205 @@
 +#!/usr/bin/env python
 +
 +
 +Heartbeat server/client to detect soft lockups
 +
 +
 +import socket, os, sys, time, getopt
 +
 +def daemonize(output_file):
 +try:
 +pid = os.fork()
 +except OSError, e:
 +raise Exception, "error %d: %s" % (e.strerror, e.errno)
 +
 +if pid:
 +os._exit(0)
 +
 +os.umask(0)
 +os.setsid()
 +sys.stdout.flush()
 +sys.stderr.flush()
 +
 +if file:
 +output_handle = file(output_file, 'a+', 0)
 +# autoflush stdout/stderr
 +sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
 +sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 0)
 +else:
 +output_handle = file('/dev/null', 'a+')
 +
 +stdin_handle = open('/dev/null', 'r')
 +os.dup2(output_handle.fileno(), sys.stdout.fileno())
 +os.dup2(output_handle.fileno(), sys.stderr.fileno())
 +os.dup2(stdin_handle.fileno(), sys.stdin.fileno())
 +
 +def recv_all(sock):
 +total_data = []
 +while True:
 +data = sock.recv(1024)
 +if not data:
 +break
 +total_data.append(data)
 +return ''.join(total_data)
 +
 +def run_server(host, port, daemon, file, queue_size, threshold,
 drift):
 +if daemon:
 +daemonize(output_file=file)
 +sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
 +sock.bind((host, port))
 +sock.listen(queue_size)
 +timeout_interval = threshold * 2
 +prev_check_timestamp = float(time.time())
 +while 1:
 +c_sock, c_addr = sock.accept()
 +heartbeat = recv_all(c_sock)
 +local_timestamp = float(time.time())
 +drift = check_heartbeat(heartbeat, local_timestamp,
 threshold, check_drift)
 +# NOTE: this doesn't work if the only client is the one that
 timed
 +# out, but anything more complete would require another
 thread and
 +# a lock for client_prev_timestamp.
 +if local_timestamp - prev_check_timestamp > threshold * 2.0:
 +check_for_timeouts(threshold, check_drift)
 +prev_check_timestamp = local_timestamp
 +if verbose:
 +if check_drift:
 +print "%.2f: %s (%s)" % (local_timestamp, heartbeat,
 drift)
 +else:
 +print "%.2f: %s" % (local_timestamp, heartbeat)
 +
 +def run_client(host, port, daemon, file, interval):
 +if daemon:
 +daemonize(output_file=file)
 +seq = 1
 +while 1:
 +try:
 +sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 
 +sock.connect((host, port))
 +heartbeat = get_heartbeat(seq)
 +sock.sendall(heartbeat)
 +sock.close()
 +if verbose:
 +print heartbeat
 +except socket.error, (value, message):
 +print "%.2f: ERROR, %d - %s" % (float(time.time()),
 value, message) +
 +seq += 1
 +time.sleep(interval)
 +
 +def get_heartbeat(seq=1):
 +return "%s %06d %.2f" % (hostname, seq, float(time.time()))
 +
 +def check_heartbeat(heartbeat, local_timestamp, threshold

Re: [Autotest] [PATCH] Virt: Adding softlockup subtest

2011-08-12 Thread pradeep
On Fri, 12 Aug 2011 12:37:15 +0530
pradeep psuri...@linux.vnet.ibm.com wrote:

 On Wed, 20 Jul 2011 22:30:09 -0300
 Lucas Meneghel Rodrigues l...@redhat.com wrote:
 
  From: pradeep y...@example.com
  
  This patch introduces a soft lockup/drift test with stress.
  
  1) Boot up a VM.
  2) Build stress on host and guest.
   3) Run the heartbeat monitor with the given options on server and host.
   4) Run for a relatively long time, e.g. 12, 18 or 24 hours.
   5) Output the test result and observe the drift.
 
 Thanks for making the changes.
 How about taking the average of the last 10 drift values?

I observed the values below for my softlockup test. The drift values are
more or less similar (+0.01, +0.02), so there wouldn't be much difference
between the last value and the average of the last 10 (a sketch for
computing that average from the log is included after it).

For stress & performance kinds of tests, why do we need a PASS/FAIL? We
only care about the drift value here.


1313148260.65: localhost.localdomain 000417 1313148259.45 (drift +0.01 (-0.00))
1313148261.65: localhost.localdomain 000418 1313148260.46 (drift +0.02 (+0.01))
1313148262.65: localhost.localdomain 000419 1313148261.46 (drift +0.02 (-0.00))
1313148263.66: localhost.localdomain 000420 1313148262.46 (drift +0.02 (-0.00))
1313148264.66: localhost.localdomain 000421 1313148263.46 (drift +0.01 (-0.00))
1313148265.76: localhost.localdomain 000422 1313148264.56 (drift +0.01 (-0.00))
1313148266.76: localhost.localdomain 000423 1313148265.56 (drift +0.01 (-0.00))
1313148267.76: localhost.localdomain 000424 1313148266.57 (drift +0.02 (+0.01))
1313148268.76: localhost.localdomain 000425 1313148267.57 (drift +0.02 (-0.00))
1313148269.77: localhost.localdomain 000426 1313148268.57 (drift +0.02 (-0.00))
1313148270.87: localhost.localdomain 000427 1313148269.67 (drift +0.01 (-0.00))
1313148271.87: localhost.localdomain 000428 1313148270.68 (drift +0.02 (+0.01))
1313148272.87: localhost.localdomain 000429 1313148271.68 (drift +0.02 (-0.00))
1313148273.88: localhost.localdomain 000430 1313148272.68 (drift +0.02 (-0.00))
1313148274.88: localhost.localdomain 000431 1313148273.68 (drift +0.01 (-0.00))
1313148275.97: localhost.localdomain 000432 1313148274.78 (drift +0.02 (+0.01))
1313148276.97: localhost.localdomain 000433 1313148275.78 (drift +0.02 (-0.00))
1313148277.98: localhost.localdomain 000434 1313148276.78 (drift +0.02 (-0.00))
1313148278.98: localhost.localdomain 000435 1313148277.78 (drift +0.01 (-0.00))
1313148279.98: localhost.localdomain 000436 1313148278.78 (drift +0.01 (-0.00))
1313148281.08: localhost.localdomain 000437 1313148279.89 (drift +0.02 (+0.01))
1313148282.09: localhost.localdomain 000438 1313148280.89 (drift +0.02 (-0.00))
1313148283.09: localhost.localdomain 000439 1313148281.89 (drift +0.01 (-0.00))
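
A standalone sketch for computing that average from a log in the format
above (not part of the patch):

import re

def mean_recent_drift(log_path, last_n=10):
    # Lines look like:
    # 1313148260.65: localhost.localdomain 000417 1313148259.45 (drift +0.01 (-0.00))
    drifts = []
    with open(log_path) as log:
        for line in log:
            m = re.search(r"\(drift ([+-]\d+\.\d+)", line)
            if m:
                drifts.append(float(m.group(1)))
    recent = drifts[-last_n:]
    return sum(recent) / len(recent) if recent else 0.0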


 
  
  Changes from v2:
   * Fixed up commands being used on guest, lack of proper output
 redirection was confusing aexpect
   * Proper clean up previous instances of the monitor programs
 lying around, as well as log files
   * Resort to another method of determining host IP if the same
 has no fully qualified hostname (stand alone laptops, for
 example)
   * Only use a single session on guest to execute all the commands.
 previous version was opening unneeded connections.
   * Fix stress execution in guest and host, now the stress instances
 effectively start
   * Actively open guest and host firewall rules so heartbeat monitor
 communication can happen
  
  Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
  Signed-off-by: Pradeep Kumar Surisetty psuri...@linux.vnet.ibm.com
  ---
   client/tests/kvm/deps/heartbeat_slu.py |  205 ++++++++++++++++++++++++++++
   client/tests/kvm/tests_base.cfg.sample |   18 +++
   client/virt/tests/softlockup.py        |  147 +++++++++++++++++++
   3 files changed, 370 insertions(+), 0 deletions(-)
   create mode 100755 client/tests/kvm/deps/heartbeat_slu.py
   create mode 100644 client/virt/tests/softlockup.py
  
   diff --git a/client/tests/kvm/deps/heartbeat_slu.py b/client/tests/kvm/deps/heartbeat_slu.py
   new file mode 100755
   index 000..697bbbf
   --- /dev/null
   +++ b/client/tests/kvm/deps/heartbeat_slu.py
  @@ -0,0 +1,205 @@
  +#!/usr/bin/env python
  +
  +
  +Heartbeat server/client to detect soft lockups
  +
  +
  +import socket, os, sys, time, getopt
  +
  +def daemonize(output_file):
  +try:
  +pid = os.fork()
  +except OSError, e:
   +raise Exception, "error %d: %s" % (e.strerror, e.errno)
  +
  +if pid:
  +os._exit(0)
  +
  +os.umask(0)
  +os.setsid()
  +sys.stdout.flush()
  +sys.stderr.flush()
  +
  +if file:
  +output_handle = file(output_file, 'a+', 0)
  +# autoflush stdout/stderr
  +sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
  +sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 0)
  +else:
  +output_handle = file('/dev/null', 'a+')
  +
  +stdin_handle = open('/dev/null', 'r')
  +os.dup2(output_handle.fileno

Re: KVM autotest tip of the week - How to run KVM autotest tests on an existing guest image

2011-07-25 Thread pradeep
On Mon, 25 Jul 2011 15:25:15 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 Hi folks, a little later than expected, here are the docs that
 explain how to run KVM autotest tests on an existing guest image:
 
 http://autotest.kernel.org/wiki/KVMAutotest/RunTestsExistingGuest
 
 Since we are making adjustments on how to write and contribute new 
 tests, writing a new test will be the next tip of the week. Please
 bear with us :) We'd love to hear your feedback on our docs.
 

Thanks Lucas, it's very informative.
How about multi-host migration?

--Pradeep

 Lucas



Re: [Autotest] [PATCH 0/5] Make unattended install on Linux safer, fix OpenSUSE/SLES installs

2011-04-19 Thread pradeep
On Mon, 18 Apr 2011 19:40:40 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 While working on making unattended install on Linux guests safer,
 noticed that the recent patches changing unattended install to use
 -kernel and -initrd options [1] were breaking OpenSUSE and SLES
 installation. As a maintainer it is my duty to fix such breakages, so
 I did it. I tested all changes with OpenSUSE 11.4, which I downloaded
 from the opensuse website.
 
 I ask the IBM guys that contributed this guest support to go through
 and test the changes, I need some help here. Anyway, I am confident
 that this patchset will bring a major improvement for the users of
 those guests.

Hello Lucas, 

Now SLES guest install works fine. Thanks for fixing it. 

--pradeep


[KVM] guest remote migration fails

2011-02-07 Thread pradeep
Migration of a guest (both local and remote) fails with a 2.6.37-rc8 host.

Test Procedure:
---
A is the source host, B is the destination host:
1. Start the VM on B with the exact same parameters as the VM on A, in
migration-listen mode:
B: qemu-command-line -incoming tcp:0: (or other PORT))
2. Start the migration (always on the source host):
A: migrate -d tcp:B: (or other PORT)
3. Check the status (on A only):
A: (qemu) info migrate   
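
For reference, the same start-and-poll loop can be scripted against the
source VM's monitor, the way the autotest tests drive it (a sketch;
vm.monitor.cmd is used the same way in the cpu_hotplug test quoted earlier,
the rest is illustrative):

import time

def migrate_and_wait(vm, dest_host, port, timeout=600):
    # Step 2: start a detached migration from the source monitor,
    # equivalent to typing "migrate -d tcp:B:PORT" by hand.
    vm.monitor.cmd("migrate -d tcp:%s:%s" % (dest_host, port))
    # Step 3: poll "info migrate" until it reports completed or failed.
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = vm.monitor.cmd("info migrate")
        if "completed" in status:
            return
        if "failed" in status:
            raise RuntimeError("Migration failed:\n%s" % status)
        time.sleep(2)
    raise RuntimeError("Migration did not finish within %d seconds" % timeout)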


Expected results:
-
Migration should complete without any error

Actual results:

qemu-system-x86_64: VQ 2 size 0x40 Guest index 0xf000 inconsistent with
Host index 0x0: delta 0xf000
load of migration failed

Guest OS:
--
windows

Host Kernel:
---
2.6.37.rc8 on HS22

command used:
-
Source(A):
 /usr/local/bin/qemu-system-x86_64  -enable-kvm -m 4096 -smp 4  -name
win32 -monitor stdio  -boot c -drive
file=/home/storage/yogi/kvm_autotest_root/images/win7-32.raw,if=none,id=drive-ide0-0-0,boot=on,format=raw,cache=none
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
-netdev
tap,script=/home/yog/autotest/client/tests/kvm/scripts/qemu-ifup,id=hostnet0
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:86:e4:97
-usb -device usb-tablet,id=input0 -vnc :0


Destination (B):
/usr/local/bin/qemu-system-x86_64  -enable-kvm -m 4096 -smp 4  -name
win32 -monitor stdio  -boot c -drive
file=/home/storage/yogi/kvm_autotest_root/images/win7-32.raw,if=none,id=drive-ide0-0-0,boot=on,format=raw,cache=none
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
-netdev tap,script=/home/yog/autotest/client/tests/kvm/scripts/qemu-ifup,id=hostnet0
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:86:e4:97
-usb -device usb-tablet,id=input0 -vnc :5 -incoming tcp:0:




Re: Startup/Shutdown scripts for KVM Machines in Debian (libvirt)

2010-11-10 Thread pradeep
On Wed, 10 Nov 2010 09:01:40 +0100
Hermann Himmelbauer du...@qwer.tk wrote:

 Hi,
 I manage my KVM machines via libvirt and wonder if there are any
 init.d scripts for automatically starting up and shutting down
 virtual machines during boot/shutdown of the host?
 
 Writing this for myself seems to be not that simple, as when shutting
 down, the system has somehow to wait until all machines are halted
 (not responding guests have to be destroyed etc.), and I don't really
 know how to accomplish this.
 
 My host system is Debian Lenny, is there anything available? 
 Perhaps libvirt offers something I'm unaware of?

You can set it using autostart in virsh.
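
For example (a minimal sketch; the domain name is just a placeholder):

import subprocess

def enable_autostart(domain):
    # Equivalent to running "virsh autostart <domain>"; libvirt then starts
    # the guest automatically when libvirtd comes up at host boot.
    subprocess.check_call(["virsh", "autostart", domain])

def disable_autostart(domain):
    subprocess.check_call(["virsh", "autostart", "--disable", domain])

enable_autostart("myguest")  # "myguest" is only an example name

Shutdown ordering still needs to be handled separately, e.g. by an init
script that waits for the guests to stop before the host goes down.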

 
 
 Best Regards,
 Hermann
 



qemu aborts if i add a already registered device from qemu monitor ..

2010-10-19 Thread pradeep
Hi

I tried to add a device to a guest from the upstream qemu monitor using
device_add. When I unknowingly tried to add an already registered device
from the qemu monitor, qemu aborted. I don't see a reason to kill the
monitor; abort() is a bit rough and we need a better way to handle it. If a
user tries to add an already registered device, qemu should tell the user
that the device is already registered; an error message would be better
than aborting qemu.


QLIST_FOREACH(block, &ram_list.blocks, next) {
    if (!strcmp(block->idstr, new_block->idstr)) {
        fprintf(stderr, "RAMBlock \"%s\" already registered, abort!\n",
                new_block->idstr);
        abort();
    }


If I return some other value in the above code instead of abort(), I would
need to change the code for every device, which I don't want to do. Is
there a way to check whether the device is already registered at the very
beginning of the device_add call?



Thanks
Pradeep


Re: [PATCH 0/3] Launch other test during migration

2010-10-18 Thread pradeep
On Mon, 18 Oct 2010 00:59:20 -0400 (EDT)
Jason Wang jasow...@redhat.com wrote:

 Hello guys:
 
 Any further suggestion which I need to improve those patches?
 
 I agree that there's no much tests need to be run is this way except
 for migration. In order to validate the function of migration, many
 tests need to be run in parallel with migration and this series is
 just aimed at this.
 
 One major advantage of this is that it could greatly simplified the
 test design and could reuse existed test cases without modification.
 Without this, we must split the tests cases itself or split the
 migration test and modification to existed code is also required.
 
 One major issue is that not all tests could be run in this way, tests
 which needs to do monitor operation may not work well, but ususally
 this kind of test is meaningless during migration.
 
 Another probably issue is that we can not control precisely when the
 migration start, but it's not important to tests I post here because
 the background tests are usually launch as a kind of stress and need
 much more time than just a single migration to complete, so what we
 need to do here is just let the migration run until the background
 tests finish.
 
 The fact is that these tests work well and have find real issues with
 migration.
 
 Any comments or suggestions?
 
It is good to cover different aspects of live migration using KVM.

In the cloud, when we are targeting minimal downtime for migration, we
always need to think about the performance impact of the resources and
actions involved.

We feel it is good to cover a couple of the combinations below, i.e. any
state change of a VM + migration (a sketch of how to drive such a
background workload in parallel with migration follows the list):

1. migration + reading data from host devices like CD, USB
2. migration + file transfer
3. migration + reboot
4. internet download + migration
5. guest given a shutdown command while live migration happens in parallel
6. hotplugging of nic, storage, memory, cpu + live migration
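
A rough sketch of that pattern (migrate_func and bg_cmd are placeholders
for whatever the framework provides):

import threading

def run_bg_test_during_migration(session, migrate_func, bg_cmd, bg_timeout=3600):
    # Start the background workload (file transfer, stress, download, ...)
    # in a thread, then keep migrating until it finishes.
    bg = threading.Thread(target=session.cmd, args=(bg_cmd,),
                          kwargs={"timeout": bg_timeout})
    bg.start()
    try:
        while bg.is_alive():
            migrate_func()          # one migration round trip
    finally:
        bg.join()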


Thanks
Pradeep


Re: [PATCH 2/2] KVM test: Remove image_boot=yes from virtio_blk variant

2010-10-14 Thread pradeep
On Thu, 14 Oct 2010 01:24:12 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 Recent qemu can handle virtio boot without boot=on,
 and qemu.git will simply state the option as invalid. So
 remove it from the default config on tests_base.cfg, just
 leave it there commented in case someone is testing older
 versions.


But older qemu shipped with distros might require boot=on, so it's good to
check the qemu version first; a rough sketch of such a check follows.
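
A minimal sketch of such a check (the 0.13 cut-off and the binary path are
assumptions; adjust them to the qemu under test):

import re, subprocess

def needs_boot_on(qemu_binary="/usr/bin/qemu-kvm"):
    # Older distro qemu-kvm still wants boot=on on the virtio -drive,
    # while current qemu.git rejects it.
    out = subprocess.Popen([qemu_binary, "-version"],
                           stdout=subprocess.PIPE).communicate()[0].decode()
    m = re.search(r"version (\d+)\.(\d+)", out)
    if not m:
        return False
    return (int(m.group(1)), int(m.group(2))) < (0, 13)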


Thanks
Pradeep


[AUTOTEST] [PATCH 1/2] KVM : ping6 test

2010-10-14 Thread pradeep
This patch is for ping6 testing:

* ping6 with various message sizes, guest to/from local/remote host,
  using link-local addresses.
  By default IPv6 seems to be disabled on virbr0; enable it by doing:
  echo 0 > /proc/sys/net/ipv6/conf/virbr0/disable_ipv6

Please find the patch attached below.

Thanks
Pradeep




ipv6_1
Description: Binary data


[AUTOTEST] [PATCH 1/2] KVM : ping6 test

2010-10-14 Thread pradeep
Changes for tests_base.cfg to include ping6 test

Please find the patch attached below.

Thanks
Pradeep

ipv6_2
Description: Binary data


Re: [Autotest] [AUTOTEST] [PATCH 1/2] KVM : ping6 test

2010-10-14 Thread pradeep
On Thu, 14 Oct 2010 18:05:04 +0800
Amos Kong ak...@redhat.com wrote:

 On Thu, Oct 14, 2010 at 02:56:59PM +0530, pradeep wrote:
  This patch is for Ping6 testing
  
  * ping6 with various message sizes guest to/from local/remote
  host using link-local addresses 
    By default IPv6 seems to be disabled on virbr0. Enable it by doing:
    echo 0 > /proc/sys/net/ipv6/conf/virbr0/disable_ipv6
  
  Please find the below attached patch
 
 We also need to update the related code in kvm_test_utils.py, and consider
 the difference between 'ping' and 'ping6'.


The ping6 test calls the same ping code and just enables IPv6, so we don't
need to make any changes in kvm_test_utils.py for ping6.

 
  Signed-off-by: Pradeep K Surisetty psuri...@linux.vnet.ibm.com
  ---
  --- autotest/client/tests/kvm/tests/ping.py 2010-10-14
  14:20:52.523791118 +0530 +++
  autotest_new/client/tests/kvm/tests/ping.py 2010-10-14
  14:46:57.711797139 +0530 @@ -1,5 +1,6 @@ -import logging
  +import logging, time
   from autotest_lib.client.common_lib import error
  +from autotest_lib.client.bin import utils
   import kvm_test_utils
   
   
  @@ -27,10 +28,18 @@ def run_ping(test, params, env):
    nics = params.get("nics").split()
    strict_check = params.get("strict_check", "no") == "yes"
   
   +address_type = params.get("address_type")
   +# By default IPv6 seems to be disabled on virbr0.
   +ipv6_cmd = "echo %s > /proc/sys/net/ipv6/conf/virbr0/disable_ipv6"
 
 We may use other bridge, so 'virbr0', need replace this hardcode name.
 We can reference to
 'autotest-upstream/client/tests/kvm/scripts/qemu-ifup'
 switch=$(/usr/sbin/brctl show | awk 'NR==2 { print $1 }')
 
 
  +
   packet_size = [0, 1, 4, 48, 512, 1440, 1500, 1505, 4054, 4055,
  4096, 4192, 8878, 9000, 32767, 65507]
   
   try:
   +if address_type == "ipv6":
   +utils.run(ipv6_cmd % "0")
  +time.sleep(5)
  +
   for i, nic in enumerate(nics):
   ip = vm.get_address(i)
   if not ip:
  @@ -68,5 +77,9 @@ def run_ping(test, params, env):
   if status != 0:
    raise error.TestFail("Ping returns non-zero
   value %s" % output)
   +if address_type == "ipv6":
   +utils.run(ipv6_cmd % "1")
  +time.sleep(5)
  +
   finally:
   session.close()
  ---
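
On the hardcoded virbr0 that Amos points out, a rough sketch of discovering
the bridge at runtime instead (same idea as the brctl/awk one-liner he
quotes; utils and error are already imported by the patch):

def get_default_bridge():
    # Equivalent of: /usr/sbin/brctl show | awk 'NR==2 { print $1 }'
    # i.e. use the first bridge listed instead of assuming "virbr0".
    lines = utils.run("/usr/sbin/brctl show").stdout.splitlines()
    if len(lines) < 2:
        raise error.TestError("No bridge found on this host")
    return lines[1].split()[0]

ipv6_cmd = ("echo %s > /proc/sys/net/ipv6/conf/" +
            get_default_bridge() + "/disable_ipv6")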



Re: Network Patch set V4

2010-10-13 Thread pradeep
On Tue, 12 Oct 2010 15:23:05 +0800
Amos Kong ak...@redhat.com wrote:

 On Mon, Oct 11, 2010 at 08:09:27PM +0530, pradeep wrote:
  Hi Lucas, i covered below  tests on RHEL 5.5 , RHEL 6 guests with
  vhost enbled.  Please find the below errors and cause for errros.
  
  I also attached few other test cases can be added to out TODO list
  of network patchset. I am working on couple of these issues.
  
  1. Nice_promisc:
  -
  With RHEL 5.5: PASS
 
 ...
 
 
 Thanks for your feedback, I will re-test them with rhel5/6 and reply
 the test result.
 

Thanks for that.

 
  -
  
  
  We may include below tests to for Network Patch set:
  I am working on couple of issues among below mentioned. 
  
  Ping6 testing
  
  * ping6 with various message sizes guest to/from local/remote
  host using link-local addresses 
By default IPv6 seems to be disabled  on virbr0. Enable it by
doing 
  
  NIC bonding test :
  
  https://fedoraproject.org/wiki/QA:Testcase_Virtualization_Nic_Bonding
 
 I have a draft patch for bonding testing, will improve and send it to
 mailist.
 
  NFS testing
  
  * create NFS server on guest, mount dir on host, copy and delete
files, do reverse on host 
  
  Setting and unsetting ethernet adapters.
  
 set_link name [up|down]
 
 I only wrote a testcases, didn't automate it, could you help to
 review ?
 
 Steps:
 1. boot up a guest with virtio_nic
 2. login guest through serial
 3. transfer a big file from guest to host
 guest) # scp a.out $host_ip:~
 4. put down link by monitor
 qemu) # set_link $nic_model.0 down
 5. try to capture data by tcpdump
 host)# tcpdump port $scp_port and src $guest_ip -i $tap
 6. put up link by monitor
 qemu) set_link $nic_model.0 up
 7. try to capture data by tcpdump
 host)# tcpdump port $scp_port and src $guest_ip -i $tap
 8. transfer a big file from host to guest
 host) # scp a.out $guest_ip:~
 9. put down link by monitor
 qemu) # set_link $nic_model.0 down
 10. try to capture data by tcpdump
 guest)# tcpdump port $scp_port and dest $guest_ip -i eth0
 11. put up link by monitor
 qemu) set_link $nic_model.0 up
 12. try to capture data by tcpdump
 host)# tcpdump port $scp_port and dest $guest_ip -i eth0

As Michael suggested, something simpler like ping would be fine (a rough
sketch follows the expected results below). Apart from this, everything
seems fine.


 
 Expected Results:
 4. it should capture nothing
 6. it should capture some packets
 10. it should capture nothing
 12. it should capture some packets
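
A rough sketch of that ping-based variant (the device name and host IP are
placeholders; set_link via vm.monitor.cmd and session.get_command_status
are used the same way elsewhere in these tests):

from autotest_lib.client.common_lib import error

def check_set_link(vm, session, device="virtio-net-pci.0", host_ip="192.168.122.1"):
    def ping_works():
        return session.get_command_status("ping -c 3 -W 2 %s" % host_ip) == 0

    if not ping_works():
        raise error.TestError("Ping failed before touching the link")
    # Link down: the guest should lose connectivity.
    vm.monitor.cmd("set_link %s down" % device)
    if ping_works():
        raise error.TestFail("Ping still works with the link down")
    # Link up again: connectivity should come back.
    vm.monitor.cmd("set_link %s up" % device)
    if not ping_works():
        raise error.TestFail("Ping does not recover after the link is up")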



Re: Network Patch set V4

2010-10-13 Thread pradeep



Hi Michael

Could you share a pointer to your tree which has the RHEL 6 + vhost fixes?

Thanks
Pradeep



Network Patch set V4

2010-10-11 Thread pradeep
Hi Lucas, I covered the tests below on RHEL 5.5 and RHEL 6 guests with vhost
enabled. Please find the errors and their causes below.

I also attached a few other test cases that can be added to our TODO list
for the network patchset. I am working on a couple of these issues.

1. Nice_promisc:
-
With RHEL 5.5: PASS

With RHEL 6: FAIL

It fails to log in to the RHEL 6 guest over serial.


02:07:06 ERROR| Test failed: TestFail: Could not log into guest 'vm2'


2. nicdriver_unload 


With RHEL 5.5: FAIL

With RHEL 6: FAIL


03:17:26 DEBUG| Got shell prompt -- logged in
03:17:26 DEBUG| Contents of environment: {'address_cache':
{'00:1a:4a:65:09:09': '192.168.122.66', '9a:52:2f:62:dd:52':
'192.168.122.39', '00:1a:64:12:e4:c1': '9.126.89.23',
'9a:52:2f:62:1a:b8': '192.168.122.95', '9a:52:2f:62:e6:67':
'192.168.122.176', '9a:52:2f:62:8c:b8': '192.168.122.59',
'9a:52:2f:62:0b:38': '192.168.122.7'}, 'vm__vm2': <kvm_vm.VM instance
at 0x2311f80>, 'tcpdump': <kvm_subprocess.kvm_tail instance at
0x2314710>, 'version': 0}
03:17:26 INFO | ['iteration.1']
03:17:26 ERROR| Exception escaping from test: Traceback (most recent call last):
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 412, in _exec
    _call_test_function(self.execute, *p_args, **p_dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 598, in _call_test_function
    return func(*args, **dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 284, in execute
    postprocess_profiled_run, args, dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 202, in _call_run_once
    self.run_once_profiling(postprocess_profiled_run, *args, **dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 308, in run_once_profiling
    self.run_once(*args, **dargs)
  File "/home/pradeep/vhost_net/autotest/client/tests/kvm/kvm.py", line 73, in run_once
    run_func(self, params, env)
  File "/home/pradeep/vhost_net/autotest/client/tests/kvm/tests/nicdriver_unload.py", line 27, in run_nicdriver_unload
    raise error.TestFail("Could not log into guest '%s'" % vm.name)

3. Netperf
--


RHEL 5.5: FAIL


RHEL6: FAIL

Some tests fail with TCP_CRR and UDP_RR. Looks like a bug.

4.multicast
-
RHEL 5.5: fail
RHEL 6:   PASS

09:14:44 DEBUG| Command failed; status: 147, output:
join_mcast_pid:8638

[1]+  Stopped python /tmp/join_mcast.py 20 225.0.0 1
09:14:44 INFO | Initial ping test, mcast: 225.0.0.1

09:14:44 DEBUG| PING 225.0.0.1 (225.0.0.1) from 9.126.89.201
e1000_0_5900: 56(84) bytes of data.
09:15:03 DEBUG| 


09:15:03 ERROR| Test failed: TestFail:  Ping return non-zero value PING
225.0.0.1 (225.0.0.1) from 9.126.89.201 e1000_0_5900: 56(84) bytes of
data.

5. mac_chnage
-

RHEL 5.5: PASS
RHEL6: FAIL

Trying to log into guest 'vm2' by serial fails.

This is the reason for nic_promisc also.

6. jumbo:
--

RHEL:5.5 FAIL
RHEL6: FAIL

09:50:55 DEBUG| PING 10.168.0.9 (10.168.0.9) from 9.126.89.201
e1000_0_5900: 16082(16110) bytes of data.
09:50:57 DEBUG| 
09:50:57 DEBUG| --- 10.168.0.9 ping statistics ---
09:50:57 DEBUG| 1 packets transmitted, 0 received, 100% packet loss,
time 2134ms
09:50:57 DEBUG| 
09:50:57 DEBUG| (Process terminated with status 1)
09:50:58 DEBUG| PING 10.168.0.9 (10.168.0.9) from 9.126.89.201
e1000_0_5900: 16082(16110) bytes of data.
09:51:00 DEBUG| 
09:51:00 DEBUG| --- 10.168.0.9 ping statistics ---
09:51:00 DEBUG| 1 packets transmitted, 0 received, 100% packet loss,
time 2047ms
09:51:00 DEBUG| 
09:51:00 DEBUG| (Process terminated with status 1)
09:51:01 DEBUG| PING 10.168.0.9 (10.168.0.9) from 9.126.89.201
e1000_0_5900: 16082(16110) bytes of data.
09:51:03 DEBUG| 
09:51:03 DEBUG| --- 10.168.0.9 ping statistics ---
09:51:03 DEBUG| 1 packets transmitted, 0 received, 100% packet loss,
time 2041ms
09:51:03 DEBUG| 
09:51:03 DEBUG| (Process terminated with status 1)
09:51:04 DEBUG| PING 10.168.0.9 (10.168.0.9) from 9.126.89.201
e1000_0_5900: 16082(16110) bytes of data.
09:51:06 DEBUG| 
09:51:06 DEBUG| --- 10.168.0.9 ping statistics ---
09:51:06 DEBUG| 1 packets transmitted, 0 received, 100% packet loss,
time 2036ms
09:51:06 DEBUG| 
09:51:06 DEBUG| (Process terminated with status 1)
09:51:07 DEBUG| Timeout elapsed
09:51:07 DEBUG| e1000_0_5900 Link encap:Ethernet  HWaddr
1E:1D:84:CF:CD:6B  


04:45:11 DEBUG| Running 'arp -d 192.168.122.115 -i e1000_0_5900'
04:45:11 ERROR| Test failed: TestError: MTU is not as expected even
after 10 seconds


7. file_transfer

RHEL 5.5: FAIL

File transfer passes from host to guest, but fails when transferring from
guest to host. Need to debug this further.


RHEL 6: pass


8. ethtool
---
RHEL 5.5:  PASS
RHEL6: FAIL

05:26:44 DEBUG| Command failed; status: 92, output: Cannot set large
receive offload settings: Operation not supported
05:26:44 ERROR| Fail

Re: [Autotest] [PATCH 00/18] Network Patchset v4

2010-10-07 Thread pradeep
On Wed, 06 Oct 2010 23:09:33 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 On Mon, 2010-09-27 at 18:43 -0400, Lucas Meneghel Rodrigues wrote:
  We are close to the end of this journey. Several little problems
  were fixed and we are down to some little problems:
 
 Ok, all patches applied. Thanks to everyone that helped on this
 effort!

Thanks for applying all the patches upstream. Can you push the vhost_net
patch as well?
Please find my test results on RHEL 5.5 and RHEL 6 guests (vhost enabled)
attached below.

I found issues with the mac_change & guest_s4 code. We need to fix these as
well; I will send the errors in a separate mail.

Thanks
Pradeep



vhost
Description: Binary data


Re: [Autotest] [PATCH 00/18] Network Patchset v4

2010-10-07 Thread pradeep
On Thu, 07 Oct 2010 10:54:16 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:


 
 I tested macchange quite a lot, so I wonder what the problem is. About
 guest_s4, interesting, we've been running this for a while, with no
 apparent problems. Please let us know and let's fix it ASAP.
 

I hope you have checked the attachment in my previous mail for the other
failures with vhost_net.


Regarding mac_change:
---

10/07 21:58:45 ERROR|   kvm:0080| Test failed: TestFail: Could not log into guest 'vm1'
10/07 21:59:00 ERROR|  test:0415| Exception escaping from test: Traceback (most recent call last):
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 412, in _exec
    _call_test_function(self.execute, *p_args, **p_dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 598, in _call_test_function
    return func(*args, **dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 284, in execute
    postprocess_profiled_run, args, dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 202, in _call_run_once
    self.run_once_profiling(postprocess_profiled_run, *args, **dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 308, in run_once_profiling
    self.run_once(*args, **dargs)
  File "/home/pradeep/vhost_net/autotest/client/tests/kvm/kvm.py", line 73, in run_once
    run_func(self, params, env)
  File "/home/pradeep/vhost_net/autotest/client/tests/kvm/tests/mac_change.py", line 24, in run_mac_change
    raise error.TestFail("Could not log into guest '%s'" % vm.name)
TestFail: Could not log into guest 'vm1'

guest_s4

After suspending the guest, autotest fails to bring it back. 

10:43:51 DEBUG| Could not verify MAC-IP address mapping: 9a:18:2d:12:c9:56 --- 10.168.0.9
10:43:51 DEBUG| IP address or port unavailable
10:43:56 DEBUG| Could not verify MAC-IP address mapping: 9a:18:2d:12:c9:56 --- 10.168.0.9
10:43:56 DEBUG| IP address or port unavailable
10:44:00 DEBUG| (address cache) Adding cache entry: 02:18:0a:cc:59:c0 --- 10.168.0.8
10:44:00 DEBUG| (address cache) Adding cache entry: 02:18:0a:cc:59:c0 --- 10.168.0.8
10:44:01 DEBUG| Could not verify MAC-IP address mapping: 9a:18:2d:12:c9:56 --- 10.168.0.9
10:44:01 DEBUG| IP address or port unavailable
10:44:06 DEBUG| Could not verify MAC-IP address mapping: 9a:18:2d:12:c9:56 --- 10.168.0.9
10:44:06 DEBUG| IP address or port unavailable
10:44:08 DEBUG| Timeout elapsed
10:44:08 ERROR| Test failed: TestFail: Could not log into VM after resuming from suspend to disk
10:44:08 DEBUG| Postprocessing VM 'vm1'...

Thanks
Pradeep



Re: [PATCH 18/18] KVM test: Add subtest of testing offload by ethtool

2010-10-06 Thread pradeep
On Mon, 27 Sep 2010 18:44:04 -0400
Lucas Meneghel Rodrigues l...@redhat.com wrote:


 +
 +vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
 +session = kvm_test_utils.wait_for_login(vm,
 +  timeout=int(params.get("login_timeout", 360)))
 +# Let's just error the test if we identify that there's no
 ethtool installed
 +if session.get_command_status("ethtool -h"):
 +raise error.TestError("Command ethtool not installed on
 guest")
 +session2 = kvm_test_utils.wait_for_login(vm,
 +  timeout=int(params.get("login_timeout", 360)))
 +mtu = 1514
 +feature_status = {}
 +filename = "/tmp/ethtool.dd"
 +guest_ip = vm.get_address()
 +ethname = kvm_test_utils.get_linux_ifname(session,
 vm.get_mac_address(0))
 +supported_features = params.get("supported_features").split()

I guess this split() expects some input.

23:48:03 ERROR| Test failed: AttributeError: 'NoneType' object has no
attribute 'split'

22.12', '00:1a:4a:65:09:09': '192.168.122.66', '9a:52:2f:62:12:63':
'192.168.122.151', '9a:52:2f:62:6b:28': '192.168.122.35'}, 'version':
0, 'tcpdump': <kvm_subprocess.kvm_tail instance at 0x27cb200>}
23:48:05 INFO | ['iteration.1']
23:48:05 ERROR| Exception escaping from test: Traceback (most recent call last):
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 412, in _exec
    _call_test_function(self.execute, *p_args, **p_dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 605, in _call_test_function
    raise error.UnhandledTestFail(e)
UnhandledTestFail: Unhandled AttributeError: 'NoneType' object has no
attribute 'split'
Traceback (most recent call last):
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 598, in _call_test_function
    return func(*args, **dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 284, in execute
    postprocess_profiled_run, args, dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 202, in _call_run_once
    self.run_once_profiling(postprocess_profiled_run, *args, **dargs)
  File "/home/pradeep/vhost_net/autotest/client/common_lib/test.py", line 308, in run_once_profiling
    self.run_once(*args, **dargs)
  File "/home/pradeep/vhost_net/autotest/client/tests/kvm/kvm.py", line 73, in run_once
    run_func(self, params, env)
  File "/home/pradeep/vhost_net/autotest/client/tests/kvm/tests/ethtool.py", line 185, in run_ethtool
    supported_features = params.get("supported_features").split()
AttributeError: 'NoneType' object has no attribute 'split'



--Pradeep


Re: [Autotest] [PATCH 18/18] KVM test: Add subtest of testing offload by ethtool

2010-10-06 Thread pradeep
On Wed, 6 Oct 2010 14:26:46 +0530
pradeep psuri...@linux.vnet.ibm.com wrote:

 On Mon, 27 Sep 2010 18:44:04 -0400
 Lucas Meneghel Rodrigues l...@redhat.com wrote:
 
 
 ion,
  vm.get_mac_address(0))
  +supported_features = params.get(supported_features).split()
 
 I guess split this expects input.
 
 23:48:03 ERROR| Test failed: AttributeError: 'NoneType' object has no
 attribute 'split'
 
Please disregard my earlier mail; I was using rtl8139, and rtl8139 doesn't
support this.




 --Pradeep



Re: [Autotest] [PATCH 14/18] KVM test: Add a netperf subtest

2010-10-06 Thread pradeep

 
 
 This case can pass with rhel5.5 & rhel6.0; it was not tested with fedora.
 It would not be a problem with the testcase.
 
 I did not touch this problem, can you provide more debug info ? eg,
 tcpdump, ...

It seems like a RHEL 5.5 issue; it fails only with TCP_CRR.




Re: [PATCH 14/18] KVM test: Add a netperf subtest

2010-10-05 Thread pradeep
In the TODO list I find TCP_CRR and UDP_RR test case failures.

2) netperf

17:35:11 DEBUG| Execute netperf client test: /root/autotest/client/tests/netperf2/netperf-2.4.5/src/netperf -t TCP_CRR -H 10.16.74.142 -l 60 -- -m 1
17:35:45 ERROR| Fail to execute netperf test, protocol:TCP_CRR
17:35:45 DEBUG| Execute netperf client test: /root/autotest/client/tests/netperf2/netperf-2.4.5/src/netperf -t UDP_RR -H 10.16.74.142 -l 60 -- -m 1
17:36:06 ERROR| Fail to execute netperf test, protocol:UDP_RR




I haven't noticed any issues with UDP_RR, but with a RHEL 5.5 guest TCP_CRR
fails; with more recent RHEL distros it works fine. Need to figure out
whether it is a test issue or a RHEL 5.5 issue.








Re: cpulimit and kvm process

2010-10-01 Thread pradeep
On Fri, 1 Oct 2010 10:03:28 +0300
Mihamina Rakotomandimby miham...@gulfsat.mg wrote:

 Manao ahoana, Hello, Bonjour,
 
 I would like to launch several KVM guests on a multicore CPU.
 The number of the KVM process is over the number of physical cores.
 
 I would like to limit each KVM process to say... 10% of CPU
 
 I first use cpulimit 
 
 Would you know some better way to limit them? it's in order to avoid 4
 VM to hog all the 4 hardware cores.
 
 I also use all the libvirt tools, if there is any solution there.
 
 Misaotra, Thanks, Merci.
 


You should be able to limit cpu utilization using cgroups
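
A rough sketch of doing that with the cpu controller's CFS quota (cgroup v1
paths; the mount point, group name and the 10% figure are assumptions, and
the kernel needs CFS bandwidth control):

import os

def limit_pid_to_cpu_pct(pid, pct=10, group="kvm_limited"):
    # Cap the process at pct% of one CPU via cpu.cfs_quota_us/cpu.cfs_period_us,
    # then move the qemu-kvm pid into the new group.
    base = "/sys/fs/cgroup/cpu/%s" % group
    if not os.path.isdir(base):
        os.makedirs(base)
    period = 100000                      # 100 ms scheduling period
    quota = period * pct // 100          # e.g. 10000 us -> 10% of a core
    open(os.path.join(base, "cpu.cfs_period_us"), "w").write(str(period))
    open(os.path.join(base, "cpu.cfs_quota_us"), "w").write(str(quota))
    open(os.path.join(base, "tasks"), "w").write(str(pid))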



 



Re: [PATCH 16/18] KVM test: Improve vlan subtest

2010-09-30 Thread pradeep
On Mon, 27 Sep 2010 18:44:02 -0400
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 From: Amos Kong ak...@redhat.com
 
 This is an enhancement of existed vlan test. Rename the vlan_tag.py
 to vlan.py, it is more reasonable.
 . Setup arp from /proc/sys/net/ipv4/conf/all/arp_ignore
 . Multiple vlans exist simultaneously
 . Test ping between same and different vlans
 . Test by TCP data transfer, flood ping between same vlan
 . Maximal plumb/unplumb vlans
 

 +
 +vm.append(kvm_test_utils.get_living_vm(env,
 params.get("main_vm")))
 +vm.append(kvm_test_utils.get_living_vm(env, "vm2"))
 +
 +def add_vlan(session, id, iface="eth0"):
 +if session.get_command_status("vconfig add %s %s" % (iface,
 id)) != 0:
 +raise error.TestError("Fail to add %s.%s" % (iface, id))
Hi Lucas,

I got the error below with my guests.

With (2.6.32-71 kernel) guest

21:17:23 DEBUG| Sending command: vconfig add eth0 1
21:17:23 DEBUG| Command failed; status: 3, output: ERROR: trying to add
VLAN #1 to IF -:eth0:-  error: No such device


 21:17:25 ERROR| Test failed: TestError: Fail to add eth0.1




 -subnet = 192.168.123
 -vlans = 10 20
 +subnet = 192.168

My guest got a DHCP IP in 10.168.*.


With RHEL 5.5 guest


02:30:39 DEBUG| PING 192.168.1.2 (192.168.1.2) from 192.168.1.1 eth0.1:
56(84) bytes of data. 02:30:42 DEBUG| From 192.168.1.1 icmp_seq=1
Destination Host Unreachable 02:30:42 DEBUG| From 192.168.1.1
icmp_seq=2 Destination Host Unreachable


02:30:45 INFO | rem eth0.5
02:30:45 ERROR| Test failed: TestFail: eth0.1 ping 192.168.1.2
unexpected 02:30:45 DEBUG| Postprocessing VM 'vm1'...




Re: [PATCH 07/18] KVM test: Add a subtest jumbo

2010-09-30 Thread pradeep


Now the jumbo test works fine, since we added a static ARP entry.
thanks
Pradeep


Re: [PATCH 13/18] KVM test: Add a subtest of changing MAC address

2010-09-30 Thread pradeep
On Mon, 27 Sep 2010 18:43:59 -0400
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 From: Amos Kong ak...@redhat.com
 
 Test steps:
 
 1. Get a new mac from pool, and the old mac addr of guest.
 2. Execute the mac_change.sh in guest.
 3. Relogin to guest and query the interfaces info by `ifconfig`
 
 Changes from v3:

 
After successful mac address change, why are we trying to kill the
guest?

Please find the logs below.

03:26:13 DEBUG| 'kill_vm' specified; killing VM...
03:26:13 DEBUG| Destroying VM with PID 7420...
03:26:13 DEBUG| Trying to shutdown VM with shell command...
03:26:13 DEBUG| Trying to login with command 'ssh -o UserKnownHostsFile=/dev/null -o PreferredAuthentications=password -p 22 r...@10.168.0.7'
03:26:13 DEBUG| Got 'Are you sure...'; sending 'yes'
03:26:14 DEBUG| Got password prompt; sending '123456'
03:26:15 DEBUG| Got shell prompt -- logged in
03:26:15 DEBUG| Shutdown command sent; waiting for VM to go down...
03:26:16 DEBUG| (address cache) Adding cache entry: 9a:18:2d:12:78:aa --- 10.168.0.8
03:26:16 DEBUG| (address cache) Adding cache entry: 9a:18:2d:12:78:aa --- 10.168.0.8
03:26:33 DEBUG| (address cache) Adding cache entry: 02:18:0a:cc:7c:8c --- 10.168.0.6
03:26:33 DEBUG| (address cache) Adding cache entry: 02:18:0a:cc:7c:8c --- 10.168.0.6
03:26:38 DEBUG| (qemu) (Process terminated with status 0)
03:26:39 DEBUG| VM is down, freeing mac address.
03:26:39 DEBUG| Freeing MAC addr for NIC 20100929-231813-f5uA:0: 9a:18:2d:12:74:20
03:26:39 DEBUG| Terminating screendump thread...









Re: [PATCH 11/18] KVM test: Add a subtest of multicast

2010-09-30 Thread pradeep
On Mon, 27 Sep 2010 18:43:57 -0400
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 From: Amos Kong ak...@redhat.com
 
 Use 'ping' to test send/recive multicat packets. Flood ping test is
 also added. Limit guest network as 'bridge' mode, because multicast
 packets could not be transmitted to guest when using 'user' network.
 Add join_mcast.py for joining machine into multicast groups.
 
 Changes from v1:
 - Just flush the firewall rules with iptables -F
 


After copying join_mcast.py to the guest, autotest fails to run the command
below.

05:22:56 DEBUG| Sending command: python /tmp/join_mcast.py 20 225.0.0 1
05:22:56 DEBUG| Command failed; status: 147, output:
join_mcast_pid:12727

[1]+  Stopped python /tmp/join_mcast.py 20 225.0.0 1


00:17:31 DEBUG| Sending command: kill -s SIGCONT 2005
00:17:31 ERROR| Test failed: TestFail:  Ping return non-zero value PING
225.0.0.1 (225.0.0.1) from 9.126.89.203 rtl8139_0_5900: 56(84) bytes of
data.




Re: [PATCH 07/18] KVM test: Add a subtest jumbo

2010-09-29 Thread pradeep

 +Test the RX jumbo frame function of vnics:
 +
 +1) Boot the VM.
 +2) Change the MTU of guest nics and host taps depending on the
 NIC model.
 +3) Add the static ARP entry for guest NIC.
 +4) Wait for the MTU ok.
 +5) Verify the path MTU using ping.
 +6) Ping the guest with large frames.
 +7) Increment size ping.
 +8) Flood ping the guest with large frames.
 +9) Verify the path MTU.
 +10) Recover the MTU.
 +

Thanks for the new set of patches.

Jumbo fails again since the MTU is not as expected.

02:46:14 INFO | Removing the temporary ARP entry
02:46:14 DEBUG| Running 'arp -d 10.168.0.6 -i kvmbr0'
02:46:14 ERROR| Test failed: TestError: MTU is not as expected even
after 10 seconds 02:46:14 DEBUG| Postprocessing VM 'vm2'...
02:46:14 DEBUG| VM object found in environment
02:46:14 DEBUG| Terminating screendump thread...


 +logging.info("Removing the temporary ARP entry")
 +utils.run("arp -d %s -i %s" % (ip, bridge))
 

  I am just trying to understand why we are trying to remove the guest IP's
  ARP cache entry. The host IP will surely be in the ARP cache; try with the
  host IP on eth*. It works fine for me.

  

Thanks
Pradeep

 


Re: [PATCH 07/18] KVM test: Add a subtest jumbo

2010-09-29 Thread pradeep
On Wed, 29 Sep 2010 08:07:19 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:


 
 Not as expected even after 10 seconds. The idea is to change the MTU,
 wait a little while and check it again. Yes, I also got this problem
 doing my test of the patch. Need to check why that is happening.
 

From the guest I tried to remove the ARP entry from the cache, as mentioned in the code:

arp -d 10.168.0.6 -i kvmbr0

It never worked for me. I guess this is the reason for this error.

  
  Thanks
  Pradeep
  
   
 
 



Re: [PATCH 08/18] KVM test: Add basic file transfer test

2010-09-29 Thread pradeep
On Tue, 28 Sep 2010 15:24:25 +0200
Michael S. Tsirkin m...@redhat.com wrote:


  Signed-off-by: Amos Kong ak...@redhat.com
 
 Why scp_timeout? Not transfer_timeout?
 Is this really only scp file transfer to/from linux guest?
 Need to either name it so or generalize. Other things that
 need testing are NFS for linux guest, scp from windows, samba
 for linux and windows guests.


I too agree with transfer_timeout.
For now, this test fails; it looks like a test case issue.
I will figure it out and send you a patch.



--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [PATCH 07/18] KVM test: Add a subtest jumbo

2010-09-27 Thread pradeep
Hi Lucas

I tried different combinations for this jumbo test case; it didn't work for
me. I guess there is a problem while trying to remove the ARP entry.
An ARP entry can be removed from the cache using the IP and the network
interface (for example, eth0):

arp -d <ip> -i eth0


Error which i got:

23:06:14 DEBUG| Running 'arp -d 192.168.122.104 -i rtl8139_0_5900'
23:06:14 ERROR| Test failed: CmdError: Command arp -d 192.168.122.104
-i rtl8139_0_5900 failed, rc=255, Command returned non-zero exit status
* Command: 
arp -d 192.168.122.104 -i rtl8139_0_5900
Exit status: 255
Duration: 0.00138521194458

stderr:
SIOCDARP(pub): No such file or directory

When I try it manually, this one works for me.

Guest IP: 192.168.122.104

try arp -d 192.168.122.1 -i eth0


--Thanks
Pradeep
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [PATCH 07/18] KVM test: Add a subtest jumbo

2010-09-24 Thread pradeep
On Tue, 14 Sep 2010 19:25:32 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 +session.close()
 +logging.info(Removing the temporary ARP entry)
 +utils.run(arp -d %s -i %s % (ip, ifname))
 

Hi Lucas

I tried different combinations for this jumbo test case; it didn't work for
me. I guess there is a problem while trying to remove the ARP entry.
An ARP entry can be removed from the cache using the IP and the network
interface (for example, eth0):

arp -d <ip> -i eth0


Error which i got:

23:06:14 DEBUG| Running 'arp -d 192.168.122.104 -i rtl8139_0_5900'
23:06:14 ERROR| Test failed: CmdError: Command arp -d 192.168.122.104
-i rtl8139_0_5900 failed, rc=255, Command returned non-zero exit status
* Command: 
arp -d 192.168.122.104 -i rtl8139_0_5900
Exit status: 255
Duration: 0.00138521194458

stderr:
SIOCDARP(pub): No such file or directory

When I try it manually, this one works for me.

Guest IP: 192.168.122.104

try arp -d 192.168.122.1 -i eth0


--Thanks
Pradeep
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [PATCH 18/18] KVM test: Add subtest of testing offload by ethtool

2010-09-23 Thread pradeep
On Tue, 14 Sep 2010 19:25:43 -0300
Lucas Meneghel Rodrigues l...@redhat.com wrote:

 The latest case contains TX/RX/SG/TSO/GSO/GRO/LRO test.
 RTL8139 NIC doesn't support TSO, LRO, it's too old, so
 drop offload test from rtl8139. LRO, GRO are only
 supported by latest kernel, virtio nic doesn't support
 receive offloading function.
 
 Initialize the callbacks first and execute all the sub
 tests one by one, all the result will be check at the
 end. When execute this test, vhost should be enabled,
 then most of new features can be used. Vhost doesn't
 support VIRTIO_NET_F_MRG_RXBUF, so do not check large
 packets in received offload test.
 
 Transfer files by scp between host and guest, and match
 the newly opened TCP port with netstat. Capture the packet
 info with tcpdump; it contains the packet length.
 
 TODO: Query supported offload function by 'ethtool'
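
As a possible starting point for that TODO, the offload settings currently in
effect can be read back with 'ethtool -k' and parsed; a minimal sketch (my
own, not part of the patch set):

from autotest_lib.client.bin import utils

def get_offload_status(ifname):
    # Returns e.g. {'rx-checksumming': 'on', 'tcp segmentation offload': 'off'}
    output = utils.run("ethtool -k %s" % ifname).stdout
    status = {}
    for line in output.splitlines():
        if ':' in line:
            key, value = line.split(':', 1)
            status[key.strip()] = value.strip()
    return status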
 

Hi Lucas/Amos

Thanks for the patches. 

Please find below the error I hit when I try to run the ethtool test on
my guest (kernel 2.6.32-71.el6.i386), which runs on a host with kernel
2.6.32-71.el6.x86_64.


'module' object has no attribute 'get_linux_ifname'..



04:23:59 DEBUG| Got shell prompt -- logged in
04:23:59 INFO | Logged into guest 'vm1'
04:23:59 ERROR| Test failed: AttributeError: 'module' object has no
attribute 'get_linux_ifname' 04:23:59 DEBUG| Postprocessing VM 'vm1'...

The ethtool test is trying to access get_linux_ifname, which is not present in
kvm_test_utils.py. Am I missing any patches?
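
In case it helps anyone hitting the same AttributeError: the helper the test
expects presumably just maps a NIC's MAC address to the interface name inside
the guest. A rough equivalent (my guess at the intent, not the missing patch,
and assuming the usual guest session object) could be:

import re

def get_linux_ifname(session, mac_address):
    # Find the guest interface whose link-layer address matches mac_address.
    output = session.get_command_output("ip -o link show")
    for line in output.splitlines():
        # Lines look like: "2: eth0: <BROADCAST,...> ... link/ether 52:54:00:..."
        match = re.match(r"\d+:\s+([^:]+):", line)
        if match and mac_address.lower() in line.lower():
            return match.group(1).strip()
    return None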



Thanks
Pradeep


--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [PATCH] KVM test: Memory ballooning test for KVM guest

2010-04-15 Thread pradeep

Hi Lucas

Please ignore my earlier patch.
Find below the corrected patch with the suggested changes.


--SP


diff -purN autotest/client/tests/kvm/tests/balloon_check.py 
autotest-new/client/tests/kvm/tests/balloon_check.py
--- autotest/client/tests/kvm/tests/balloon_check.py1969-12-31 
19:00:00.0 -0500
+++ autotest-new/client/tests/kvm/tests/balloon_check.py2010-04-15 
18:50:09.0 -0400
@@ -0,0 +1,51 @@
+import re, string, logging, random, time
+from autotest_lib.client.common_lib import error
+import kvm_test_utils, kvm_utils
+
+def run_balloon_check(test, params, env):
+    """
+    Check memory ballooning:
+    1) Boot a guest
+    2) Change the memory between 60% and 95% of the guest's memory using
+       ballooning
+    3) Check the memory info
+
+    @param test: kvm test object
+    @param params: Dictionary with the test parameters
+    @param env: Dictionary with test environment.
+    """
+    vm = kvm_test_utils.get_living_vm(env, params.get("main_vm"))
+    session = kvm_test_utils.wait_for_login(vm)
+    fail = 0
+
+    # Check memory size
+    logging.info("Memory size check")
+    expected_mem = int(params.get("mem"))
+    actual_mem = vm.get_memory_size()
+    if actual_mem != expected_mem:
+        logging.error("Memory size mismatch:")
+        logging.error("Assigned to VM: %s" % expected_mem)
+        logging.error("Reported by OS: %s" % actual_mem)
+
+    # Check whether "info balloon" works or not
+    status, output = vm.send_monitor_cmd("info balloon")
+    if status != 0:
+        logging.error("qemu monitor command failed: info balloon")
+        fail += 1
+
+    # Reduce memory to a random size between 60% and 95% of actual memory
+    percent = random.uniform(0.6, 0.95)
+    new_mem = int(percent * actual_mem)
+    vm.send_monitor_cmd("balloon %s" % new_mem)
+    time.sleep(20)
+    status, output = vm.send_monitor_cmd("info balloon")
+    ballooned_mem = int(re.findall("\d+", output)[0])
+    if ballooned_mem != new_mem:
+        logging.error("memory ballooning failed while changing memory "
+                      "from %s to %s" % (actual_mem, new_mem))
+        fail += 1
+
+    # Check the test result
+    if fail != 0:
+        raise error.TestFail("Memory ballooning test failed")
+    session.close()
diff -purN autotest/client/tests/kvm/tests_base.cfg.sample 
autotest-new/client/tests/kvm/tests_base.cfg.sample
--- autotest/client/tests/kvm/tests_base.cfg.sample 2010-04-15 
09:14:10.0 -0400
+++ autotest-new/client/tests/kvm/tests_base.cfg.sample 2010-04-15 
18:50:35.0 -0400
@@ -171,6 +171,10 @@ variants:
 drift_threshold = 10
 drift_threshold_single = 3
 
+- balloon_check:  install setup unattended_install
+type = balloon_check
+extra_params += -balloon virtio
+
 - stress_boot:  install setup unattended_install
 type = stress_boot
 max_vms = 5


Re: [Autotest] [PATCH] KVM test: Memory ballooning test for KVM guest

2010-04-15 Thread pradeep

Lucas Meneghel Rodrigues wrote:




Hi Pradeep, I was reading the test once again while trying it myself,
some other ideas came to me. I spent some time hacking the test and sent
an updated patch with changes. Please let me know what you think; if you
are OK with them I'll commit it.

  

Hi Lucas

Patch looks fine to me. Thanks for your code changes.

--SP
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [PATCH] KVM test: Memory ballooning test for KVM guest

2010-04-12 Thread pradeep

sudhir kumar wrote:


On Fri, Apr 9, 2010 at 2:40 PM, pradeep psuri...@linux.vnet.ibm.com wrote:
  

Hi Lucas

Thanks for your comments.
Please find the patch, with suggested changes.

Thanks
Pradeep



Signed-off-by: Pradeep Kumar Surisetty psuri...@linux.vnet.ibm.com
---
diff -uprN autotest-old/client/tests/kvm/tests/balloon_check.py
autotest/client/tests/kvm/tests/balloon_check.py
--- autotest-old/client/tests/kvm/tests/balloon_check.py1969-12-31
19:00:00.0 -0500
+++ autotest/client/tests/kvm/tests/balloon_check.py2010-04-09
12:33:34.0 -0400
@@ -0,0 +1,47 @@
+import re, string, logging, random, time
+from autotest_lib.client.common_lib import error
+import kvm_test_utils, kvm_utils
+
+def run_balloon_check(test, params, env):
+
+Check Memory ballooning:
+1) Boot a guest
+2) Increase and decrease the memory of guest using balloon command from
monitor


Better to replace this description with Change the guest memory between X
and Y values.
Also, instead of using 0.6 and 0.95 below, better to use two variables and
take their values from the config file. This will give the user the
flexibility to narrow or widen the ballooning range.
  
Thanks for your suggestions. I don't think the user needs flexibility here:
if ballooning doesn't work for one set of values, it will not work for any
other. And here we are choosing between 60% and 95% of the actual value,
which is reasonable.
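
That said, if the range ever did need to be configurable, it would be a small
change, reading two hypothetical params with the current values as defaults:

import random

def pick_balloon_target(params, actual_mem):
    # Hypothetical config knobs; the defaults keep today's 60%-95% behaviour.
    low = float(params.get("balloon_percent_min", 0.6))
    high = float(params.get("balloon_percent_max", 0.95))
    return int(random.uniform(low, high) * actual_mem)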


  

+3) check memory info
+
+@param test: kvm test object
+@param params: Dictionary with the test parameters
+@param env: Dictionary with test environment.
+
+
+vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
+session = kvm_test_utils.wait_for_login(vm)
+fail = 0
+
+# Check memory size
+logging.info(Memory size check)
+expected_mem = int(params.get(mem))
+actual_mem = vm.get_memory_size()
+if actual_mem != expected_mem:
+logging.error(Memory size mismatch:)
+logging.error(Assigned to VM: %s % expected_mem)
+logging.error(Reported by OS: %s % actual_mem)
+
+#change memory to random size between 60% to 95% of actual memory
+percent = random.uniform(0.6, 0.95)
+new_mem = int(percent*expected_mem)
+vm.send_monitor_cmd(balloon %s %new_mem)



You may want to check if the command passed/failed. Older versions
might not support ballooning.
  


sure,  i will make changes here.
  

+time.sleep(20)


why 20 second sleep and why the magic number?
  
As soon as the balloon command is issued, it takes some time for the memory
ballooning to complete. If we check 'info balloon' immediately after starting
the ballooning, it will report odd values.

I just chose 20 seconds; it is not a huge time from a testing perspective.
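
An alternative to the fixed sleep would be to poll 'info balloon' until it
reports the requested size or a deadline passes; a sketch using the same
monitor interface (my own suggestion, not part of the patch):

import re, time

def wait_for_balloon(vm, expected_mem, timeout=60, step=2):
    # Poll 'info balloon' until it reports expected_mem or the timeout expires.
    end = time.time() + timeout
    while time.time() < end:
        status, output = vm.send_monitor_cmd("info balloon")
        if status == 0 and re.findall(r"\d+", output):
            if int(re.findall(r"\d+", output)[0]) == expected_mem:
                return True
        time.sleep(step)
    return False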
  

+status, output = vm.send_monitor_cmd(info balloon)


You might want to put this check before changing the memory.

  

sure, will make changes here too.

+if status != 0:
+logging.error(qemu monitor command failed: info balloon)
+
+balloon_cmd_mem = int(re.findall(\d+,output)[0])


A better variable name I can think of is ballooned_mem
  


will  change it...
  

+if balloon_cmd_mem != new_mem:
+logging.error(memory ballooning failed while changing memory to
%s %balloon_cmd_mem)
+   fail += 1
+
+#Checking for test result
+if fail != 0:


In case you are running multiple iterations and the 2nd iteration
fails you will always miss this condition.
  


  

+raise error.TestFail(Memory ballooning test failed )
+session.close()
diff -uprN autotest-old/client/tests/kvm/tests_base.cfg.sample
autotest/client/tests/kvm/tests_base.cfg.sample
--- autotest-old/client/tests/kvm/tests_base.cfg.sample 2010-04-09
12:32:50.0 -0400
+++ autotest/client/tests/kvm/tests_base.cfg.sample 2010-04-09
12:53:27.0 -0400
@@ -185,6 +185,10 @@ variants:
drift_threshold = 10
drift_threshold_single = 3

+- balloon_check:  install setup unattended_install boot
+type = balloon_check
+extra_params +=  -balloon virtio
+
- stress_boot:  install setup unattended_install
type = stress_boot
max_vms = 5
---



Rest all looks good
  

___
Autotest mailing list
autot...@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest







  


--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [PATCH] KVM test: Memory ballooning test for KVM guest

2010-04-12 Thread pradeep


Lucas


Any comments??


--sp



--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Autotest] [PATCH] KVM test: Memory ballooning test for KVM guest

2010-04-09 Thread pradeep


Hi Lucas

Thanks for your comments.
Please find the patch, with suggested changes.

Thanks
Pradeep


Signed-off-by: Pradeep Kumar Surisetty psuri...@linux.vnet.ibm.com
---
diff -uprN autotest-old/client/tests/kvm/tests/balloon_check.py 
autotest/client/tests/kvm/tests/balloon_check.py
--- autotest-old/client/tests/kvm/tests/balloon_check.py1969-12-31 
19:00:00.0 -0500
+++ autotest/client/tests/kvm/tests/balloon_check.py2010-04-09 
12:33:34.0 -0400
@@ -0,0 +1,47 @@
+import re, string, logging, random, time
+from autotest_lib.client.common_lib import error
+import kvm_test_utils, kvm_utils
+
+def run_balloon_check(test, params, env):
+
+Check Memory ballooning:
+1) Boot a guest
+2) Increase and decrease the memory of guest using balloon command from 
monitor
+3) check memory info
+
+@param test: kvm test object
+@param params: Dictionary with the test parameters
+@param env: Dictionary with test environment.
+
+
+vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
+session = kvm_test_utils.wait_for_login(vm)
+fail = 0
+
+# Check memory size
+logging.info(Memory size check)
+expected_mem = int(params.get(mem))
+actual_mem = vm.get_memory_size()
+if actual_mem != expected_mem:
+logging.error(Memory size mismatch:)
+logging.error(Assigned to VM: %s % expected_mem)
+logging.error(Reported by OS: %s % actual_mem)
+
+#change memory to random size between 60% to 95% of actual memory
+percent = random.uniform(0.6, 0.95)
+new_mem = int(percent*expected_mem)
+vm.send_monitor_cmd(balloon %s %new_mem)
+time.sleep(20)
+status, output = vm.send_monitor_cmd(info balloon)
+if status != 0:
+logging.error(qemu monitor command failed: info balloon)
+
+balloon_cmd_mem = int(re.findall(\d+,output)[0])
+if balloon_cmd_mem != new_mem:
+logging.error(memory ballooning failed while changing memory to %s 
%balloon_cmd_mem)  
+   fail += 1
+
+#Checking for test result
+if fail != 0:
+raise error.TestFail(Memory ballooning test failed )
+session.close()
diff -uprN autotest-old/client/tests/kvm/tests_base.cfg.sample 
autotest/client/tests/kvm/tests_base.cfg.sample
--- autotest-old/client/tests/kvm/tests_base.cfg.sample 2010-04-09 
12:32:50.0 -0400
+++ autotest/client/tests/kvm/tests_base.cfg.sample 2010-04-09 
12:53:27.0 -0400
@@ -185,6 +185,10 @@ variants:
 drift_threshold = 10
 drift_threshold_single = 3
 
+- balloon_check:  install setup unattended_install boot
+type = balloon_check
+extra_params +=  -balloon virtio
+
 - stress_boot:  install setup unattended_install
 type = stress_boot
 max_vms = 5
---


[Autotest] [PATCH] KVM test: Memory ballooning test for KVM guest

2010-02-11 Thread pradeep

Hi


This patch tests the memory ballooning functionality of a KVM guest.


Create a guest. Boot the guest with -balloon virtio

Try to increase/decrease the memory from qemu monitor and verify the 
changes.



Please find the attached patch.


Thanks
Pradeep

Signed-off-by: Pradeep Kumar Surisetty psuri...@linux.vnet.ibm.com
---
diff -uprN a/client/tests/kvm/tests/balloon_check.py 
b/client/tests/kvm/tests/balloon_check.py
--- a/client/tests/kvm/tests/balloon_check.py   2010-02-11 20:31:49.197953539 
+0530
+++ b/client/tests/kvm/tests/balloon_check.py   1970-01-01 05:30:00.0 
+0530
@@ -1,63 +0,0 @@
-import re, string, logging
-from autotest_lib.client.common_lib import error
-import kvm_test_utils, kvm_utils
-
-def run_balloon_check(test, params, env):
-
-Check Memory ballooning:
-1) Boot a guest
-2) Increase and decrease the memory of guest using balloon command from 
monitor
-3) check memory info
-
-@param test: kvm test object
-@param params: Dictionary with the test parameters
-@param env: Dictionary with test environment.
-
-
-vm = kvm_test_utils.get_living_vm(env, params.get(main_vm))
-session = kvm_test_utils.wait_for_login(vm)
-fail = 0
-
-# Check memory size
-ogging.info(Memory size check)
-expected_mem = int(params.get(mem))
-actual_mem = vm.get_memory_size()
-if actual_mem != expected_mem:
-logging.error(Memory size mismatch:)
-logging.error(Assigned to VM: %s % expected_mem)
-logging.error(Reported by OS: %s % actual_mem)
-
-#check info balloon command
-o, str = vm.send_monitor_cmd(info balloon)
-if o != 0:
-logging.error(qemu monitor command failed:)
-s =int(re.findall(\d+,str)[0])
-if s != actual_mem:
-logging.error(qemu monitor command failed: info balloon)
-raise error.TestFail(Memory ballooning failed while decreasing 
memory)
-
-#Reduce memory to 80% of actual memory
-new_mem = int (0.8 * actual_mem)
-vm.send_monitor_cmd( balloon %s  % new_mem)
-o, str = vm.send_monitor_cmd(info balloon)
-if o != 0:
-logging.error(qemu monitor command failed:)
-s = int(re.findall(\d+,str)[0])
-if s != new_mem:
-logging.error( memory ballooning failed )
-fail = 1
-
-# increase memory to actual size 
-vm.send_monitor_cmd( balloon %s  % new_mem )
-o,str = vm.send_monitor_cmd(info balloon)
-if o != 0:
-logging.error(qemu monitor command failed:)
-s = int(re.findall(\d+,str)[0])
-if s != actual_mem:
-logging.error(Memory ballooning failed while increasing memory)
-fail = 1
-
-#Checking for test result
-if fail != 0:
-raise error.TestFail(Memory ballooning test failed )
-session.close()
diff -uprN a/client/tests/kvm/tests_base.cfg.sample 
b/client/tests/kvm/tests_base.cfg.sample
--- a/client/tests/kvm/tests_base.cfg.sample2010-02-11 21:12:13.792955256 
+0530
+++ b/client/tests/kvm/tests_base.cfg.sample2010-02-11 20:24:06.408947096 
+0530
@@ -158,10 +158,6 @@ variants:
 drift_threshold = 10
 drift_threshold_single = 3
 
-- balloon_check:  install setup unattended_install
-type = balloon_check
-extra_params += -balloon virtio
-
 - stress_boot:  install setup unattended_install
 type = stress_boot
 max_vms = 5
@@ -320,7 +316,7 @@ variants:
 
 variants:
 - 8.32:
-no setup balloon_check
+no setup
 image_name = fc8-32
 cdrom = linux/Fedora-8-i386-DVD.iso
 md5sum = dd6c79fddfff36d409d02242e7b10189
@@ -331,7 +327,7 @@ variants:
 unattended_file = unattended/Fedora-8.ks
 
 - 8.64:
-no setup balloon_check
+no setup
 image_name = fc8-64
 cdrom = linux/Fedora-8-x86_64-DVD.iso
 md5sum = 2cb231a86709dec413425fd2f8bf5295
@@ -342,7 +338,6 @@ variants:
 unattended_file = unattended/Fedora-8.ks
 
 - 9.32:
-no balloon_check
 image_name = fc9-32
 cdrom = linux/Fedora-9-i386-DVD.iso
 md5sum = 72601f685ea8c808c303353d8bf4d307
@@ -353,7 +348,6 @@ variants:
 unattended_file = unattended/Fedora-9.ks
 
 - 9.64:
-no balloon_check
 image_name = fc9-64
 cdrom = linux/Fedora-9-x86_64-DVD.iso
 md5sum = 05b2ebeed273ec54d6f9ed3d61ea4c96
@@ -364,7 +358,6 @@ variants:
 unattended_file = unattended/Fedora-9.ks
 
 - 10.32


SLES 10 SP1 guest install fails with SCSI

2009-07-13 Thread Pradeep K Surisetty


I tried to install a SLES 10 SP1 guest using qemu. The install halts in the
early stages of installation with SCSI, but with IDE the installation is
successful. It seems to be a SCSI emulation issue.


Host OS Info :
CPU Model  :  Dual-Core AMD Opteron(tm) Processor 2218

Host OS:
fedora11 rawhide

Host Kernel :
uname -a
Linux mls21a 2.6.31rc2

Steps to reproduce:

1. Create qcow2
qemu-img create -f qcow2 sles10.qcow2 6G

3. Convert sles10.qcow2 to sles10.raw

qemu-img convert -f raw sles10.qcow2 sles10.raw

4. Start the guest install
qemu-kvm  -cdrom SLES-10-SP1-DVD-i386-RC5-DVD1.iso  -drive
file=sles10.raw,if=scsi -m 512 -smp 2


Observations on host:
ls_scsi error

Observations on guest:
guest install halts


Let me know if any other information is required.

Thanks
-Pradeep


--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Fw: [KVM] soft lockup with RHEL 5.3 guest remote migration

2009-05-28 Thread Pradeep K Surisetty

Find the total dmesg for RHEL5.3 guest

http://pastebin.com/f7e22fd1a

Regards
Pradeep


----- Forwarded by Pradeep K Surisetty/India/IBM on 05/28/2009 12:13 PM -----

From:    Pradeep K Surisetty/India/IBM
To:      kvm@vger.kernel.org
Date:    05/27/2009 10:07 AM
Cc:      Pavan Naregundi/India/i...@ibmin, Sachin P Sant/India/i...@ibmin
Subject: [KVM] soft lockup with RHEL 5.3 guest remote migration


I tried to migrate a RHEL 5.3 guest to a remote machine. It fails to migrate,
with the soft lockup message below on the guest.

I haven't faced this issue with qemu-kvm-0.10.1; remote migration fails with
qemu-kvm-0.10.4.


=
BUG: soft lockup - CPU#0 stuck for 10s! [init:1]

Pid: 1, comm: init
EIP: 0060:[c044d1e9] CPU: 0
EIP is at handle_IRQ_event+0x39/0x8c
 EFLAGS: 0246Not tainted  (2.6.18-125.el5 #1)
EAX: 000c EBX: c06e7480 ECX: c79a8da0 EDX: c0734fb4
ESI: c79a8da0 EDI: 000c EBP:  DS: 007b ES: 007b
CR0: 8005003b CR2: 08198f00 CR3: 079c7000 CR4: 06d0
 [c044d2c0] __do_IRQ+0x84/0xd6
 [c044d23c] __do_IRQ+0x0/0xd6
 [c04074ce] do_IRQ+0x99/0xc3
 [c0405946] common_interrupt+0x1a/0x20
 [c0428b6f] __do_softirq+0x57/0x114
 [c04073eb] do_softirq+0x52/0x9c
 [c04059d7] apic_timer_interrupt+0x1f/0x24
 [c053a6ae] add_softcursor+0x13/0xa2
 [c053ab36] set_cursor+0x3a/0x5c
 [c053ab9e] con_flush_chars+0x27/0x2f
 [c0533a31] write_chan+0x1c5/0x298
 [c041e3d7] default_wake_function+0x0/0xc
 [c05315ea] tty_write+0x147/0x1d8
 [c053386c] write_chan+0x0/0x298
 [c0531ffd] redirected_tty_write+0x1c/0x6c
 [c0531fe1] redirected_tty_write+0x0/0x6c
 [c0472cff] vfs_write+0xa1/0x143
 [c04732f1] sys_write+0x3c/0x63
 [c0404f17] syscall_call+0x7/0xb
 ===

Source machine:

Machine: x3850
Kernel: 2.6.30-rc6-git4
qemu-kvm-0.10.4


Destination machine:

Machine: LS21
Kernel: 2.6.30-rc6-git3
qemu-kvm-0.10.4


Steps to Reproduce:

1. Install RHEL 5.3 guest on x3850 with above mentioned kernel on qemu
version
2. NFS mount the dir containing the guest image on Destination(LS21)
3. Boot the rhel guest
4. Wait for guest migration on  LS21 by following command
qemu-system-x86_64 -boot c rhel5.3.raw -incoming tcp:0:

5. Start the migration on source(x3850)
  a. On guest press Alt+Ctl+2 to switch to qemu prompt
  b. Run migrate -d tcp:'Destination IP':
6. Above command hangs the guest for around 3 or 4 minutes and gives the
call trace on dmesg showing the soft lockup on the guest.


A few other details:

I tried to migrate from one LS21 to another LS21 and faced the same failure.
I haven't faced this issue when I was using qemu-kvm-0.10.1.
Migration of a RHEL 5.3 guest from one LS21 to another succeeded with
qemu-kvm-0.10.1.


Regards
Pradeep


--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Fw: [KVM] soft lockup with RHEL 5.3 guest remote migration

2009-05-28 Thread Pradeep K Surisetty

I tried local migration of a RHEL 5.3 guest on the x3850.
After the local migration, both the source and destination are in an active
state, and the guest leaves a soft lockup oops.

Thanks
Pradeep

----- Forwarded by Pradeep K Surisetty/India/IBM on 05/28/2009 04:01 PM -----

From:    Pradeep K Surisetty/India/IBM
To:      kvm@vger.kernel.org
Date:    05/28/2009 12:13 PM
Cc:      Pavan Naregundi/India/i...@ibmin, Sachin P Sant/India/i...@ibmin
Subject: Fw: [KVM] soft lockup with RHEL 5.3 guest remote migration



Find the total dmesg for RHEL5.3 guest

http://pastebin.com/f7e22fd1a

Regards
Pradeep


----- Forwarded by Pradeep K Surisetty/India/IBM on 05/28/2009 12:13 PM -----

From:    Pradeep K Surisetty/India/IBM
To:      kvm@vger.kernel.org
Date:    05/27/2009 10:07 AM
Cc:      Pavan Naregundi/India/i...@ibmin, Sachin P Sant/India/i...@ibmin
Subject: [KVM] soft lockup with RHEL 5.3 guest remote migration



I tried to migrate a RHEL 5.3 guest to a remote machine. It fails to migrate,
with the soft lockup message below on the guest.

I haven't faced this issue with qemu-kvm-0.10.1; remote migration fails with
qemu-kvm-0.10.4.


=
BUG: soft lockup - CPU#0 stuck for 10s! [init:1]

Pid: 1, comm: init
EIP: 0060:[c044d1e9] CPU: 0
EIP is at handle_IRQ_event+0x39/0x8c
 EFLAGS: 0246Not tainted  (2.6.18-125.el5 #1)
EAX: 000c EBX: c06e7480 ECX: c79a8da0 EDX: c0734fb4
ESI: c79a8da0 EDI: 000c EBP:  DS: 007b ES: 007b
CR0: 8005003b CR2: 08198f00 CR3: 079c7000 CR4: 06d0
 [c044d2c0] __do_IRQ+0x84/0xd6
 [c044d23c] __do_IRQ+0x0/0xd6
 [c04074ce] do_IRQ+0x99/0xc3
 [c0405946] common_interrupt+0x1a/0x20
 [c0428b6f] __do_softirq+0x57/0x114
 [c04073eb] do_softirq+0x52/0x9c
 [c04059d7] apic_timer_interrupt+0x1f/0x24
 [c053a6ae] add_softcursor+0x13/0xa2
 [c053ab36] set_cursor+0x3a/0x5c
 [c053ab9e] con_flush_chars+0x27/0x2f
 [c0533a31] write_chan+0x1c5/0x298
 [c041e3d7] default_wake_function+0x0/0xc
 [c05315ea] tty_write+0x147/0x1d8
 [c053386c] write_chan+0x0/0x298
 [c0531ffd] redirected_tty_write+0x1c/0x6c
 [c0531fe1] redirected_tty_write+0x0/0x6c
 [c0472cff] vfs_write+0xa1/0x143
 [c04732f1] sys_write+0x3c/0x63
 [c0404f17] syscall_call+0x7/0xb
 ===

Source machine:

Machine: x3850
Kernel: 2.6.30-rc6-git4
qemu-kvm-0.10.4


Destination machine:

Machine: LS21
Kernel: 2.6.30-rc6-git3
qemu-kvm-0.10.4


Steps to Reproduce:

1. Install RHEL 5.3 guest on x3850 with above mentioned kernel on qemu
version
2. NFS mount the dir containing the guest image on Destination(LS21)
3. Boot the rhel guest
4. Wait for guest migration on  LS21 by following command
qemu-system-x86_64 -boot c rhel5.3.raw -incoming tcp:0:

5. Start the migration on source(x3850)
  a. On guest press Alt+Ctl+2 to switch to qemu prompt
  b. Run migrate -d tcp:'Destination IP':
6. Above command hangs the guest for around 3 or 4 minutes and gives the
call trace on dmesg showing the soft lockup on the guest.

[KVM] soft lockup with RHEL 5.3 guest remote migration

2009-05-26 Thread Pradeep K Surisetty

I tried to migrate a RHEL 5.3 guest to a remote machine. It fails to migrate,
with the soft lockup message below on the guest.

I haven't faced this issue with qemu-kvm-0.10.1; remote migration fails with
qemu-kvm-0.10.4.


=
BUG: soft lockup - CPU#0 stuck for 10s! [init:1]

Pid: 1, comm: init
EIP: 0060:[c044d1e9] CPU: 0
EIP is at handle_IRQ_event+0x39/0x8c
 EFLAGS: 0246Not tainted  (2.6.18-125.el5 #1)
EAX: 000c EBX: c06e7480 ECX: c79a8da0 EDX: c0734fb4
ESI: c79a8da0 EDI: 000c EBP:  DS: 007b ES: 007b
CR0: 8005003b CR2: 08198f00 CR3: 079c7000 CR4: 06d0
 [c044d2c0] __do_IRQ+0x84/0xd6
 [c044d23c] __do_IRQ+0x0/0xd6
 [c04074ce] do_IRQ+0x99/0xc3
 [c0405946] common_interrupt+0x1a/0x20
 [c0428b6f] __do_softirq+0x57/0x114
 [c04073eb] do_softirq+0x52/0x9c
 [c04059d7] apic_timer_interrupt+0x1f/0x24
 [c053a6ae] add_softcursor+0x13/0xa2
 [c053ab36] set_cursor+0x3a/0x5c
 [c053ab9e] con_flush_chars+0x27/0x2f
 [c0533a31] write_chan+0x1c5/0x298
 [c041e3d7] default_wake_function+0x0/0xc
 [c05315ea] tty_write+0x147/0x1d8
 [c053386c] write_chan+0x0/0x298
 [c0531ffd] redirected_tty_write+0x1c/0x6c
 [c0531fe1] redirected_tty_write+0x0/0x6c
 [c0472cff] vfs_write+0xa1/0x143
 [c04732f1] sys_write+0x3c/0x63
 [c0404f17] syscall_call+0x7/0xb
 ===

Source machine:

Machine: x3850
Kernel: 2.6.30-rc6-git4
qemu-kvm-0.10.4


Destination machine:

Machine: LS21
Kernel: 2.6.30-rc6-git3
qemu-kvm-0.10.4


Steps to Reproduce:

1. Install RHEL 5.3 guest on x3850 with above mentioned kernel on qemu
version
2. NFS mount the dir containing the guest image on Destination(LS21)
3. Boot the rhel guest
4. Wait for guest migration on  LS21 by following command
qemu-system-x86_64 -boot c rhel5.3.raw -incoming tcp:0:

5. Start the migration on source(x3850)
  a. On guest press Alt+Ctl+2 to switch to qemu prompt
  b. Run migrate -d tcp:'Destination IP':
6. Above command hangs the guest for around 3 or 4 minutes and gives the
call trace on dmesg showing the soft lockup on the guest.
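
For step 5b, the source-side monitor can also report whether the transfer is
still making progress, which helps tell a stalled migration from a merely
slow one (the port below is just a placeholder):

(qemu) migrate -d tcp:<destination IP>:<port>
(qemu) info migrate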


A few other details:

I tried to migrate from one LS21 to another LS21 and faced the same failure.
I haven't faced this issue when I was using qemu-kvm-0.10.1.
Migration of a RHEL 5.3 guest from one LS21 to another succeeded with
qemu-kvm-0.10.1.


Regards
Pradeep


--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


SLES 10 SP1 guest install fails with scsi drive

2009-05-20 Thread Pradeep K Surisetty

I tried to install a SLES 10 SP1 guest using qemu. The install halts in the
early stages of installation when the drive interface is SCSI, but with an
IDE interface the installation is successful. It seems to be a SCSI emulation
issue.

Host OS Info :
CPU Model  :  Dual-Core AMD Opteron(tm) Processor 2218

Host OS:
fedora11 rawhide

Host Kernel :
uname -a
Linux mls21a 2.6.30-rc5 #1 SMP Wed May 13 18:11:51 IST 2009 x86_64 x86_64
x86_64 GNU/Linux

Steps to reproduce:

1. Create qcow2
qemu-img create -f qcow2 sles10.qcow2 6G

3. Convert sles10.qcow2 to sles10.raw

qemu-img convert -f raw sles10.qcow2 sles10.raw

4. Start the guest install
qemu-kvm  -cdrom SLES-10-SP1-DVD-i386-RC5-DVD1.iso  -drive
file=sles10.raw,if=scsi -m 512 -smp 2



Let me know if any other information is required.

Thanks
-Pradeep

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


qemu sles 11 guest failed to boot up after the guest install

2009-05-20 Thread Pradeep K Surisetty

I tried to install a SLES 11 guest using qemu. It doesn't boot up after a
successful install, dropping to a minimal shell. It seems to be unable to
find the disk.

Host OS Info :
CPU Model  :  Dual-Core AMD Opteron(tm) Processor 2218

Host OS: fedora11 rawhide

Host Kernel :

uname -a
Linux mls21a 2.6.30-rc5 #1 SMP Wed May 13 18:11:51 IST 2009 x86_64 x86_64
x86_64 GNU/Linux

Steps to reproduce:

1. Create qcow2
qemu-img create -f qcow2 sles11.qcow2 6G

2. Convert sles11.qcow2 to sles11.raw

qemu-img convert -f raw sles11.qcow2 sles11.raw

3. Start the guest install
qemu-kvm -cdrom SLES-11-DVD-x86_64-GM-DVD1.iso -drive
file=sles11.raw,if=scsi,cache=off -m 512 -smp 2

4. After the guest install, boot up from the image.
qemu-kvm -boot c sles11.raw

This command 4 fails to boot the guest as mentioned above, dropping to a
minimal shell.


Let me know if any other information is required.

Thanks
Pradeep

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: qemu sles 11 guest failed to boot up after the guest install

2009-05-20 Thread Pradeep K Surisetty


Avi Kivity a...@redhat.com wrote on 05/20/2009 04:21:21 PM:



  4. After the guest install boot up from the image.
  qemu-kvm -boot c sles11.raw
 
  this command 4 fails to boot up the guest as mentioned above, dropping
to
  minimum shell.

 Try qemu-kvm -drive file=sles11.raw,boot=on

I tried the above command; it still drops to a minimal shell.

Regards
Pradeep



--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html