On 10/17/2012 04:42 AM, Plamen Dimitrov wrote:
> Thank you guys for getting involved with this issue.

>> Yes, certain tests and methods only support main_vm, and I'm not
>> surprised unattended install is one of them. If this was "fixed"
>> would it be a better solution for you?
> I already thought about this and even did a fix using the "vms"
> parameter that made it possible. However, after rethinking what would
> be the best approach from the perspective of design and usability, I
> removed this fix. The main reason for this is that targeting a single
> virtual machine while leaving the others untouched gives more freedom
> to everybody using this test. This is how I came to the conclusion
> that the best option would be not to touch the test itself but rather
> use the "main_vm" parameter. It is a completely sequential approach,
> and we can see the test results for each vm independently.

Ahh, okay, thanks for the clarification.


> So to recap, I am not really thinking about installing multiple vms
> in a single test, but they still need the install test performed on
> them at some time. And as mentioned in the previous e-mails, I get a
> few errors, from which I could identify the following:


> 1) Some parameters of the VM are changed after installation, forcing
> the cache to be reset. I don't know what could be changed, and this
> occurs for each VM, at least when multiple vms are active:
>
> 10/17 10:08:23 DEBUG|   virt_vm:0444| VM params in env don't match requested,
> restarting.

Essentially, what this means is that the 'vm1' qemu-kvm command-line string, as produced from the current params plus the network info cache (/tmp/address_pool), does not match the command-line string produced by the same method from the 'vm1' state (and params) stored in the env file.

Most likely, something in the params changed for the named vm, though it's possible the network info cache (/tmp/address_pool) isn't working properly.

TBH, there are a LOT of interacting parts in this single call (vm.needs_restart()) which aren't always clear. Some of the parts rely on params to encode state more than they should. It's a sensitive/fragile area of the code.
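
To make that concrete, here's a minimal sketch of the comparison, with made-up names (the real logic lives in virt_vm.py and touches a lot more state than this):

    # Hypothetical sketch of the comparison described above -- NOT the
    # actual virt_vm.py implementation, which involves more moving parts
    # (network info cache, device state, etc.).
    def vm_needs_restart(requested_cmdline, saved_cmdline):
        """Compare the qemu command line built from the current params
        against the one rebuilt from the VM state stored in the env
        file. Any difference forces a restart."""
        if saved_cmdline is None:
            return True  # no saved state at all: must (re)create the VM
        return requested_cmdline != saved_cmdline

    # E.g. if params changed 'mem' from 1024 to 2048, the rebuilt
    # command line differs and the VM gets restarted:
    assert vm_needs_restart("qemu-kvm -m 2048 ...", "qemu-kvm -m 1024 ...")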


> Solution: I guess we can live with a few extra restarts of each VM,
> especially if it is only during installation.

Do you really need to perform a fresh/full installation each time? Maybe you could use the image_copy "test" instead?
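
For context, the idea behind image_copy is simply to clone a pristine, pre-installed "golden" image instead of re-running the installer. A minimal sketch of that idea, with made-up paths (the real test does more bookkeeping):

    # The image_copy idea in a nutshell -- paths here are hypothetical,
    # and the real test in the suite does more bookkeeping.
    import shutil

    GOLDEN = "/var/lib/images/rhel6-golden.qcow2"  # installed once, kept pristine
    TARGET = "/var/lib/images/vm1.qcow2"           # the image the VM boots from

    shutil.copyfile(GOLDEN, TARGET)  # seconds, versus a full unattended install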



> 2) The NIC of the second VM triggers a warning while the first one is
> being installed and reset.
>
> 10/17 10:15:20 WARNI|env_proces:0461| Could not verify DHCP lease:
> 02:00:00:00:00:52 -->  172.16.1.172
> 10/17 10:15:20 DEBUG|    kvm_vm:1806| Destroying VM with PID 4708
> 10/17 10:15:20 DEBUG|    kvm_vm:1832| Trying to kill VM with monitor command
> 10/17 10:16:20 INFO |   aexpect:0786| [qemu output] (Process terminated with
> status 0)
> 10/17 10:16:21 DEBUG|    kvm_vm:1849| VM is down

This is probably harmless. The code is in utils_misc, verify_ip_address_ownership(). It uses the ARP cache to try to pin an IP address to a list of MAC addresses (possibly from different VMs). There are lots of ways this check can break, hence it's a warning, not an error.

IIRC, there's also _NO_ locking done on the address_cache database, so it's possible something is clashing (though I have yet to see it).
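
Roughly, the check works like this (a simplified sketch assuming /proc/net/arp parsing; this is not the actual utils_misc code):

    # Simplified sketch of the ARP-cache approach -- NOT the actual
    # utils_misc.verify_ip_address_ownership() code. Read the kernel's
    # ARP table and check whether the IP maps to one of our MACs.
    def ip_owned_by_macs(ip, macs):
        with open("/proc/net/arp") as arp:
            next(arp)  # skip the header line
            for line in arp:
                fields = line.split()
                # fields: IP address, HW type, Flags, HW address, Mask, Device
                if fields[0] == ip:
                    return fields[3].lower() in [m.lower() for m in macs]
        return False  # IP not in the cache (yet) -- one way the check "breaks"

    # The warning above corresponds to something like:
    # ip_owned_by_macs("172.16.1.172", ["02:00:00:00:00:52"]) -> False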


> Solution: The second VM still does not have an OS installed on it, therefore a

Ahh, this is why you get the above warning: no IP was requested for it yet.

> session to shut it down encounters problems. I guess some tweaking of

This may be expected if graceful=True in the call to vm.shutdown(), but it should eventually go down.

> parameters should do the trick.
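
For what it's worth, the usual pattern is: try an in-guest shutdown over a session, and fall back to killing the process when no session can be opened -- exactly the situation for a VM with no OS. A hedged sketch with made-up method names (not the real kvm_vm API):

    # Hedged sketch of graceful-shutdown-with-fallback; method names are
    # hypothetical, not the real kvm_vm API.
    def shutdown_vm(vm, timeout=60):
        try:
            session = vm.login(timeout=timeout)  # fails if the guest has no OS/IP
            session.sendline("shutdown -h now")  # graceful, in-guest shutdown
            session.close()
        except Exception:
            # No usable session (e.g. OS not installed yet): fall back to
            # the monitor 'quit' / kill path, which always brings it down.
            vm.destroy()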


> @Lucas: You said you have never thought about installing multiple VMs
> in a single test until now. What is your usual way of installing them
> in separate tests?


--
Chris Evich, RHCA, RHCE, RHCDS, RHCSS
Quality Assurance Engineer
e-mail: cevich + `@' + redhat.com o: 1-888-RED-HAT1 x44214

_______________________________________________
Autotest-kernel mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/autotest-kernel
